<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Reasoning strategies for diagnostic probability estimates in causal contexts: Preference for defeasible deduction over abduction?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jean-Louis Stilgenbauer</string-name>
          <email>jlstilgenbauer@ipc-paris.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jean Baratgin</string-name>
          <email>jean.baratgin@univ-paris8.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Igor Douven</string-name>
          <email>igor.douven@paris-sorbonne.fr</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>9 rue d'Ulm</institution>
          ,
          <addr-line>75005 Paris -</addr-line>
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institut Jean Nicod (IJN), École Normale Supérieure</institution>
          ,
          <addr-line>ENS</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Sciences</institution>
          ,
          <addr-line>Normes, Décision (SND)</addr-line>
          ,
          <institution>CNRS/Université Paris-Sorbonne</institution>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>rue Victor Cousin</institution>
          ,
          <addr-line>75005 Paris -</addr-line>
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Recently, Meder, Mayrhofer, and Waldmann [1,2] have proposed a model of causal diagnostic reasoning that predicts an interference of the predictive probability, Pr(Effect | Cause), in estimates of the diagnostic probability, Pr(Cause | Effect); specifically, the interference leads to an underestimation bias of the diagnostic probability. The objective of the experiment reported in the present paper was twofold. A first aim was to test for the existence of the underestimation bias in individuals. Our results indicate the presence of an underestimation of the diagnostic probability that depends on the value of the predictive probability. Secondly, we investigated whether this bias was related to the type of estimation strategy followed by participants. We tested two main strategies: abductive inference and defeasible deduction. Our results reveal that the underestimation of the diagnostic probability is more pronounced under abductive inference than under defeasible deduction. Our data also suggest that defeasible deduction is the most natural reasoning strategy for individuals to estimate Pr(Cause | Effect).</p>
      </abstract>
      <kwd-group>
        <kwd>diagnostic inference</kwd>
        <kwd>defeasible deduction</kwd>
        <kwd>abduction</kwd>
        <kwd>causal Bayes nets</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>All supplementary information as well as all materials, data, and scripts for the
statistical analyses can be downloaded from:
https://osf.io/7yc92/?view_only=57085866d97d48a4bbd75938d3b4a6af.</p>
      <p>In so-called diagnostic reasoning, causal inferences go from Effect to Cause:
the reasoner estimates the probability of Cause given Effect, Pr(c | e). In so-called predictive
reasoning, causal inferences go from Cause to Effect; here the aim is to estimate
the probability of Effect given Cause, Pr(e | c). The present paper focuses on the
former type of reasoning. Diagnostic reasoning is emblematic not only in the
field of medicine, but also in our everyday lives. For example, we reason from
effects to causes when we try to understand why our car refuses to start or why
we failed the final year exam. Here, we focus on the most elementary type of
diagnostic inference, which involves a single cause–effect relation between two
binary events (i.e., events that either occur or do not occur).</p>
      <p>
        In the experiment to be reported, we show that individuals’ reasoning
corroborates the predictions of the Structure Induction Model (SIM) recently proposed
by Meder, Mayrhofer, and Waldmann [
        <xref ref-type="bibr" rid="ref1 ref2">1,2</xref>
        ]. Whereas it is usually assumed that
diagnostic judgments should merely be a function of the empirical conditional
probability Pr(c | e), the SIM predicts that diagnostic inferences are also
systematically affected by the empirical predictive probability Pr(e | c).
      </p>
      <p>We generalize this result by showing that the influence of the predictive
probability on the estimation of the diagnostic probability holds whatever
reasoning strategy the participants follow. Two strategies have particularly
caught our attention: the estimation of Pr(c | e) by abduction, on the one hand, and
its estimation through defeasible deduction, on the other (see Section 3).</p>
    </sec>
    <sec id="sec-1-1">
      <title>Estimate of Pr(c | e) via Causal Bayes Nets</title>
      <p>
        Causal Bayes Nets (CBNs) are today the dominant type of model for formalizing
causal inferences [
        <xref ref-type="bibr" rid="ref3 ref4 ref5 ref6 ref7">3,4,5,6,7</xref>
        ]. Applied to basic diagnostic inferences, CBNs can
define basic causal structures with three parameters.
      </p>
      <p>The first parameter is Pc, which represents the cause base rate, considered here as the
prior probability of the cause. Knowledge of this parameter generally comes from
an external source; for instance, it could be reported by an expert. The second is
the causal power of the target cause, Wc. This is an unobservable parameter that
can only be estimated from the data provided by nature.1 Finally, to be fully
characterized, the network must be complemented by a nuisance parameter that
represents all possible alternative causes. Associated with the nuisance parameter
is Wa, which is analogous to Wc. It corresponds to the aggregate formed from
the causal powers and the base rates of all possible alternative causes, and it
represents the probability that the effect is present while the target cause is
absent.</p>
      <p>
        1 According to power PC theory [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], causal power represents the probability of a cause,
acting alone, to produce an effect: Wc = [Pr(e | c) − Pr(e | ¬c)] / [1 − Pr(e | ¬c)]. Given
that causes are never observed alone, this is a theoretical measure that can only be
estimated from data.
      </p>
      <p>
        The activation function (noisy-OR type) of the causal network implies that
the effect can be generated independently by the target cause, or by the amalgam
of the alternative causes, or by these two variables simultaneously [
        <xref ref-type="bibr" rid="ref10 ref3 ref8">3,8,10</xref>
        ]. In this context, Pr(c | e) is a function of the three parameters defined above, as
follows:
      </p>
      <p>Pr(c | e) = Pc (Wc + Wa − Wc Wa) / (Pc Wc + Wa − Pc Wc Wa). (1)</p>
    </sec>
    <sec id="sec-2">
      <title>The Structure Induction Model</title>
      <p>Meder et al.’s previously mentioned SIM builds on the CBN literature. In their
model, the diagnostic probability depends on the parameters Pc, Wc, Wa, but
also on the uncertainty concerning the causal structure itself. An individual in a
diagnosis situation is often uncertain about which world she inhabits. Figure 1
illustrates the situation in which she does not know whether she is in a world
where there is a causal connection between the cause of interest and the effect
(structure S1) or whether she is in a world where such a link does not exist
(structure S0). In such a situation, estimating the diagnostic probability follows
a three-step process:
1. Based on available data D, the first step consists in estimating, by Bayesian
inference, the said parameters for each structure, S0 and S1, separately. We
will call θ the set of parameters to be estimated. For S0, there will be a guess
of Pc and Wa (the value of Wc is set to 0). For the S1 structure, there will
be a guess of Pc, Wc, and Wa. The posterior probability distribution of the
parameters for a structure Si is given by</p>
      <p>Pr(θ | D ; Si) = Pr(D | θ ; Si) Pr(θ | Si) / Pr(D | Si), (2)
where the prior probabilities of the parameters, Pr(θ | Si), are assumed to
follow a Beta(1, 1) distribution. Under a noisy-OR hypothesis, the likelihoods
of the data given the parameter values for a structure Si, Pr(D | θ ; Si), are
given by Equations 3 (structure S0) and 4 (structure S1):2</p>
      <p>Pr(D | θ ; S0) = [(1 − Pc)(1 − Wa)]^N(¬c, ¬e) · [(1 − Pc)Wa]^N(¬c, e) ·
[Pc(1 − Wa)]^N(c, ¬e) · [Pc Wa]^N(c, e) ; (3)</p>
      <p>Pr(D | θ ; S1) = [(1 − Pc)(1 − Wa)]^N(¬c, ¬e) · [(1 − Pc)Wa]^N(¬c, e) ·
[Pc(1 − Wc)(1 − Wa)]^N(c, ¬e) · [Pc(Wc + Wa − Wc Wa)]^N(c, e). (4)</p>
      <p>2. The next step consists in estimating the probabilities of the causal structures
Si themselves. The calculation of these probabilities is carried out for each of
the structures S0 and S1 separately, on the basis of the available empirical
data D and the parameters estimated in the previous step. This leads to
the posterior probabilities of each structure given the data, Pr(S0 | D) and
Pr(S1 | D),3 which are given by</p>
      <p>Pr(Si | D) = Pr(D | Si) Pr(Si) / Pr(D), (5)
where Pr(Si) is the prior probability of structure Si (set to 0.5 for both
structures) and the probability of the data is given by</p>
      <p>Pr(D) = ∑i Pr(D | Si) Pr(Si). (6)</p>
      <p>Pr(D | Si) is the likelihood of the data given the structure and is computed
by integrating over the likelihood functions of the parameters (see Equations 3
and 4) under structure Si:</p>
      <p>Pr(D | Si) = ∫∫∫ Pr(D | θ ; Si) Pr(θ | Si) dθ, (7)
where Pr(θ | Si) denotes the joint prior probability over the structures’
parameters. For both structures, this probability is assumed to follow a Beta(1, 1)
distribution.</p>
      <p>3. Finally, for S0 and S1, from the structures’ parameters and from the posterior
probabilities of the structures, one calculates two diagnostic probabilities.
These probabilities are computed by integrating over the parameters’ values
weighted by their posterior probabilities:</p>
      <p>Pr(c | e ; D, Si) = ∫∫∫ Pr(c | e ; θ, Si) [Pr(D | θ ; Si) Pr(θ | Si) / Pr(D | Si)] dθ, (8)
with</p>
      <p>Pr(c | e ; θ, S0) = Pc (9)
and</p>
      <p>Pr(c | e ; θ, S1) = (Wc + Wa − Wc Wa)Pc / [(Wc + Wa − Wc Wa)Pc + (1 − Pc)Wa]. (10)</p>
      <p>To obtain a single diagnostic probability Pr(c | e ; D), the diagnostic
probabilities for the two structures (see Equation 8) are weighted by the posterior
probabilities of the corresponding structures (see Equations 5 and 6) and summed
together:</p>
      <p>Pr(c | e ; D) = ∑i Pr(c | e ; D, Si) Pr(Si | D).</p>
      <p>This posterior probability thus takes into account both the uncertainty
concerning the parameters Pc, Wc, and Wa and the uncertainty related to the
causal structure.4</p>
      <p>2 In these two equations, the terms c and e in the exponent N(c, e), which may
also occur negated, denote contingencies in the data about the target cause and the
effect. For example, N(¬c, e) denotes the number of cases where the cause is missing
and the effect present.</p>
      <p>3 A weak empirical contingency between the Cause of interest and the Effect will
suggest that Pr(S0 | D) &gt; Pr(S1 | D). A strong contingency between Cause and Effect
will instead suggest that Pr(S0 | D) &lt; Pr(S1 | D).</p>
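The three steps above can be sketched numerically. The following is our own illustration (not the authors' implementation): it approximates the integrals over θ with a coarse grid, which is adequate because the Beta(1, 1) priors are uniform. The function name, grid size, and argument names are our assumptions.

```python
import numpy as np

def sim_diagnostic(N_ce, N_cne, N_nce, N_ncne, m=40):
    """Grid approximation of the SIM estimate Pr(c | e; D).

    N_ce, N_cne, N_nce, N_ncne are the contingency counts N(c, e),
    N(c, not-e), N(not-c, e), N(not-c, not-e)."""
    # Grid midpoints over (0, 1); with uniform Beta(1, 1) priors, plain
    # averages over the grid approximate the integrals in Equations 2-10.
    g = (np.arange(m) + 0.5) / m
    Pc, Wc, Wa = np.meshgrid(g, g, g, indexing="ij")

    # Likelihood under S1 (Equation 4, noisy-OR activation).
    L1 = (((1 - Pc) * (1 - Wa)) ** N_ncne
          * ((1 - Pc) * Wa) ** N_nce
          * (Pc * (1 - Wc) * (1 - Wa)) ** N_cne
          * (Pc * (Wc + Wa - Wc * Wa)) ** N_ce)

    # Likelihood under S0 (Equation 3, Wc fixed at 0): only Pc and Wa matter.
    pc, wa = np.meshgrid(g, g, indexing="ij")
    L0 = (((1 - pc) * (1 - wa)) ** N_ncne * ((1 - pc) * wa) ** N_nce
          * (pc * (1 - wa)) ** N_cne * (pc * wa) ** N_ce)

    # Marginal likelihoods Pr(D | Si) (Equation 7) and structure
    # posteriors (Equations 5 and 6) with Pr(S0) = Pr(S1) = 0.5.
    pD1, pD0 = L1.mean(), L0.mean()
    pS1 = pD1 / (pD1 + pD0)
    pS0 = 1.0 - pS1

    # Posterior-mean diagnostic probability per structure (Equations 8-10).
    d1 = ((Wc + Wa - Wc * Wa) * Pc
          / ((Wc + Wa - Wc * Wa) * Pc + (1 - Pc) * Wa))
    diag1 = (d1 * L1).sum() / L1.sum()
    diag0 = (pc * L0).sum() / L0.sum()  # under S0, Pr(c | e; theta) = Pc

    # Structure-averaged estimate Pr(c | e; D).
    return pS0 * diag0 + pS1 * diag1
```

Such a sketch can be used to compare data sets that share the same empirical Pr(c | e) but differ in Pr(e | c), which is exactly the manipulation used in the experiment below.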
      <p>
        In the process that leads to the estimation of the ultimate diagnostic
probability Pr(c | e ; D), the second step is crucial because it is the main source of the
predictions of the SIM (the influence on the diagnostic probability Pr(c | e ; D) of
the predictive probability Pr(e | c)). This step consists in determining the
probability of each of the structures given the data: Pr(Si | D), for i ∈ {0, 1}. Since
individuals have limited cognitive capacities [
        <xref ref-type="bibr" rid="ref11 ref9">9,11</xref>
        ] and do not have direct access
to the parameters of the structure (e.g., the causal power of the cause of interest
Wc is an unobservable parameter), they estimate Pr(Si | D) by examining the
contingencies between the Cause of interest and the Effect in the available data.
This examination will lead them to estimate the predictive probability Pr(e | c).
The calculation of this value functions as a kind of heuristic to approximate
Pr(Si | D) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>Thus, when Pr(e | c) is small, this suggests the absence of a link between
Cause and Effect. In this case, Pr(S0 | D) will be more salient than Pr(S1 | D).
Conversely, when Pr(e | c) is high, it will suggest the existence of a causal link,
and Pr(S1 | D) will be more salient than Pr(S0 | D). This mechanism constitutes
the main novelty of the SIM, and it predicts the influence of the predictive
probability on diagnostic judgments via the evaluation of the probability of the
causal structure.</p>
      <p>4 See [2, p. 299] for more formal details on these three calculation steps.</p>
    </sec>
    <sec id="sec-3">
      <title>Key Predictions of the Structure Induction Model</title>
      <p>The dependence of the diagnostic probability on the predictive probability is
expected to introduce a bias in the estimation of the former. Specifically, the
prediction is that when the predictive probability Pr(e | c) is low, individuals
will tend to underestimate the diagnostic probability Pr(c | e) to a much greater
extent than when Pr(e | c) is high.</p>
      <p>
        This phenomenon can be explained intuitively by considering the situation
in which an agent does not know which of the worlds S0 and S1 she inhabits.
If the empirical predictive probability Pr(e | c) is small, this suggests that there
may not be any link between Cause and Effect, in which case the agent will
tend to believe that she is in world S0. She will then be reluctant to attribute
diagnostic virtues to the Effect, which in turn will lead her to underestimate
the value of the diagnostic probability calculated from the data. On the other
hand, if Pr(e | c) is high, the agent will be inclined to believe that there is a causal
relationship between Cause and Effect, whence she is likely to conclude that she is
in world S1. In that case, the estimate of Pr(c | e) will more objectively reflect the
empirical diagnostic probability of the data. This assumption has been confirmed
empirically by Meder et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In this study, we want to reproduce and generalize
their important finding by testing the evolution of the bias as a function of the
strategy that individuals use to estimate the diagnostic probability.
      </p>
    </sec>
    <sec id="sec-3a">
      <title>Diagnostic Reasoning Strategies: Abduction versus Defeasible Deduction</title>
      <p>
        The SIM specifies a rational computation procedure which links diagnostic
judgments to two types of uncertainty (uncertainty about the parameters and
uncertainty concerning the causal structure). In the terminology of Marr [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ],
this procedure is at the computational level of the cognitive system. However,
Meder et al. also showed that the execution of the rational calculation will have
consequences at the algorithmic (i.e., psychological) level, in particular, for the
influence of the predictive probability on diagnostic judgments. At the
algorithmic level, however, the psychological mechanisms underlying the estimation of
the diagnostic probability itself have not been precisely described by these
authors. We aim to fill this lacuna by introducing the concept of estimation strategy
of the diagnostic probability.
      </p>
      <p>Bruner, Goodnow, and Austin [14, p. 54] defined the concept of strategy in
a very general way as a pattern of decisions in the acquisition, retention, and
use of information to achieve specific objectives. Siegler and Jenkins [15, p. 11]
clarified this idea further and defined strategies as sets of procedures or possible
methods put in place by individuals to accomplish a given cognitive task.</p>
      <p>
        Various results suggest that individuals can estimate the diagnostic
probability by following essentially different inferential strategies. In two experiments,
Stilgenbauer and Baratgin [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] showed that individuals preferred to follow an
abductive strategy to estimate Pr(c | e) when the causal rule “If cause then
P(effect)” was made sufficiently plausible (the operator P has the intended
meaning “there is a chance that”). In that kind of situation, participants’
diagnostic inferences followed a probabilistic Affirming the Consequent schema
(henceforth ACp). This pattern of inference is also often recognized as the basic
pattern of abduction [
        <xref ref-type="bibr" rid="ref17 ref18 ref19">17,18,19</xref>
        ]:
effect
If cause then P(effect)
P(cause)
      </p>
      <p>
        Stilgenbauer and Baratgin [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] further showed that individuals came to prefer
estimating the diagnostic probability via a defeasible deduction type of reasoning
when the plausibility of the rule “If cause then P(effect)” decreased. In that
kind of case, individuals tended to reason in accordance with defeasible Modus
Ponens (henceforth MPd; see [
        <xref ref-type="bibr" rid="ref20 ref21 ref22">20,21,22</xref>
        ]):
effect
If effect then P(cause)
P(cause)
      </p>
      <p>
        There is much evidence supporting the thought that ACp and MPd are the
two major strategies used in diagnostic reasoning. For example, Patel and Groen
[
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] showed that medical students naturally form rules of the form “If cause
then P(effect)” to evaluate the likelihood of a disease (Cause) from a set
of symptoms (Effect); in other words, their novice participants engaged
in abductive reasoning. By contrast, more experienced doctors were found to
typically construct “If effect then P(cause)” rules to arrive at a diagnosis.
Similarly, in the field of legal reasoning, Prakken and Renooij [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] showed that
it is formally possible to trace the causes in a legal case by following a purely
abductive strategy, on the one hand, or a strategy that uses defeasible deduction
schemes, on the other.
      </p>
    </sec>
    <sec id="sec-3b">
      <title>Experimental Study of Diagnostic Reasoning Strategies for Estimating Pr(c | e)</title>
    </sec>
    <sec id="sec-4">
      <title>Predictions</title>
      <p>
        The objective of this research is to test the predictions of the SIM, taking into
account the strategies of diagnostic reasoning followed by participants.5 As
previously explained, the SIM predicts an influence of the predictive probability
Pr(e | c) on the diagnostic probability Pr(c | e), a phenomenon that in practice
leads to an underestimation bias of the diagnostic probability. We assume that
the bias will be more pronounced for abductive estimates than for estimates
obtained via defeasible deduction, given that abduction requires constructing a rule
of the type “If cause then P(effect),” which is related to the predictive
probability Pr(e | c).6 In this type of situation, Pr(e | c) should become more salient
to participants, and since this value is connected to the underestimation bias
predicted by the SIM, we expect to see here the clearest evidence of interference
with the diagnostic estimate.
      </p>
      <p>
        5 This experiment is an extension of Chapter 6 of Jean-Louis Stilgenbauer’s doctoral
thesis [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>Participants</title>
      <p>There were 114 participants in this experiment. All were French native speakers,
studying at IPC Paris (Faculty of Philosophy and Psychology). Of these
participants, 27 were male (M = 20.3 years, sd = 1.5) and 87 female (M = 20.5 years,
sd = 2). We eliminated 17 participants from the protocol: 8 because they left the
experiment too early, and 9 because the latency of their responses was too high
(over 3 minutes).</p>
    </sec>
    <sec id="sec-6">
      <title>Materials</title>
      <p>To test the impact of predictive probability on diagnostic probability estimates,
we created four diagnostic reasoning situations, keeping empirical Pr(c | e)
constant at 0.75 while varying empirical Pr(e | c) across the situations; specifically,
Pr(e | c) ∈ {0.1, 0.3, 0.6, 0.9}. In each situation, participants were asked to
estimate Pr(c | e). All participants were exposed to the four diagnostic reasoning
contexts, which were presented in an order randomized per participant
(within-subjects factor). On the basis of the SIM, we predicted that participants would
tend to estimate Pr(c | e) below the empirical probability of 0.75, and that their
estimates would depend on the value of Pr(e | c).</p>
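To see how such stimuli can be constructed, here is a small sketch (a hypothetical helper of our own, not the authors' procedure) that derives the remaining cells of the 2 × 2 contingency table once a count N(c, e) and the two target conditional probabilities are fixed:

```python
def stimulus_counts(n_ce, pr_c_given_e, pr_e_given_c):
    """Derive contingency-table counts from target probabilities.

    n_ce is the chosen count N(c, e); the returned counts may be
    fractional unless the probabilities divide n_ce evenly."""
    # N(not-c, e): cause absent, effect present, fixed by Pr(c | e).
    n_nce = n_ce * (1 - pr_c_given_e) / pr_c_given_e
    # N(c, not-e): cause present, effect absent, fixed by Pr(e | c).
    n_cne = n_ce * (1 - pr_e_given_c) / pr_e_given_c
    return n_nce, n_cne
```

With n_ce = 9, Pr(c | e) = 0.75, and Pr(e | c) = 0.1, this yields N(¬c, e) = 3 and N(c, ¬e) = 81: the same empirical diagnostic probability can be paired with very different predictive probabilities.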
      <p>
        The reasoning situations were medical cases in which there were imaginary
viruses (Cause) that could infect people. An infected person might or might not
develop a characteristic symptom (Effect). Each situation was introduced using
a population-based stimulus that summarized a set of observations. The example
6 There is a wealth of evidence showing that people tend to interpret the
probability of an indicative conditional, Pr(If A, then C), as the conditional probability
Pr(C | A); see, e.g., [
        <xref ref-type="bibr" rid="ref26 ref27 ref28">26,27,28</xref>
        ]. Admittedly, the conditional we are considering is not
“If cause then effect,” whose probability will, for most people, equal Pr(e | c),
but rather “If cause then P(effect),” whose probability will, by the same token,
equal Pr(Pe | c), which is not necessarily equal to Pr(e | c). But, first, we are only
claiming that “If cause then P(effect)” makes Pr(e | c) salient, and that it can
do by making Pr(Pe | c) salient (given the similarity between the two expressions).
Second, although the two conditional probabilities are formally distinct, anyone who
has taught a course in modal logic knows that people have a tendency to collapse
iterated or even mixed modalities. As a result, many may fail to distinguish between
Pr(e | c) and Pr(Pe | c) in the first place [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ].
in Figure 2 shows the stimulus corresponding to the values Pr(e | c) = 0.1 and
Pr(c | e) = 0.75. The other three stimuli were defined by the following probability
pairs: Pr(e | c) = 0.3 and Pr(c | e) = 0.75; Pr(e | c) = 0.6 and Pr(c | e) = 0.75; and
Pr(e | c) = 0.9 and Pr(c | e) = 0.75.
      </p>
      <p>To test the impact of reasoning strategy (ACp/MPd) on estimates of Pr(c | e),
we created four non-overlapping groups of participants (between-subjects
factor). Each group was encouraged to estimate the diagnostic probability through
a particular type of reasoning. In addition, we created a situation of “free”
reasoning to determine the spontaneous and natural estimates of the participants.
Participants were randomly assigned to one of four groups:
1. In the first group, participants were encouraged to reason in accordance
with the abductive schema ACp. To this end, we introduced with the stimuli
the following conditional rule: “If cause then P(effect).” The Cause is
a fictitious virus and the Effect is its characteristic symptom. For example,
with the material shown in Figure 2, the following rule appeared at the top
of the picture: If a patient is infected with the Igorusphère, there
is a chance that the patient has nausea.
2. In the second group, the participants were encouraged to reason in
accordance with the defeasible Modus Ponens schema (MPd). Now, at the top of
the picture, there appeared the default rule: “If effect then P(cause).”
For the example of Figure 2, the specific instance was: If a patient has
nausea, there is a chance that the patient has been infected
with the Igorusphère.
3. In the third group, no rules were proposed, leaving participants completely
free to estimate Pr(c | e) in whichever way they preferred. For each stimulus,
an accompanying sentence merely emphasized the existence of an
association between Cause (disease) and Effect (the symptom). In this group, we
first introduced the disease and only then the symptom. With Figure 2
appeared the sentence: There is a chance that Igorusphère infection
is associated with nausea.
4. The fourth group was like the third—so this was also a free reasoning group—
except that the order in which disease and symptom were introduced was
reversed. For example, the sentence that now went with the situation
depicted in Figure 2 was: There is a chance that nausea is associated
with Igorusphère infection.</p>
    </sec>
    <sec id="sec-7">
      <title>Procedure</title>
      <p>The experiment was implemented on the SoSci Survey website (https://www.
soscisurvey.de/). Participants were recruited via a group email. Once connected
to the experiment, they were informed that they were supposed to answer all four
questions. It was emphasized that their answers should be spontaneous and
provided fairly quickly. Then, participants were asked to provide some demographic
information: gender, age, native language, type and level of the university course.
After this information had been recorded, the general experimental instructions
were presented. In each of the four situations, participants were asked to
estimate Pr(c | e), for the relevant c and e. More specifically, in each situation a new
patient with the characteristic symptom of the virus displayed in the stimulus
was presented to the participants. They were then asked to assess the chances
that this new patient had been infected by the virus. Participants were asked to
give their responses on a scale from 0 % to 100 %, which appeared beneath the
stimulus, with the cursor initially set at the 0 % end of the scale.</p>
    </sec>
    <sec id="sec-8">
      <title>Results</title>
      <p>The results are summarized in Figure 3. A repeated measures analysis of
variance (ANOVA) was carried out on the estimates of Pr(c | e) recorded in the
free reasoning groups 3 and 4. There was no effect of the order in which
virus and symptom were mentioned: F (1, 46) = 0.083, p &gt; .05.
Accordingly, we merged the data from groups 3 and 4 for the remainder of the analysis.
Impact of predictive probability. A repeated measures ANOVA revealed a
main effect of the within-subjects factor predictive probability. The estimates
of diagnostic probability Pr(c | e) were found to depend on the predictive
probability value Pr(e | c). The graph shows the Pr(c | e) estimates to increase (and
to approximate the empirical diagnostic probability Pr(c | e) = 0.75) as the
predictive probability values Pr(e | c) increase. The effect was highly significant:
F (3, 276) = 27.27, p &lt; .001.</p>
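For readers who want to check this kind of analysis, here is a minimal one-way repeated-measures F computation (a generic sketch of our own in NumPy, not the authors' analysis script; the full mixed between/within design reported here requires a fuller model):

```python
import numpy as np

def rm_anova_f(scores):
    """One-way repeated-measures ANOVA F statistic.

    scores: array of shape (n_subjects, k_conditions)."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Partition the total sum of squares into condition, subject,
    # and residual (error) components.
    ss_cond = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err
```

With four within-subject levels of Pr(e | c), the numerator degrees of freedom are k − 1 = 3, matching the F(3, 276) reported above.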
      <p>Impact of estimation strategy. The ANOVA also showed a main effect of
the between-subjects factor diagnostic probability estimation strategy. The
effect was significant: F (2, 92) = 3.44, p &lt; .05. Multiple comparisons with
Bonferroni correction showed that Pr(c | e) estimates under defeasible deduction
did not differ significantly from estimates made in the free reasoning condition
(p &gt; .05). However, Pr(c | e) estimates made under abduction differed
significantly from estimates made under defeasible deduction (p &lt; .01) as well as from
estimates made under free reasoning (p &lt; .05). The graph shows that, relative to
the empirical value Pr(c | e) = 0.75, the underestimation was stronger under the
abductive strategy than under defeasible deduction and
free reasoning.</p>
    </sec>
    <sec id="sec-9">
      <title>Interaction between predictive probability and estimation strategy</title>
      <p>The statistical analysis did not reveal any interaction between the predictive
probability Pr(e | c) and the diagnostic estimation strategy: F (6, 276) = 0.17,
p &gt; .05.</p>
    </sec>
    <sec id="sec-10">
      <title>Discussion</title>
      <p>Our results are clear evidence that participants systematically underestimated
Pr(c | e), and that this bias was strongly related to the predictive empirical
probability Pr(e | c). For small values of Pr(e | c), the underestimation of Pr(c | e) was
maximal, and it decreased as the predictive probability increased. This result
is important because it strongly supports the role of the causal-structure
probability computation Pr(Si | D) posited by Meder, Mayrhofer, and Waldmann’s
Structure Induction Model (SIM).</p>
      <p>Our data also show that the best estimates of Pr(c | e) were given under
defeasible Modus Ponens and under free reasoning (there was no significant
difference between these two conditions). This result suggests that defeasible
deduction is the natural inference mode for estimating the diagnostic probability. By
contrast, diagnostic estimates made through abduction deviated much more from
the empirical value Pr(c | e) = 0.75. This confirms our initial hypothesis: when
the salience of predictive probability Pr(e | c) is increased by the introduction of
a causal rule of the form “If cause then P(effect),” the underestimation bias
predicted by the SIM gets worse.</p>
      <p>
        Jointly, these results shed interesting new light on the reasoning process
underlying defeasible Modus Ponens in causal contexts. In our opinion, this
reasoning can be considered as an elementary form of inference to the best
explanation [
        <xref ref-type="bibr" rid="ref30 ref38">30,38</xref>
        ] because the major premise of the MPd schema (If effect
then P(cause)) is an explanation-evoking rule or evidential rule which,
according to Pearl [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ], suggests the activation and search for explanation. Coupled
with the operating principle of the SIM—in particular the computation step
of Pr(Si | D)—this inference tends to support the significant role played by
explanatory considerations in the process of determining the diagnostic estimate
[
        <xref ref-type="bibr" rid="ref33 ref34 ref35">33,34,35</xref>
        ].
      </p>
      <p>
        In this work, we only tested a minimal type of explanatory consideration,
one corresponding to the predictive probability Pr(e | c). In actuality, however,
people may well exploit more complex predictive probability-based measures,
such as Popper’s measure [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ] or Good’s [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ].7 Whether this is so, we leave as a
topic for future research.
7 According to Popper’s measure, the explanatory power of c in light of e (plus
background knowledge) is given by Pr(e | c) − Pr(e) / Pr(e | c) + Pr(e) . According to
Good, that power is given by ln Pr(e | c)/ Pr(e) . See [
        <xref ref-type="bibr" rid="ref33 ref38">33,38</xref>
        ] for discussion of these
and other probabilistic measures of explanatory power. See [
        <xref ref-type="bibr" rid="ref39 ref40">39,40</xref>
        ] for discussion of
human (heuristic and analytic) cognitive processes in causal reasoning.
      </p>
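      <p>
        For concreteness, the two explanatory-power measures mentioned in the
footnote can be sketched as follows (a minimal illustration; the function names
are ours):

```python
import math

def popper_power(p_e_given_c, p_e):
    """Popper's measure: (Pr(e|c) - Pr(e)) / (Pr(e|c) + Pr(e)), in [-1, 1]."""
    return (p_e_given_c - p_e) / (p_e_given_c + p_e)

def good_power(p_e_given_c, p_e):
    """Good's measure: ln(Pr(e|c) / Pr(e))."""
    return math.log(p_e_given_c / p_e)

# If the candidate cause raises the probability of the effect from 0.5 to 0.75,
# both measures come out positive:
print(popper_power(0.75, 0.5))  # → 0.2
print(good_power(0.75, 0.5))    # → ln(1.5) ≈ 0.405
```

Both measures are positive exactly when Pr(e | c) exceeds Pr(e), i.e. when the
candidate cause makes the effect more likely than it is unconditionally.
      </p>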
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Meder</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mayrhofer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Waldmann</surname>
            ,
            <given-names>M. R.:</given-names>
          </string-name>
          <article-title>A rational model of elemental diagnostic inference</article-title>
          . In N. A. Taatgen, H. van Rijn (Eds.),
          <source>Proceedings of the 31st Annual Conference of the Cognitive Science Society</source>
          (pp.
          <fpage>2176</fpage>
          -
          <lpage>2181</lpage>
          ). Cognitive Science Society, Austin, TX
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Meder</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mayrhofer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Waldmann</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          :
          <article-title>Structure induction in diagnostic causal reasoning</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>121</volume>
          (
          <issue>3</issue>
          ),
          <fpage>277</fpage>
          -
          <lpage>301</lpage>
          (
          <year>2014</year>
          ). doi.org/10.1037/a0035944
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Pearl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Probabilistic reasoning in intelligent systems: Networks of plausible inference</article-title>
          .
          <source>Kaufmann</source>
          , San Mateo: CA (
          <year>1988</year>
          ). doi.org/10.1016/
          <fpage>0004</fpage>
          -
          <lpage>3702</lpage>
          (
          <issue>91</issue>
          )
          <fpage>90084</fpage>
          -W
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Pearl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <source>Causality: Models, Reasoning, and Inference</source>
          . Cambridge University Press, New York (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Glymour</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>The Mind's Arrows: Bayes Nets and Graphical Causal Models in Psychology</article-title>
          . MIT Press, Cambridge MA (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Hagmayer</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Causal Bayes nets as psychological theories of causal reasoning: Evidence from psychological research</article-title>
          . Synthese,
          <volume>193</volume>
          (
          <issue>4</issue>
          ),
          <fpage>1107</fpage>
          -
          <lpage>1126</lpage>
          (
          <year>2016</year>
          ). doi.org/10.1007/s11229-015-0734-0
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Rottman</surname>
            ,
            <given-names>B. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hastie</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Reasoning about causal relationships: Inferences on causal networks</article-title>
          .
          <source>Psychological Bulletin</source>
          ,
          <volume>140</volume>
          ,
          <fpage>109</fpage>
          -
          <lpage>139</lpage>
          (
          <year>2014</year>
          ). doi.org/10.1037/a0031903
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8. Cheng, P. W.:
          <article-title>From covariation to causation: A causal power theory</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>104</volume>
          ,
          <fpage>367</fpage>
          -
          <lpage>405</lpage>
          (
          <year>1997</year>
          ). doi.org/10.1037/
          <fpage>0033</fpage>
          -
          <lpage>295X</lpage>
          .
          <year>104</year>
          .2.
          <fpage>367</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Simon</surname>
          </string-name>
          , H.:
          <article-title>Models of Man, Social and Rational: Mathematical Essays on Rational Human Behavior in a Social Setting</article-title>
          . Wiley, New York (
          <year>1957</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Glymour</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Learning, prediction and causal Bayes nets</article-title>
          .
          <source>Trends in Cognitive Sciences</source>
          ,
          <volume>7</volume>
          ,
          <fpage>43</fpage>
          -
          <lpage>48</lpage>
          (
          <year>2003</year>
          ). doi.org/10.1016/S1364-6613(02)00009-8
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Simon</surname>
          </string-name>
          , H.:
          <article-title>Bounded rationality and organizational learning</article-title>
          .
          <source>Organization Science</source>
          ,
          <volume>2</volume>
          ,
          <fpage>125</fpage>
          -
          <lpage>134</lpage>
          (
          <year>1991</year>
          ). doi.org/10.1287/orsc.2.1.125
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Meder</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gerstenberg</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hagmayer</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Waldmann</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          :
          <article-title>Observing and intervening: Rational and heuristic models of causal decision making</article-title>
          .
          <source>The Open Psychology Journal</source>
          ,
          <volume>3</volume>
          (
          <issue>2</issue>
          ),
          <fpage>119</fpage>
          -
          <lpage>135</lpage>
          (
          <year>2010</year>
          ). doi.org/10.2174/1874350101003020119
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Marr</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Vision: A Computational Investigation into the Human Representation and Processing of Visual Information</article-title>
          . San Francisco, Freeman (
          <year>1982</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Bruner</surname>
            ,
            <given-names>J. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goodnow</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Austin</surname>
            ,
            <given-names>G. A.</given-names>
          </string-name>
          :
          <article-title>A Study of Thinking</article-title>
          . Wiley, New York (
          <year>1956</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Siegler</surname>
            ,
            <given-names>R. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jenkins</surname>
            ,
            <given-names>E. A.</given-names>
          </string-name>
          :
          <article-title>How Children Discover New Strategies</article-title>
          . Lawrence Erlbaum Associates, Hillsdale (
          <year>1989</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Stilgenbauer</surname>
            ,
            <given-names>J.-L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baratgin</surname>
          </string-name>
          , J.:
          <article-title>Étude des stratégies de raisonnement causal dans l'estimation de la probabilité diagnostique à travers un paradigme expérimental de production de règle [Study of causal reasoning strategies in diagnostic probability estimation through an experimental rule production paradigm]</article-title>
          .
          <source>Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale</source>
          (
          <year>2017</year>
          ). doi.org/10.1037/cep0000108
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Magnani</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <source>Abduction, Reason, and Science: Processes of Discovery and Explanation</source>
          . Kluwer, New York (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Aliseda</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Abductive Reasoning: Logical Investigations into Discovery and Explanation</article-title>
          . Springer, Berlin (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Schurz</surname>
          </string-name>
          , G.:
          <article-title>Patterns of abduction</article-title>
          .
          <source>Synthese</source>
          ,
          <volume>164</volume>
          (
          <issue>2</issue>
          ),
          <fpage>201</fpage>
          -
          <lpage>234</lpage>
          (
          <year>2007</year>
          ). doi.org/10.1007/s11229-007-9223-4
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Toulmin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <source>The Uses of Argument</source>
          . Cambridge University Press, Cambridge (
          <year>1958</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Reiter</surname>
            ,
            <given-names>R.:</given-names>
          </string-name>
          <article-title>A logic for default reasoning</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>13</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>81</fpage>
          -
          <lpage>132</lpage>
          (
          <year>1980</year>
          ). doi.org/10.1016/0004-3702(80)90014-4
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Pollock</surname>
            ,
            <given-names>J. L.</given-names>
          </string-name>
          :
          <article-title>Defeasible reasoning</article-title>
          .
          <source>Cognitive Science</source>
          ,
          <volume>11</volume>
          (
          <issue>4</issue>
          ),
          <fpage>481</fpage>
          -
          <lpage>518</lpage>
          (
          <year>1987</year>
          ). doi.org/10.1207/s15516709cog1104_4
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Patel</surname>
            ,
            <given-names>V. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Groen</surname>
            ,
            <given-names>G. J.:</given-names>
          </string-name>
          <article-title>Developmental accounts of the transition from medical students to doctor: Some problems and suggestions</article-title>
          .
          <source>Medical Education</source>
          ,
          <volume>25</volume>
          ,
          <fpage>527</fpage>
          -
          <lpage>535</lpage>
          (
          <year>1991</year>
          ). doi.org/10.1111/j.1365-2923.1991.tb00106.x
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Prakken</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Renooij</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Reconstructing causal reasoning about evidence: a case study</article-title>
          .
          <source>In Legal Knowledge and Information Systems. JURIX: The Fourteenth Annual Conference</source>
          (pp.
          <fpage>131</fpage>
          -
          <lpage>142</lpage>
          ). IOS Press, Amsterdam (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Stilgenbauer</surname>
          </string-name>
          , J.-L.:
          <article-title>Étude expérimentale des stratégies de raisonnement causal dans l'estimation de la probabilité diagnostique : Stratégie abductive versus stratégie par déduction rétractable [Experimental Study of Causal Reasoning Strategies in the Estimate of Diagnostic Probability: Abductive Strategy versus Defeasible Deduction Strategy]</article-title>
          . Thèse de doctorat,
          <source>École Pratique des Hautes Études (EPHE)</source>
          , Paris - France (
          <year>2016</year>
          ). doi.org/10.13140/RG.2.2.28698.03523
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Baratgin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Politzer</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Logic, probability and inference: A methodology for a new paradigm</article-title>
          . In L. Macchi,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bagassi</surname>
          </string-name>
          , R. Viale (Eds.),
          <source>Cognitive Unconscious and Human Rationality</source>
          (pp.
          <fpage>119</fpage>
          -
          <lpage>142</lpage>
          ). MIT Press, Cambridge MA (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Baratgin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ocak</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bessaa</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stilgenbauer</surname>
          </string-name>
          , J.-L.:
          <article-title>Updating context in the Equation: An experimental argument with eye tracking</article-title>
          . In M. B. Ferraro, P. Giordani, B. Vantaggi, M. Gagolewski, M. Ángeles Gil, P. Grzegorzewski, O. Hryniewicz (Eds.),
          <source>Soft Methods for Data Science, Advances in Intelligent Systems and Computing</source>
          . Vol.
          <volume>456</volume>
          (pp.
          <fpage>25</fpage>
          -
          <lpage>33</lpage>
          ). Springer, Warsaw (
          <year>2017</year>
          ). doi.org/10.1007/978-3-319-42972-4_4
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Douven</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>The Epistemology of Indicative Conditionals</article-title>
          . Cambridge University Press, Cambridge (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Over</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Douven</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Verbrugge</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Scope ambiguities and conditionals</article-title>
          .
          <source>Thinking &amp; Reasoning</source>
          ,
          <volume>19</volume>
          (
          <issue>3</issue>
          ),
          <fpage>284</fpage>
          -
          <lpage>307</lpage>
          (
          <year>2013</year>
          ). doi.org/10.1080/13546783.2013.810172
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Lipton</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Inference to the Best Explanation</article-title>
          . Routledge, London and New York (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Douven</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          : Abduction. In E. N.
          <string-name>
            <surname>Zalta</surname>
          </string-name>
          (Ed.),
          <source>Stanford Encyclopedia of Philosophy</source>
          (
          <year>2011</year>
          ). https://plato.stanford.edu/entries/abduction/
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Pearl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Embracing causality in default reasoning</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>35</volume>
          (
          <issue>2</issue>
          ),
          <fpage>259</fpage>
          -
          <lpage>271</lpage>
          (
          <year>1988</year>
          ). doi.org/10.1016/0004-3702(88)90015-X
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Douven</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schupbach</surname>
            ,
            <given-names>J. N.</given-names>
          </string-name>
          :
          <article-title>The role of explanatory considerations in updating</article-title>
          .
          <source>Cognition</source>
          ,
          <volume>142</volume>
          ,
          <fpage>299</fpage>
          -
          <lpage>311</lpage>
          (
          <year>2015</year>
          ). doi.org/10.1016/j.cognition.2015.04.017
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Douven</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schupbach</surname>
            ,
            <given-names>J. N.</given-names>
          </string-name>
          :
          <article-title>Probabilistic alternatives to Bayesianism: The case of explanationism</article-title>
          . Frontiers in Psychology,
          <volume>6</volume>
          (
          <issue>459</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          (
          <year>2015</year>
          ). doi.org/10.3389/fpsyg.2015.00459
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Douven</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wenmackers</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Inference to the Best Explanation versus Bayes's Rule in a Social Setting</article-title>
          .
          <source>The British Journal for the Philosophy of Science</source>
          , in press. doi.org/10.1093/bjps/axv025
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Popper</surname>
            ,
            <given-names>K. R.</given-names>
          </string-name>
          :
          <source>The Logic of Scientific Discovery</source>
          . Hutchinson, London (
          <year>1959</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Good</surname>
            ,
            <given-names>I. J.:</given-names>
          </string-name>
          <article-title>Weight of evidence, corroboration, explanatory power, information and the utility of experiment</article-title>
          .
          <source>Journal of the Royal Statistical Society. Series B (Methodological)</source>
          ,
          <volume>22</volume>
          (
          <issue>2</issue>
          ),
          <fpage>319</fpage>
          -
          <lpage>331</lpage>
          (
          <year>1960</year>
          ). http://www.jstor.org/stable/2984102
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>Douven</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Inference to the Best Explanation: What is it? And why should we care</article-title>
          ? In T. Poston, K. McCain (Eds.),
          <source>Best Explanations: New Essays on Inference to the Best Explanation</source>
          . Oxford University Press, Oxford, in press.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Hattori</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Over</surname>
            ,
            <given-names>D. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hattori</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Takahashi</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baratgin</surname>
          </string-name>
          , J.:
          <article-title>Dual frames in causal reasoning and other types of thinking</article-title>
          . In N. Galbraith,
          <string-name>
            <given-names>E.</given-names>
            <surname>Lucas</surname>
          </string-name>
          , D. E. Over (Eds.),
          <article-title>The Thinking Mind: A Festschrift for Ken Manktelow</article-title>
          (pp.
          <fpage>15</fpage>
          -
          <lpage>28</lpage>
          ). Routledge, London (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <surname>Hattori</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hattori</surname>
            ,
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Over</surname>
            ,
            <given-names>D. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Takahashi</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baratgin</surname>
          </string-name>
          , J.:
          <article-title>Dual frames for causal induction: The normative and the heuristic</article-title>
          .
          <source>Thinking &amp; Reasoning</source>
          , in press. doi.org/10.1080/13546783.2017.1316314
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>