<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Bayesian Justification for the Scenario Approach to Legal Proof</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mario Günther</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Conrad Friedrich</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Carnegie Mellon University, Department of Philosophy</institution>
          ,
          <addr-line>Baker Hall 161, 5000 Forbes Avenue Pittsburgh, PA 15213</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ludwig-Maximilians-Universität München, Munich Center for Mathematical Philosophy</institution>
          ,
          <addr-line>Geschwister-Scholl-Platz 1, 80539 München</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>We probabilify the scenario approach to legal proof. The scenario approach searches for the scenario that strikes the best balance in explaining the available evidence, in fitting to general background beliefs, and in its degree of internal coherence. Our account provides a unified measure of the three dimensions in terms of probabilities, and so shows that the scenario approach can be probabilified. Indeed, our account can be summarized by a version of Bayes Theorem: the most likely scenario in light of the evidence is the best. We thereby provide a Bayesian justification for the scenario approach.</p>
      </abstract>
      <kwd-group>
        <kwd>Legal Epistemology</kwd>
        <kwd>Legal Proof</kwd>
        <kwd>Scenario Approach</kwd>
        <kwd>Bayesianism</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The scenario approach is a promising normative account of legal proof [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. It is commonly presented
as an incompatible alternative to Bayesian accounts of legal proof [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The scenario approach says that
you should find a defendant liable if and only if the best scenario implies that the defendant is liable.
A scenario is best among the available scenarios if it strikes the best balance between
(i) explaining the available evidence,
(ii) fitting to the general background beliefs, and
(iii) exhibiting internal coherence.
      </p>
      <p>Here we probabilify the scenario approach. The upshot is that the most probable scenario is the best.
Unlike the original scenario approach, our probabilistic account can explain what scenario strikes the
best balance. Our account shows that the normative scenario approach is compatible with Bayesian
accounts of legal proof.</p>
      <p>
        Our goal is to provide a normative foundation for the scenario approach. Our project is not to describe
legal practice, as for example Pardo and Allen [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and Allen and Pardo [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] do in an informative way. In
this vein, Cheng [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] aims to probabilify the merely descriptive story model of juror decision making
offered by Pennington and Hastie [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In contrast, van Koppen and Mackor [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] build on the story model
to develop the scenario approach as a normative account of legal proof. Unlike Cheng, we aim to
probabilify the normative scenario approach.
      </p>
      <p>
        As a consequence of our goal, we focus on the context of justification rather than the context of
discovery throughout. The scenario approach has proven its worth for constructing scenarios to be
compared in a court of law. It has been applied to criminal cases in the Netherlands [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, we
do not think that the mere generation of scenarios is normatively relevant. Hence, we abstract away
from the context of discovery.
      </p>
      <p>We will present the scenario approach for legal proof in Section 2 and our probabilistic version
thereof in what follows. Section 3 outlines our notion of best explanation and Section 4 our notion of
fit with the general background beliefs. Section 5 shows that our approach turns out to be equivalent
to Bayes Theorem. Section 6 explains our notion of internal coherence and Section 7 how internal
coherence fits into our probabilistic account of legal proof. Section 8 draws normative implications of
our account in comparison to coherence accounts and descriptive accounts of legal proof.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The Scenario Approach</title>
      <p>On the scenario approach, fact-finders should compare and assess scenarios to find out which one is
best in the legal case at hand. A scenario is a story of what might have happened. Each story either
implies that the defendant is liable, or else implies that the defendant is not liable. The scenarios are
assessed on three dimensions.</p>
      <p>(i) How well does a scenario explain the available evidence?
(ii) How well does a scenario fit to the general background beliefs?
(iii) How well does a scenario internally cohere?
A scenario is, other things being equal, better than another if it explains the available evidence better, if
it fits better to the general background beliefs, and if it internally coheres better. A scenario is best if it
strikes the best balance on the three dimensions. In this sense, the scenario approach is holistic.</p>
      <p>Take, for example, a simplified version of the Simonshaven case, a Dutch criminal case provided by
Mackor (2021). The defendant Ed and his wife Jenny arrived at the Simonshaven forest by car and went
for a walk, as eyewitnesses testified. There is evidence that Jenny was hit by a blunt object and died.
There are two scenarios. According to the prosecution scenario, Ed killed his wife. On the defense
scenario, a madman jumped out of the bushes and beat up both Ed and Jenny. As a result, Ed lost
consciousness for some time and Jenny died.</p>
      <p>Both the prosecution and the defense scenarios can explain the evidence about Jenny and Ed’s
whereabouts and Jenny’s injuries leading to her death. Both scenarios are internally coherent at least
insofar as their elements are logically consistent with one another. However, the madman scenario fits less
well with our background beliefs—"our general assumptions about the world"—than a scenario where a
husband kills his wife (Mackor, 2021, p. 2414). It is much more likely that women are killed by their
(ex)partners than by madmen jumping out of bushes. Hence, the prosecution scenario is better than the
madman scenario. Or so goes Mackor’s plausible assessment.</p>
      <p>In criminal cases, the standard of proof is beyond a reasonable doubt—a more demanding standard
than in civil cases. The scenario approach specifies the criminal standard of proof as follows: a
fact-finder should find a defendant guilty if the best scenario implies the defendant’s guilt and is much
better than any scenario which implies his innocence.</p>
      <p>Is the prosecution scenario the best and much better than the madman scenario? The scenario
approach has difficulties coming to a clear verdict because it does not say what it means that a scenario
is ‘much better’ than another. Furthermore, the scenario approach does not explain what it means that a
scenario strikes the best balance on the three dimensions. For example, suppose we have two scenarios
that explain the evidence equally well. Scenario 1 fits better with the general background beliefs,
but scenario 2 is more coherent. Which is “the best scenario” in this case? In what follows, we provide
the resources to measure how good a scenario is as compared to others.</p>
      <p>In particular, we make precise both what it is for a scenario to be the best scenario and also what it is
for that scenario to be much better than other scenarios. It will turn out that a scenario is best if it is
more likely than any other scenario after conditionalizing on all available evidence. We discuss how
this coincides with the central dimensions of the scenario approach in the following Sections 3 through
7. The threshold for being ‘much better’ is given by a standard decision-theoretic argument, presented in Section 7.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Degree of Explanation</title>
      <p>
        How can we measure the degree to which a scenario explains the available evidence? A standard
explication of explanation is probability raising [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Applied to the scenario approach, a scenario S
explains the available evidence E if
      </p>
      <p>P(E | S) &gt; P(E),
where P(E | S) is the conditional probability of E given S and P(E) is the probability of E. We
define the degree to which a scenario S explains evidence E as follows:
d(S, E) := P(E | S) / P(E), if P(E) &gt; 0.</p>
      <p>If the scenario explains the evidence, d(S, E) will be greater than 1. If the scenario does not
explain the evidence, d(S, E) will be less than or equal to 1. Furthermore, a scenario S′ explains the
available evidence E better than a scenario S if</p>
      <p>d(S′, E) &gt; d(S, E).</p>
      <p>We can infer a best-explaining scenario S* by searching for a scenario S that maximizes d(S, E)—in
symbols, S* = arg max_S d(S, E) over all relevant scenarios S.</p>
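      <p>As a minimal sketch, the degree of explanation can be computed directly from the two probabilities P(E | S) and P(E); the numbers below are hypothetical, not taken from any case:</p>
      <preformat>
```python
def degree_of_explanation(p_e_given_s, p_e):
    """d(S, E) = P(E | S) / P(E); defined only if P(E) is positive."""
    if p_e == 0:
        raise ValueError("P(E) must be positive")
    return p_e_given_s / p_e

# The scenario raises the probability of the evidence, so d(S, E) exceeds 1
# and the scenario counts as explaining the evidence.
d = degree_of_explanation(0.9, 0.3)
assert d > 1
```
      </preformat>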
      <p>Let E be the available evidence in the Simonshaven case—the eyewitness testimony and the autopsy
of Jenny’s injuries. Furthermore, let S_Ed be the scenario where Ed kills Jenny, and S_M the scenario
where a madman kills Jenny. The evidence is roughly as likely given the Ed scenario as it is given the
madman scenario: P(E | S_Ed) ≈ P(E | S_M). Hence, both S_Ed and S_M explain the available evidence
E roughly to the same degree on our probabilistic account. This result is in line with Mackor (2021,
p. 2415): "the defense scenario can also explain the evidence and it can do so roughly equally well as the
prosecution scenario."</p>
      <p>
        We rank scenarios by their degree of explanation. Any top-ranked scenario explains the evidence
best. Inferring top-ranked scenarios comes apart from notions of inference to the best explanation that
are taken to be truth-conducive [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9, 10, 11, 12</xref>
        ]. On these notions, that a hypothesis best explains the evidence is a reason for
the hypothesis to be true. The degree to which a hypothesis explains some evidence is, however,
logically independent from its truth. A scenario in which aliens come from outer space, kill Jenny, and
prepare all the evidence just as we found it explains the evidence well. But it does not follow from
its high explanatory degree that the scenario is true. Indeed, we are not warranted to infer that the
scenario is likely true, because of its low initial probability of truth.1
      </p>
      <p>Our account developed below chooses a best scenario that strikes the best balance between its degree
of explanation and its initial probability. The best scenario is the scenario most likely to be true, given
the evidence. Our truth-conducive notion of inference to the best explanation will simply turn out to be
Bayesian inference: choose the scenario with the highest posterior probability.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Fit with General Background Beliefs</title>
      <p>
        The second dimension on which a scenario can be better than others is its fit with a fact-finder’s general
background beliefs. The fit of a scenario S with the general background beliefs can be modelled by the
probability P_B(S) = P(S | B) of S given the background beliefs B. We may say that a scenario fits
the background beliefs better than another if it is more likely in light of the background beliefs than
the other is. To be precise, a scenario S′ fits the background beliefs B better than a scenario S if
P(S′ | B) &gt; P(S | B).
      </p>
      <p>
        1 Cheng [6, pp. 1268 &amp; 1277] elevates the comparative degree of explanation to a descriptive account of legal proof. He says a
defendant is to be found guilty in the adversarial legal system of the US if the degree to which the prosecution’s story explains
the available evidence is (much) higher than the degree to which the defense’s story does: d(S_pros, E) &gt; c · d(S_def, E), that is,
P(E | S_pros) / P(E | S_def) &gt; c, for some constant c &gt; 1. As a consequence of Cheng’s burden of proof, Ed is not to be found guilty if the prosecution
presents either the madman or the aliens scenario. If this is indeed how the US legal system operates, we think it should be
revised by taking the initial probability of the stories into account. Cheng [6, pp. 1272-5] is aware that his account effectively
disregards the initial probabilities and so falls prey to the fallacious base rate neglect [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ]. His account should not be
mistaken for a normative foundation of the law.
      </p>
      <p>The initial degrees of belief or credences of our rational fact-finder are represented by a probability
distribution P. Her background beliefs B can be encoded by conditionalizing P on B. The resulting
probability distribution P_B measures the likelihood of propositions in light of the background beliefs
B before we receive evidence for the case at hand. Like all probabilistic accounts we are aware of, we
need to assume that our rational fact-finder has a distribution P_B reflecting "reasonable" background
beliefs which comply with our intuitions—just like the scenario approach needs to assume "reasonable"
background beliefs without any further specification. Reasonable background beliefs exclude, for
example, "prejudices for or against the defendant" [3, p. 456].</p>
      <p>As already mentioned, the belief that Jenny was killed by her (ex)partner is more plausible in light
of our reasonable background beliefs than the belief that Jenny was killed by a madman jumping out of
the bushes (cf. Mackor 2021, p. 2416). On our model, this comparative plausibility assessment translates
into a comparative assessment of probabilities:</p>
      <p>P(S_Ed | B) &gt; P(S_M | B).</p>
      <p>The Ed scenario fits better to the background beliefs than the madman scenario.</p>
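      <p>The fit P_B(S) = P(S | B) can be sketched by conditionalizing a toy credence function on the background beliefs B; the worlds and weights below are made up purely for illustration:</p>
      <preformat>
```python
# Toy worlds tagged with the scenario they verify and whether they are
# consistent with the reasonable background beliefs B (weights hypothetical).
P = {
    ("ed", True): 0.018,
    ("madman", True): 0.001,
    ("other", True): 0.881,
    ("prejudiced", False): 0.100,  # excluded by reasonable background beliefs
}

def fit(p, scenario):
    """P(S | B): renormalize the weights over the worlds consistent with B."""
    p_b = sum(pr for (s, in_b), pr in p.items() if in_b)
    p_s_and_b = sum(pr for (s, in_b), pr in p.items() if in_b and s == scenario)
    return p_s_and_b / p_b

# The Ed scenario fits the background beliefs better than the madman scenario.
assert fit(P, "ed") > fit(P, "madman")
```
      </preformat>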
      <p>Next, we explain how our rational fact-finder changes her credences upon receiving new evidence.
We will, moreover, explain how the degree of explanation and fit with general background beliefs can
be combined in an overall assessment of scenarios.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Bayes Theorem and Learning New Evidence</title>
      <p>The best scenario strikes the best balance between explaining the available evidence and its fit with
the background beliefs. We can think of the best scenario as the one which best explains the available
evidence weighted by its fit with the background knowledge. So the goodness of a scenario  can be
measured by the product of the degree to which it explains the available evidence E and the scenario’s
initial probability:
d(S, E) · P(S), where d(S, E) = P(E | S) / P(E).</p>
      <p>The formula measures how well a scenario explains the available evidence weighted by how likely
the scenario is in the first place. It ranks the different scenarios according to their overall goodness.
The formula is the right-hand side of Bayes Theorem—a theorem often used to calculate the probability
of a proposition after learning a new piece of evidence:</p>
      <p>P(S | E) = (P(E | S) / P(E)) · P(S).</p>
      <p>In general, a Bayesian agent learns new evidence by conditionalization. The probability distribution
P′ after learning a piece of evidence E is determined as follows:</p>
      <p>P′(·) = P_B(· | E) = P(· | E ∩ B).</p>
      <p>Bayes Theorem can be used to calculate the probability P_B(S | E) of each scenario S after learning
the available evidence E. A Bayesian agent can thereby compare the probability P_B(S_Ed | E) of the Ed
scenario with the probability P_B(S_M | E) of the madman scenario. In what follows, we show that the
most likely scenario is the best scenario according to our probabilistic account.</p>
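      <p>Since P(E) cancels when comparing two scenarios, the comparison can be sketched via the odds form of Bayes Theorem; the priors and likelihoods below are hypothetical:</p>
      <preformat>
```python
def posterior_odds(prior_1, likelihood_1, prior_2, likelihood_2):
    """P(S1 | E) / P(S2 | E) = (P(E | S1) / P(E | S2)) * (P(S1) / P(S2)).
    The normalizing factor P(E) cancels in the ratio."""
    return (likelihood_1 / likelihood_2) * (prior_1 / prior_2)

# Both scenarios explain E roughly equally well (equal likelihoods), but the
# Ed scenario has the higher prior, so it is more likely after learning E.
odds = posterior_odds(0.020, 0.9, 0.001, 0.9)
assert odds > 1
```
      </preformat>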
    </sec>
    <sec id="sec-6">
      <title>6. Internal Coherence</title>
      <p>So far, we have understood any scenario S as a proposition—a set of possible worlds at which the
scenario S is true. However, scenarios have an internal structure. They consist of chronologically
ordered and causally connected elements. Hence, scenarios can be modeled as sets of propositions in
time, some of which are causally related. Evaluating the internal coherence of a scenario is to evaluate
how well the scenario’s elements cohere. If they cohere well, they fit well to each other.</p>
      <p>How can we measure the internal coherence of scenarios? Our basic idea is this: if the elements
of a scenario fit well to one another, they are, taken together as a whole, more likely than they would
be taken as individuals. Let’s denote the elements of a scenario S by S1, S2, . . . , Sn. The
probability of two elements S1 and S2 of a scenario taken as individuals is simply the product of their
individual probabilities: P(S1) · P(S2). Taking elements of a scenario as individuals means treating
them as if they were probabilistically independent. S1 is independent of S2 relative to P just in case
P(S1 ∩ S2) = P(S1) · P(S2). If S1 and S2 are independent,</p>
      <p>P(S1 ∩ S2) / (P(S1) · P(S2)) = 1,
provided P(S1) and P(S2) are not zero.2 We can use the ratio as a measure of pairwise coherence.
Two propositions are coherent, that is, the ratio is strictly greater than 1, just in case they are positively relevant to each
other: P(S1 ∩ S2) &gt; P(S1) · P(S2). The ratio measures the degree by which the two propositions
as a whole are more likely than they would be if they were independent.</p>
      <p>Take, for example, two elements of the Ed scenario that cohere well: Jenny was attacked and died in
the forest (Ed2), and Ed killed Jenny in the forest (Ed3). The two propositions are positively relevant to
each other—and to a high degree. The conditional probability that Jenny was attacked and died in the
forest given that Ed killed Jenny in the forest is very high if not maximal. The unconditional probability
that Jenny was attacked and died in the forest is much lower (based only on our background beliefs B).
Hence,</p>
      <p>P(Ed2 | Ed3) = P(Ed2 ∩ Ed3) / P(Ed3) ≫ P(Ed2).</p>
      <p>Scenarios may have more than two elements. We generalize our measure of pairwise coherence to
Shogenji’s (1999) coherence measure:</p>
      <p>coh(S1, S2, . . . , Sn) := P(S1 ∩ S2 ∩ . . . ∩ Sn) / (P(S1) · P(S2) · . . . · P(Sn)).</p>
      <p>A scenario is internally coherent to the degree that its probability as a whole is larger than the product
of the probabilities of its individual elements. The product of the individual probabilities figures as a
neutral base-line: it measures how likely the scenario is upon the hypothetical assumption that the
elements are independent of each other. If all of the elements are indeed independent, the probability of
their conjunction—formalized by set intersection—equals the product of the individual probabilities;
and so the coherence measure assigns the scenario the neutral value 1. If the elements of the scenario fit
well together, their conjunction is more likely than the base-line; and so the coherence of the scenario
is greater than 1. Finally, if the elements do not fit well together, their conjunction is less likely than the
base-line; and so the coherence of the scenario is less than 1.</p>
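      <p>Shogenji’s measure can be sketched in a few lines; the probabilities below are hypothetical and only illustrate the three cases (coherent, independent, incoherent):</p>
      <preformat>
```python
from math import prod

def shogenji_coherence(p_joint, p_individuals):
    """coh(S1, ..., Sn) = P(S1 ∩ ... ∩ Sn) / (P(S1) * ... * P(Sn))."""
    return p_joint / prod(p_individuals)

# Elements that fit together: the joint probability exceeds the base-line.
assert shogenji_coherence(0.04, [0.1, 0.2]) > 1
# Independent elements: the measure takes the neutral value 1.
assert 1e-9 > abs(shogenji_coherence(0.02, [0.1, 0.2]) - 1)
# Elements in tension: the measure falls below 1.
assert 1 > shogenji_coherence(0.001, [0.1, 0.2])
```
      </preformat>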
      <p>For illustration, consider the following propositions adapted from Shogenji [15, p. 341]:
D1: Dunnit killed the victim at time t in city A.
D2: Dunnit was reportedly seen at time t in city B at the other end of the country.
D3: Dunnit has an identical twin sibling living in B.</p>
      <p>A scenario consisting of just D1 and D2 is not coherent: how could Dunnit have been in city A and
been sighted in city B at the same time? The probability of their conjunction is low, and in particular
lower than the product of the individual probabilities. The coherence measure is then less than 1.</p>
      <sec id="sec-6-1">
        <p>2 We will not mention the non-zero proviso in what follows.</p>
        <p>What does the coherence measure say about the scenario consisting of D1, D2, and D3? Adding
D3 makes the scenario more coherent because D3 explains why Dunnit was reportedly seen and yet
was in a different city: the witness mistook Dunnit for their sibling. The explanation is reflected in the
fact that the conditional probability P(D1 ∩ D2 | D3) is much higher than P(D1 ∩ D2). Hence, the
coherence of the D1 ∩ D2 scenario is much lower than the coherence of the D1 ∩ D2 ∩ D3 scenario:</p>
        <p>P(D1 ∩ D2) / (P(D1) · P(D2)) ≪ P(D1 ∩ D2 | D3) · P(D3) / (P(D1) · P(D2) · P(D3)).</p>
        <p>The twin scenario is internally more coherent.</p>
        <p>Let’s apply the coherence measure to the Simonshaven case. Both the Ed and the madman scenario
seem to internally cohere quite well. There is no internal tension between their respective elements.
One element of the Ed scenario S_Ed is that Ed and Jenny go for a walk in the forest (Ed1), another that
Jenny was attacked and died (Ed2), and a third that Ed killed Jenny in the forest (Ed3). The coherence
of the Ed scenario is measured by</p>
        <p>P(Ed1 ∩ Ed2 ∩ Ed3) / (P(Ed1) · P(Ed2) · P(Ed3)).</p>
        <p>The scenario coheres because the elements of the scenario are more likely to be true together than
individually.</p>
        <p>The madman scenario S_M includes that Ed and Jenny go for a walk in the forest (M1 = Ed1) and
that Jenny was attacked and died (M2 = Ed2). The difference to the Ed scenario is that a madman
jumped out of the bushes and attacked both Ed and Jenny so that Ed lost consciousness and Jenny
died (M3). The coherence of the madman scenario is measured by</p>
        <p>P(M1 ∩ M2 ∩ M3) / (P(M1) · P(M2) · P(M3)).</p>
      </sec>
      <sec id="sec-6-2">
        <p>The scenario is again coherent.</p>
        <p>Both scenarios are internally coherent to a similar degree (relative to B). To see this, observe that</p>
        <p>P(Ed1 | Ed3) ≈ P(M1 | M3).</p>
        <p>Ed killing Jenny in the forest requires them to be in the forest. Going for a walk in the forest is a good
reason for being in the forest. Hence, the probability that Ed and Jenny go for a walk in the forest given
that Ed kills Jenny there is very high. Similarly, a madman killing Jenny and knocking Ed unconscious
in the forest requires both of them to be there. A walk in the forest is as good a reason for being there
as before. Hence, the probability that they are in the forest given that a madman killed Jenny in the
forest and knocked Ed unconscious is very high.</p>
        <p>The proposition that Jenny was attacked and died in the forest is entailed by the proposition that Ed
killed Jenny there. It is also entailed by the proposition that the madman killed her there. Hence, we
obtain</p>
        <p>P(Ed1 ∩ Ed2 | Ed3) ≈ P(M1 ∩ M2 | M3).</p>
      </sec>
      <sec id="sec-6-3">
        <p>And further,</p>
        <p>P(Ed1 ∩ Ed2 | Ed3) · P(Ed3) / (P(Ed1) · P(Ed2) · P(Ed3)) ≈ P(M1 ∩ M2 | M3) · P(M3) / (P(M1) · P(M2) · P(M3)).</p>
        <p>As M1 and M2 are identical to Ed1 and Ed2, respectively, we obtain that both scenarios are internally
coherent to a similar degree.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Bayes Theorem, Internal Coherence, and Legal Proof</title>
      <p>How does Shogenji’s measure of internal coherence fit into our account of legal proof summarized by
Bayes Theorem? The probability of the conjunction of a scenario’s elements equals the product of their individual
probabilities weighted by their coherence:</p>
      <p>P(S1 ∩ . . . ∩ Sn) = coh(S1, . . . , Sn) · P(S1) · . . . · P(Sn).</p>
      <p>The degree P_B(S) = P_B(S1 ∩ . . . ∩ Sn) to which the whole scenario fits to the background beliefs can
be broken down into the product of its elements’ individual probabilities and how well they cohere.
Indeed, the probability P_B(S) of a scenario is higher the more likely its elements are individually
and the better they cohere.</p>
      <p>We can make the elements of a scenario explicit by rewriting Bayes Theorem from above:</p>
      <p>P(S1 ∩ . . . ∩ Sn | E) = (P(E | S1 ∩ . . . ∩ Sn) / P(E)) · P(S1 ∩ . . . ∩ Sn).</p>
      <p>Finally, we substitute the degree of explanation and the coherence measure into Bayes Theorem:</p>
      <p>P_B(S | E) = d(S, E) · coh(S1, . . . , Sn) · P_B(S1) · . . . · P_B(Sn).</p>
      <p>The three factors of the formula represent the three features of assessing
scenarios: d(S, E) is the degree of explanation, coh(S1, . . . , Sn) the internal coherence, and
coh(S1, . . . , Sn) · P_B(S1) · . . . · P_B(Sn) = P_B(S) the fit with the background beliefs. The
best scenario given the available evidence is the scenario which strikes the best balance
between explaining the available evidence and fit to the general background beliefs. The internal
coherence can be seen as a part of the fit with the background beliefs. The best scenario is the most
likely, given the available evidence.</p>
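      <p>As a numerical sanity check (all numbers hypothetical), the decomposed formula agrees with plain Bayes Theorem, because coh(S1, . . . , Sn) · P(S1) · . . . · P(Sn) just is P(S):</p>
      <preformat>
```python
from math import prod

def coh(p_joint, p_individuals):
    # Shogenji coherence of the scenario's elements.
    return p_joint / prod(p_individuals)

def d(p_e_given_s, p_e):
    # Degree of explanation d(S, E) = P(E | S) / P(E).
    return p_e_given_s / p_e

p_elements = [0.5, 0.3, 0.1]   # P(S1), P(S2), P(S3), hypothetical
p_s = 0.02                     # P(S) = P(S1 ∩ S2 ∩ S3), hypothetical
p_e_given_s, p_e = 0.9, 0.1    # P(E | S) and P(E), hypothetical

bayes = d(p_e_given_s, p_e) * p_s                                    # P(S | E)
decomposed = d(p_e_given_s, p_e) * coh(p_s, p_elements) * prod(p_elements)
assert 1e-12 > abs(bayes - decomposed)
```
      </preformat>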
      <p>We show now that, given the available evidence, the probability of any best liability scenario S* coincides with the
probability that the defendant is liable: P(S* | E) = P(L | E). To see this, recall that any
liability scenario S implies the defendant’s liability L: S ⊆ L. Hence, P(S | E) ≤ P(L | E). In
light of the evidence, the probability of liability is an upper bound on the best liability scenario. We
can, furthermore, construct a best liability scenario. A best liability scenario S* implies the defendant’s
liability and is implied by the strengthening of the evidence which implies his liability: S* ⊆ L and
S* ⊇ E ∩ L. Such a best liability scenario is guaranteed to exist non-trivially just in case E ∩ L ≠ ∅.
The same argument can be run for any best non-liability scenario, where ¬L is the complement
of L. In a civil trial, we can always consider whether or not the defendant is liable instead of the best
liability and non-liability scenarios. This means it is a normatively valid strategy for the defense to
poke holes in the prosecution’s story without putting forth a particular story S ⊂ ¬L.3</p>
      <p>
        Our probabilification of the scenario approach straightforwardly leads to a Bayesian account of legal
proof if the goal is to minimize expected costs in legal fact-finding. As Cheng [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] points out, a true
finding of liability (f_L, L) and a true finding of non-liability (f_¬L, ¬L) do not incur any cost, where
f_L (f_¬L) denotes a finding of liability (non-liability). Erroneous
findings, however, incur costs. In "a civil trial, the legal system expresses no preference" between a
false finding of liability (f_L, ¬L) and a false finding of non-liability (f_¬L, L) (p. 1261). Hence, the fact-finder
faces the decision summarized by the decision matrix in Table 1, where c is the cost function and v is a
positive value.
      </p>
      <p>According to the decision-theoretic principle of minimizing expected costs, you should find the
defendant liable if</p>
      <p>P′(L) · c(f_L, L) + P′(¬L) · c(f_L, ¬L) &lt; P′(L) · c(f_¬L, L) + P′(¬L) · c(f_¬L, ¬L).</p>
      <p>Under the assumptions about the costs, the inequality simplifies to P′(¬L) · v &lt; P′(L) · v, which in turn
is equivalent to P′(L) &gt; 1/2. On our scenario account, P′(·) stands for P_B(· | E).
3 Cheng [6, p. 1262] observes that our normatively valid strategy does not align with legal practice: "it will not do for the
defendant’s theory to be “not plaintiff’s story.“ The defendant may offer multiple possible alternatives, but each of these
alternatives will be judged separately, not simultaneously." From a normative point of view, however, this "focus on stories is
almost certainly suboptimal, because it unnecessarily forces the factfinder to assess each story in isolation, rather than make
her best global guess." (pp. 1272-3) Judging the stories separately neither maximizes accuracy nor minimizes the expected
costs. This is a high price paid by legal practice if Cheng’s observation is right.</p>
      <p>Table 1. The decision matrix of a civil trial, where the columns give the actual state:
finding f_L: c(f_L, L) = 0, c(f_L, ¬L) = v;
finding f_¬L: c(f_¬L, L) = v, c(f_¬L, ¬L) = 0.</p>
      <p>Hence, you should find a defendant liable if P_B(L | E) &gt; 1/2. You should find a defendant liable if you judge his liability
more likely than his non-liability in light of the evidence and your background beliefs.</p>
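      <p>The expected-cost comparison behind the 1/2 threshold can be sketched as follows, where v is an arbitrary positive error cost:</p>
      <preformat>
```python
def expected_costs(p_l, v=1.0):
    """Expected costs of the two findings: correct findings cost 0, errors cost v."""
    cost_finding_liable = (1 - p_l) * v    # pay v exactly if the defendant is not liable
    cost_finding_not_liable = p_l * v      # pay v exactly if the defendant is liable
    return cost_finding_liable, cost_finding_not_liable

# Finding liable minimizes expected costs exactly when P_B(L | E) exceeds 1/2.
liable, not_liable = expected_costs(0.6)
assert not_liable > liable   # at P = 0.6, find the defendant liable
liable, not_liable = expected_costs(0.4)
assert liable > not_liable   # at P = 0.4, find the defendant not liable
```
      </preformat>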
      <p>
        Our account of legal proof also covers the beyond a reasonable doubt standard. For criminal cases,
we exchange the defendant’s liability L with his guilt G in the argument of the preceding paragraph.
A false finding of guilt is (much) more costly than a false finding of innocence in criminal cases:
c(f_¬G, G) &lt; c(f_G, ¬G) [6, p. 1275]. A wrongful finding of guilt is worse than a wrongful finding of innocence,
perhaps about ten times worse [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. You should find the defendant guilty if
      </p>
      <p>P_B(G | E) &gt; c(f_G, ¬G) / (c(f_G, ¬G) + c(f_¬G, G)).4</p>
      <p>For illustration, assume c(f_G, ¬G) = 10 and c(f_¬G, G) = 1. You should then find the defendant guilty if
P_B(G | E) &gt; 10 / (1 + 10) ≈ 0.91.</p>
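      <p>The threshold arithmetic can be checked in a few lines; the ten-to-one cost ratio is the illustrative assumption from above:</p>
      <preformat>
```python
def guilt_threshold(cost_false_guilt, cost_false_innocence):
    """P_B(G | E) must exceed c(f_G, ¬G) / (c(f_G, ¬G) + c(f_¬G, G))."""
    return cost_false_guilt / (cost_false_guilt + cost_false_innocence)

threshold = guilt_threshold(10, 1)   # ten-to-one error costs give 10/11
assert round(threshold, 2) == 0.91
```
      </preformat>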
      <p>
        Reasonable costs for true and false findings of guilt and innocence, respectively, provide a threshold for
the probability of a defendant’s guilt such that finding guilty minimizes expected costs. For further
details of this decision-theoretic argument, see Kaplan [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and Günther [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. We leave the question of
what the reasonable costs are for future research. Notably, our probabilified scenario account of legal proof
can be amended by several accounts using Bayesian networks [
        <xref ref-type="bibr" rid="ref19 ref20 ref21 ref22 ref23">19, 20, 21, 22, 23</xref>
        ].
      </p>
    </sec>
    <sec id="sec-8">
      <title>8. Normative Implications</title>
      <p>We have shown that our scenario account coincides with a Bayesian account of legal proof. Legal
verdicts are justified by the probability of liability and guilt, respectively. The probabilities after learning
the available evidence are determined via Bayes Theorem. Thereby, our scenario account inherits the
normative justification of Bayesianism in terms of Dutch book arguments [24] and epistemic utility
arguments [25]. Bayesianism confers further good-making features on our account such as avoiding
confirmation biases as well as the base rate and prosecutor’s fallacies [26].</p>
      <p>On our scenario account, the most likely scenario is the best: it strikes a best balance between
explaining the evidence and fit to general background beliefs, including being internally coherent. In
particular, our scenario account translates coherence into probability without any loss [27]. Unlike
coherence accounts, however, our account merely integrates the coherence internal to scenarios into
a wider account relying also on the degree of explanation and the scenarios’ fit with background
knowledge. All three dimensions of the scenario approach have their place in the overall probability
of liability (guilt) after learning the evidence by Bayes Theorem. Ultimately, the probability of the
defendant’s liability (guilt) is the decisive normative ground for a finding of liability (guilt).</p>
      <p>The ultimate normative ground poses a question for the original scenario approach and coherence
accounts more generally [28, 29]: why should any kind of coherence of a scenario, its capacity to
explain the available evidence, and its fit with the general background beliefs matter independently
of its probability? Suppose a liability scenario exhibits more internal coherence than a non-liability
scenario. However, the non-liability scenario is the best non-liability scenario and more likely than
not. In such a legal proceeding, we have a hard time seeing on what normative grounds we should find
the defendant liable. The same argument applies mutatis mutandis to the other two dimensions of the
scenario approach taken in isolation.</p>
      <p>4. Here is the proof, where P′ is the probability after learning the evidence, L the liability scenario,
c1 the cost of an erroneous finding of liability, and c2 the cost of an erroneous acquittal:
P′(¬L) · c1 &lt; P′(L) · c2 iff (1 − P′(L)) · c1 &lt; P′(L) · c2 iff
c1 − P′(L) · c1 &lt; P′(L) · c2 iff c1 &lt; P′(L) · c2 + P′(L) · c1 iff
c1 &lt; P′(L) · (c2 + c1) iff c1/(c2 + c1) &lt; P′(L).</p>
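The cost-threshold equivalence stated in footnote 4 can be checked numerically. The cost values below are purely illustrative assumptions: c1 stands for the cost of an erroneous finding of liability, c2 for the cost of an erroneous acquittal.

```python
# Illustrative costs (assumptions, not from the text): c1 = cost of an
# erroneous finding of liability, c2 = cost of an erroneous acquittal.
c1, c2 = 1.0, 1.0
threshold = c1 / (c1 + c2)  # with equal costs: 0.5, the preponderance standard

def should_find_liable(p):
    # Compare expected costs: finding liable errs with probability (1 - p),
    # acquitting errs with probability p.
    expected_cost_of_finding_liable = (1 - p) * c1
    expected_cost_of_acquitting = p * c2
    return expected_cost_of_acquitting > expected_cost_of_finding_liable

# The expected-cost rule agrees with the probability threshold at every point.
for i in range(101):
    p = i / 100
    assert should_find_liable(p) == (p > threshold)
```

With equal costs the rule reduces to the familiar more-likely-than-not standard; unequal costs shift the threshold accordingly, e.g. c1 much larger than c2 yields a threshold close to 1, in the spirit of the criminal standard.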
      <p>We invite the proponents of the original scenario approach and other coherence accounts to explain
how their theories deviate from ours and why this deviation is normatively justified. One could argue
that we missed the mark of a normatively justified account of legal proof and thereby explain one’s
deviation. But this argument must be a normative argument. It is not sufficient to cite current legal practice.</p>
      <p>
        Alternatively, one could say that one takes our normative justification on board (and, perhaps,
deviates from it slightly for practical purposes). If so, a loose end remains. Our probabilified scenario
account still succumbs to the problem of statistical evidence [30]—and so does any account that takes our
normative justification on board. We could save our account by the broadly Bayesian solution offered
by Günther [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] in terms of justified belief. It remains an open question for now whether the coherence
accounts could be similarly amended. The problem of statistical evidence, the proposed solutions, and
their drawbacks, deserve their own paper.
      </p>
      <p>A probabilistic account of legal proof faces the challenge of explaining the origin and justification of
probability assignments. Where, for example, does the prior probability of a scenario come from? On
the present account, the prior probability is analysed into the product of the measures of coherence and
of fit with general background beliefs, thus making some progress towards an explanation. Generally
speaking, we develop a purely normative account, and are thus inclined to see a probabilistic account of
legal proof as more of a regulative ideal than a hands-on recipe for legal practice. Nevertheless, we
are also open to accounts spelling out the details: minimally requiring probabilistic coherence, and
additionally suggesting plausible objective Bayesian norms alongside probabilified legal norms like the
presumption of innocence. Perhaps the problem of the priors can be solved by the constraint that the
initial probability of liability and guilt, respectively, is exactly 1/2. But these issues require a paper of
their own.</p>
      <p>Finally, a note on the conjunction paradox is in order. This alleged paradox arises when the allegation
to be proven requires only that two or more claims be individually proven for a finding of guilt
or liability to be warranted [31]. For illustration, consider a civil case where the law code requires
that two claims, A and B, must each be proven to be more likely than not. Now, it is possible that the
probability of A and of B, respectively, is greater than 1/2, even though the probability of the conjunction
A ∩ B is less than 1/2. So it is possible on our account of legal proof that the probability of each element
of the charge is above the threshold while the probability of the charge as a whole is not. The question
is whether or not the defendant should then be found guilty.</p>
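A minimal numerical instance of the paradox, assuming purely for illustration that the two claims are probabilistically independent and have the hypothetical probabilities below:

```python
# Two claims with hypothetical probabilities; independence is assumed
# purely for illustration.
p_A, p_B = 0.6, 0.7
p_conjunction = p_A * p_B  # probability of the conjunction under independence

# Each element clears the more-likely-than-not threshold of 1/2 ...
assert p_A > 0.5 and p_B > 0.5
# ... but the charge as a whole does not.
assert not p_conjunction > 0.5
print(p_conjunction)
```

Independence is the simplest case; positive dependence between the claims raises the conjunction’s probability, but it can still fall below the threshold that each conjunct clears.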
      <p>The conjunction paradox does not arise on our normative account of legal proof [32]. Our account
implies that only the probability of the charge as a whole should matter. It does not require merely that
the elements or claims of an allegation be proven individually. A defendant should be found liable
or guilty just in case the entire allegation, the charge as a whole, meets the respective standard of proof.</p>
      <p>The conjunction paradox is only a problem for descriptive legal probabilism. If existing law requires
only that each element of a case meets the relevant standard of proof, there may be findings of liability
(guilt) even though the probability that the defendant is liable (guilty) is below the threshold. If so, our
account provides a normative reason to revise the existing law: the entire allegation must meet the
relevant standard of proof. This would be a sensible and modest improvement to existing legal systems,
where only each element must be proven in isolation.</p>
    </sec>
    <sec id="sec-9">
      <title>9. Conclusion</title>
      <p>
        We have probabilified the scenario approach to legal proof. The most likely scenario is best: it strikes
the best balance between explaining the available evidence and fitting with the general background beliefs.
Internal coherence is a part of the fit with our background beliefs. Our probabilistic account makes
precise how the three dimensions of the original scenario approach are to be weighted. It is, furthermore,
equivalent to a Bayesian account of legal proof, inheriting a robust normative foundation. Our account
is proof that reconciling the scenario approach and the Bayesian approach is possible. There is no need
to put forth the scenario approach as a normative competitor to the Bayesian approach [
        <xref ref-type="bibr" rid="ref1 ref3">28, 29, 1, 3</xref>
        ]. We
hope our results provide a further refinement of and a normative foundation for the already successfully
applied scenario approach.
      </p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on Generative AI</title>
      <p>No generative AI was used in the production of this work.</p>
      <sec id="sec-10-1">
        <title>References (continued)</title>
        <p>[22] C. S. Vlek, H. Prakken, S. Renooij, B. Verheij, Building Bayesian networks for legal evidence with
narratives: a case study evaluation, Artificial Intelligence and Law 22 (2014) 375–421.
[23] B. Verheij, F. Bex, S. T. Timmer, C. S. Vlek, J.-J. C. Meyer, S. Renooij, H. Prakken, Arguments,
scenarios and probabilities: Connections between three normative frameworks for evidential
reasoning, Law, Probability and Risk 15 (2016) 35–70.
[24] S. Vineberg, Dutch book arguments, in: E. N. Zalta, U. Nodelman (Eds.), The Stanford Encyclopedia
of Philosophy, Fall 2022 ed., Metaphysics Research Lab, Stanford University, 2022.
[25] R. Pettigrew, Epistemic utility arguments for epistemic norms, in: E. N. Zalta, U. Nodelman (Eds.),
The Stanford Encyclopedia of Philosophy, Summer 2024 ed., Metaphysics Research Lab, Stanford
University, 2024.
[26] W. C. Thompson, E. L. Schumann, Interpretation of statistical evidence in criminal trials, Law and
Human Behavior 11 (1987) 167–187.
[27] C. Dahlman, A. R. Mackor, Coherence and probability in legal evidence, Law, Probability and Risk
18 (2019) 275–294.
[28] P. Thagard, Coherence in thought and action, MIT Press, 2002.
[29] A. Amaya, Coherence, evidence, and legal proof, Legal Theory 19 (2013) 1–43.
[30] M. Redmayne, Exploring the proof paradoxes, Legal Theory 14 (2008) 281–309.
[31] L. J. Cohen, The Probable and the Provable, Clarendon Press, Oxford, 1977.
[32] B. Hedden, M. Colyvan, Legal probabilism: a qualified defence, Journal of Political Philosophy 27
(2019) 448–468.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>P. J. van Koppen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Mackor</surname>
          </string-name>
          ,
          <article-title>A scenario approach to the Simonshaven case</article-title>
          ,
          <source>Topics in Cognitive Science</source>
          <volume>12</volume>
          (
          <year>2020</year>
          )
          <fpage>1132</fpage>
          -
          <lpage>1151</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Mackor</surname>
          </string-name>
          ,
          <article-title>Different ways of being naked: a scenario approach to the naked statistical evidence problem</article-title>
          ,
          <source>Journal of Applied Logics</source>
          <volume>8</volume>
          (
          <year>2021</year>
          )
          <fpage>2407</fpage>
          -
          <lpage>2432</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Mackor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jellema</surname>
          </string-name>
          ,
          <string-name>
            <surname>P. J. van Koppen</surname>
          </string-name>
          ,
          <article-title>Explanation-based approaches to reasoning about evidence and proof in criminal trials</article-title>
          , in: B.
          <string-name>
            <surname>Brożek</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Hage</surname>
          </string-name>
          , N. Vincent (Eds.),
          <article-title>Law and Mind: a Survey of Law and the Cognitive Sciences</article-title>
          , Cambridge University Press,
          <year>2021</year>
          , pp.
          <fpage>431</fpage>
          -
          <lpage>470</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Pardo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Allen</surname>
          </string-name>
          ,
          <article-title>Juridical proof and the best explanation</article-title>
          ,
          <source>Law and Philosophy</source>
          <volume>27</volume>
          (
          <year>2008</year>
          )
          <fpage>223</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Allen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Pardo</surname>
          </string-name>
          ,
          <article-title>Relative plausibility and its critics</article-title>
          ,
          <source>The International Journal of Evidence &amp; Proof</source>
          <volume>23</volume>
          (
          <year>2019</year>
          )
          <fpage>5</fpage>
          -
          <lpage>59</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] E. Cheng, Reconceptualizing the burden of proof,
          <source>Yale Law Journal</source>
          <volume>122</volume>
          (
          <year>2013</year>
          )
          <fpage>1254</fpage>
          -
          <lpage>1279</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N.</given-names>
            <surname>Pennington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hastie</surname>
          </string-name>
          ,
          <article-title>The story model for juror decision making</article-title>
          , Cambridge University Press,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W. C.</given-names>
            <surname>Salmon</surname>
          </string-name>
          ,
          <article-title>Statistical explanation</article-title>
          , in: R. G. Colodny (Ed.),
          <article-title>The Nature</article-title>
          and
          <article-title>Function of Scientific Theories: Essays in Contemporary Science</article-title>
          and Philosophy, University of Pittsburgh Press,
          <year>1970</year>
          , pp.
          <fpage>173</fpage>
          -
          <lpage>231</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Psillos</surname>
          </string-name>
          ,
          <article-title>Scientific realism: how science tracks truth</article-title>
          , Routledge, New York,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lipton</surname>
          </string-name>
          ,
          <article-title>Inference to the best explanation</article-title>
          , in: W. Newton-Smith (Ed.),
          <article-title>A Companion to the Philosophy of Science</article-title>
          , Blackwell,
          <year>2000</year>
          , pp.
          <fpage>184</fpage>
          -
          <lpage>193</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Williamson</surname>
          </string-name>
          , Abductive philosophy,
          <source>Philosophical Forum</source>
          <volume>47</volume>
          (
          <year>2016</year>
          )
          <fpage>263</fpage>
          -
          <lpage>280</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>I.</given-names>
            <surname>Douven</surname>
          </string-name>
          , Abduction, in: E. N. Zalta (Ed.),
          <source>The Stanford Encyclopedia of Philosophy</source>
          , summer 2021 ed., Metaphysics Research Lab, Stanford University,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tversky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kahneman</surname>
          </string-name>
          ,
          <article-title>Availability: a heuristic for judging frequency and probability</article-title>
          ,
          <source>Cognitive Psychology</source>
          <volume>5</volume>
          (
          <year>1973</year>
          )
          <fpage>207</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bar-Hillel</surname>
          </string-name>
          ,
          <article-title>The base-rate fallacy in probability judgments</article-title>
          ,
          <source>Acta Psychologica</source>
          <volume>44</volume>
          (
          <year>1980</year>
          )
          <fpage>211</fpage>
          -
          <lpage>233</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Shogenji</surname>
          </string-name>
          , Is coherence truth conducive?,
          <source>Analysis</source>
          <volume>59</volume>
          (
          <year>1999</year>
          )
          <fpage>338</fpage>
          -
          <lpage>345</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>W.</given-names>
            <surname>Blackstone</surname>
          </string-name>
          ,
          <article-title>Commentaries on the laws of England: in four books</article-title>
          , volume
          <volume>2</volume>
          ,
          <string-name>
            <given-names>J.B.</given-names>
            <surname>Lippincott</surname>
          </string-name>
          ,
          <year>1753</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          ,
          <article-title>Decision theory and the factfinding process</article-title>
          ,
          <source>Stanford Law Review</source>
          <volume>20</volume>
          (
          <year>1968</year>
          )
          <fpage>1065</fpage>
          -
          <lpage>1092</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Günther</surname>
          </string-name>
          ,
          <article-title>Legal proof should be justified belief of guilt</article-title>
          ,
          <source>Legal Theory</source>
          <volume>30</volume>
          (
          <issue>3</issue>
          ) (
          <year>2024</year>
          )
          <fpage>129</fpage>
          -
          <lpage>141</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>N.</given-names>
            <surname>Fenton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Neil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Lagnado</surname>
          </string-name>
          ,
          <article-title>A general structure for legal arguments about evidence using Bayesian networks</article-title>
          ,
          <source>Cognitive Science</source>
          <volume>37</volume>
          (
          <year>2013</year>
          )
          <fpage>61</fpage>
          -
          <lpage>102</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>L. Van Leeuwen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Verbrugge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Verheij</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Renooij</surname>
          </string-name>
          ,
          <source>Building a stronger case: 3rd International Conference on Hybrid Human-Artificial Intelligence</source>
          , HHAI
          <year>2024</year>
          (
          <year>2024</year>
          )
          <fpage>291</fpage>
          -
          <lpage>299</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>N.</given-names>
            <surname>Fenton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Neil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lagnado</surname>
          </string-name>
          ,
          <article-title>Analyzing the Simonshaven case using Bayesian networks</article-title>
          ,
          <source>Topics in Cognitive Science</source>
          <volume>12</volume>
          (
          <year>2020</year>
          )
          <fpage>1092</fpage>
          -
          <lpage>1114</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>