<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Weak Completion Semantics and Counter Examples</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Meghna Bhadra</string-name>
          <email>meghnabhadra8@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Steffen Hölldobler</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>North Caucasus Federal University</institution>
          ,
          <addr-line>Stavropol, Russian Federation</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>TU Dresden</institution>
          ,
          <addr-line>01062 Dresden</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>60</fpage>
      <lpage>73</lpage>
      <abstract>
        <p>An experiment has revealed that if the antecedent of a conditional sentence is denied, then most participants conclude that the negation of the consequent holds. However, a significant number of participants answered nothing follows if the antecedent of the conditional sentence was non-necessary. The weak completion semantics correctly models the answers of the majority, but cannot explain the number of nothing follows answers. In this paper we extend the weak completion semantics by counter examples. The extension allows us to explain the experimental findings.</p>
      </abstract>
      <kwd-group>
        <kwd>Conditional Reasoning</kwd>
        <kwd>Denial of Antecedent</kwd>
        <kwd>Weak Completion Semantics</kwd>
        <kwd>Counter Examples</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Conditional sentences are propositions of the form if A then C, where A and C are
atomic sentences called the antecedent and the consequent, respectively. Four kinds of
conditional inference tasks have long been a common object of psychological research:
1. Affirmation of the antecedent (AA): if A then C and A, therefore C.
2. Denial of the antecedent (DA): if A then C and ¬A, therefore ¬C.
3. Affirmation of the consequent (AC): if A then C and C, therefore A.
4. Denial of the consequent (DC): if A then C and ¬C, therefore ¬A.
In classical, two-valued propositional logic, conditional sentences are taken to mean
material implications and biconditionals to mean (material) equivalence. The
conclusions for DA and AC are hence considered logical fallacies (invalid) for
a conditional sentence, whereas they are considered valid for a biconditional. When
replacing the above abstract conditional sentences with everyday ones, however, the
inferences largely depend on the semantics and pragmatics of human communication,
culture, and context. In this paper, we therefore discuss how everyday conditional
sentences can be categorized into four proposed semantic categories. We also share the
results of an experiment reported in [
        <xref ref-type="bibr" rid="ref4 ref5">5,4</xref>
        ] and (with particular regard to DA)
demonstrate how such classifications can help model an average human (DA) reasoner.
⋆ The authors are listed in alphabetical order.
      </p>
      <p>Copyright © 2021 for this paper by its authors. Use permitted under
Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        The Weak Completion Semantics (WCS) is a three-valued, non-monotonic
cognitive theory. It not only adequately models the suppression task of [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] as shown
by [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], human syllogistic reasoning as shown by [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], and DC inferences as shown by
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], but also AA, AC, and the majority ¬C answers for DA as shown by [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
While the existing framework of the WCS adequately models the general consensus of
¬C responses in the DA inference task, it does not seem
adequate for modeling the number of nothing follows (nf) responses, which is especially
significant for conditional sentences with non-necessary antecedents. Here,
nothing follows denotes that no new inference or specific conclusion can be drawn with regard
to the consequent of the conditional sentence.
      </p>
      <p>In order to elaborate on what it really means for a conditional sentence to have a
non-necessary antecedent, and to propose a solution to the aforementioned problem, we
begin by considering the following DA inference tasks:
1. If Maria is drinking alcoholic beverages in a pub, then Maria must be over 19 years
of age and Maria is not drinking alcoholic beverages in a pub.
2. If the plants get water, then they will grow and the plants get no water.
Both of these examples appeared in the aforementioned experiment and, as was the
case for every conditional sentence included in the experiment, were accompanied
by a small background story. A curious reader may find the background stories in the
Appendix. In the first example, 28 out of 56 participants answered Maria must not be
over 19 years of age, whereas 25 answered nf. In this example the antecedent is
non-necessary: it is not considered necessary for a person to drink alcohol in order to
be older than 19. There are many people who do not drink alcoholic beverages although
they are over 19 years of age. In the second example, 47 out of 56 participants answered
the plants will not grow, whereas only 8 answered nf. In this case, the antecedent is
necessary: plants do not grow without water. Table 3 gives a complete account of this
experimental data.</p>
      <p>Based on this observation, we propose an extension which allows the WCS to account
for the nf answers. In Example 1, the existing framework of the WCS creates a model
where, given that Maria is not drinking alcoholic beverages, it can be concluded that
Maria is not older than 19 years of age. With the proposed extension, however, a
counter example can be constructed based on a possible observation that Maria is not
drinking alcoholic beverages and yet is older than 19 years of age. This leads to
an alternative model which, compared with the former model under skeptical
reasoning, leads to the conclusion that it is unknown whether Maria is older than 19 years of
age. In Example 2, the WCS creates a model where, given that the plants do not get water,
it can be concluded that they will not grow. But in this case, a counter example does not
readily exist.</p>
      <p>
        The paper is organized as follows: In the next section we formally introduce the
WCS. A classification of conditional sentences is given in Section 3. The experiment is
described in Section 4. We demonstrate how the WCS models the general consensus in
Section 5. The search for counter examples is presented in Section 6. Counter examples
are modeled in the WCS in Section 7. Finally, in Section 8 we conclude and outline
further possible research.
We assume the reader to be familiar with logic and logic programming as presented,
e.g., in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Let ⊤, ⊥, and U be truth constants denoting true, false, and unknown,
respectively. A (logic) program is a finite set of clauses of the form B ← body, where B
is an atom and body is ⊤, or ⊥, or a finite, non-empty set of literals. Clauses of the form
B ← ⊤, B ← ⊥, and B ← L1, . . . , Ln are called facts, assumptions, and rules,
respectively, where Li, 1 ≤ i ≤ n, are literals. We restrict our attention to propositional
programs, although the WCS extends to first-order programs as well [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Throughout this paper, P will denote a program. An atom B is defined in P iff P
contains a clause of the form B ← body. As an example consider the program
Pc = {C ← A ∧ ¬ab, ab ← ⊥},
where A, C, and ab are atoms. C and ab are defined, whereas A is undefined. ab is
an abnormality predicate which is assumed to be false. In the WCS, this program
represents the conditional sentence if A then C. In their everyday lives humans are often
required to reason in situations where the information about all factors affecting the
situation might not be complete. They still reason, unless new information which needs
consideration comes to light. The abnormality predicate in the program serves the
purpose of this (default) assumption, as was suggested in [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
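      <p>To make the definitions concrete, the following is a minimal Python sketch (with illustrative names, not the authors' implementation) of how a program such as Pc can be represented: a clause pairs a head atom with a body, where a body is either a truth constant or a list of signed literals.</p>
      <p>
```python
# Sketch of a WCS program representation (names are illustrative).
# A body is TOP, BOT, or a list of literals; a literal is (atom, positive?).
TOP, BOT = "TOP", "BOT"

# Pc = {C ← A ∧ ¬ab,  ab ← ⊥}
P_c = [
    ("C", [("A", True), ("ab", False)]),  # rule: C ← A ∧ ¬ab
    ("ab", BOT),                          # assumption: ab ← ⊥
]

def defined_atoms(program):
    """An atom is defined iff it occurs as the head of some clause."""
    return {head for head, _ in program}
```
      </p>
      <p>Here C and ab are defined while A is not, matching the example above.</p>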
      <p>
        Consider the following transformation: (1) For all defined atoms B occurring in P,
replace all clauses of the form B ← body1, B ← body2, . . . by B ← body1 ∨ body2 ∨
. . . . (2) Replace all occurrences of ← by ↔. The resulting set of equivalences is called
the weak completion of P. It differs from the program completion defined in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] in
that undefined atoms in the weakly completed program are not mapped to false, but to
unknown instead. Weak completion is necessary for the WCS framework to adequately
model the suppression task (and other reasoning tasks), as demonstrated in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
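      <p>The two transformation steps can be sketched as follows (a Python sketch with illustrative names; the bodies of each defined atom are collected and read disjunctively, while undefined atoms receive no equivalence and thus stay unknown):</p>
      <p>
```python
# Sketch of the weak completion transformation.
TOP, BOT = "TOP", "BOT"

def weak_completion(program):
    """Return {head: [body1, body2, ...]}, read as head ↔ body1 ∨ body2 ∨ ..."""
    wc = {}
    for head, body in program:
        wc.setdefault(head, []).append(body)  # step (1): join bodies per head
    return wc                                 # step (2): read ← as ↔

# Pc = {C ← A ∧ ¬ab,  ab ← ⊥}
P_c = [("C", [("A", True), ("ab", False)]), ("ab", BOT)]
```
      </p>
      <p>For Pc this yields equivalences for C and ab only; the undefined atom A is absent and therefore remains unknown.</p>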
      <p>
        As shown in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], each weakly completed program admits a least model under
the three-valued Łukasiewicz logic [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] (see Table 1). This model will be denoted by
MP. It can be computed as the least fixed point of a semantic operator introduced
in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. Let P be a program and I be a three-valued interpretation represented by the
pair ⟨I⊤, I⊥⟩, where I⊤ and I⊥ are the sets of atoms mapped to true and false by
I, respectively, and atoms which are not listed are mapped to unknown. We define
ΦP I = ⟨J⊤, J⊥⟩,4 where
      <p>J⊤ = {B | there is B ← body ∈ P and I body = ⊤},
J⊥ = {B | there is B ← body ∈ P and for all B ← body ∈ P we find I body = ⊥} .</p>
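      <p>The semantic operator and its least fixed point can be sketched in Python (illustrative names, not the authors' code). Conjunctive bodies are evaluated under the Łukasiewicz truth ordering, with conjunction as the minimum of F, U, T:</p>
      <p>
```python
# Sketch of the semantic operator ΦP and its least fixed point.
# An interpretation is a pair (true_set, false_set); unlisted atoms are unknown.
TOP, BOT = "TOP", "BOT"

def body_value(body, I):
    """Evaluate a body (TOP, BOT, or conjunction of literals) under I."""
    if body == TOP: return "T"
    if body == BOT: return "F"
    t, f = I
    vals = []
    for atom, pos in body:
        v = "T" if atom in t else "F" if atom in f else "U"
        if not pos:                      # negation swaps T and F, keeps U
            v = {"T": "F", "F": "T", "U": "U"}[v]
        vals.append(v)
    order = {"F": 0, "U": 1, "T": 2}
    return min(vals, key=order.get)      # conjunction = minimum

def phi(program, I):
    """ΦP I = (J_true, J_false) as defined in the text."""
    heads = {h for h, _ in program}
    J_true = {h for h, b in program if body_value(b, I) == "T"}
    J_false = {h for h in heads
               if all(body_value(b, I) == "F" for h2, b in program if h2 == h)}
    return (J_true, J_false)

def least_model(program):
    """Iterate ΦP from the empty interpretation up to the least fixed point."""
    I = (set(), set())
    while True:
        J = phi(program, I)
        if J == I: return I
        I = J
```
      </p>
      <p>For Pc the fixed point maps ab to false and leaves A and C unknown, as the text describes.</p>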
      <p>
        Following [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] we consider an abductive framework ⟨P, AP, IC, |=wcs⟩, where P is a
program, AP = {B ← ⊤ | B is undefined in P} ∪ {B ← ⊥ | B is undefined in P}
is the set of abducibles, IC is a finite set of integrity constraints, and MP |=wcs F iff
MP maps the formula F to true. Let O be an observation, i.e., a finite set of literals
each of which does not follow from MP. We apply abduction to explain O, where O is
called explainable in the abductive framework ⟨P, AP, IC, |=wcs⟩ iff there exists a
non-empty X ⊆ AP, called an explanation, such that MP∪X |=wcs L for all L ∈ O and
MP∪X satisfies IC. We have assumed that explanations are non-empty as otherwise
the observation would already follow from the weak completion of the program. A formula F
follows credulously from P and O iff there exists an explanation X for O such that
MP∪X |=wcs F. F follows skeptically from P and O iff O can be explained and for
all explanations X for O we find MP∪X |=wcs F. The latter is an application of the
so-called Gricean implicature [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]: humans normally do not quantify over things which do
not exist. Meaning, (unlike in classical logic) all explanations for an observation O may
only be taken into account to skeptically decide on a formula F when O is explainable
and these explanations exist in the first place. If a formula F does not follow
skeptically from P and O, we conclude nothing follows. Furthermore, one should also
observe that if an observation O cannot be explained, then nothing follows credulously
as well as skeptically. In all examples discussed in this paper the set of integrity
constraints is empty; integrity constraints are not relevant to the goal of this paper. However,
they are needed in other applications of the WCS like human disjunctive reasoning [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
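      <p>Abducibles, explanations, and skeptical entailment can be sketched as follows (Python, illustrative names; a brute-force enumeration of non-empty subsets of abducibles, which is only feasible for the small programs considered here, with an empty set of integrity constraints as in this paper):</p>
      <p>
```python
from itertools import chain, combinations

TOP, BOT = "TOP", "BOT"

def body_value(body, I):
    if body == TOP: return "T"
    if body == BOT: return "F"
    t, f = I
    order = {"F": 0, "U": 1, "T": 2}
    vals = []
    for atom, pos in body:
        v = "T" if atom in t else "F" if atom in f else "U"
        if not pos: v = {"T": "F", "F": "T", "U": "U"}[v]
        vals.append(v)
    return min(vals, key=order.get)

def least_model(program):
    I = (set(), set())
    while True:
        heads = {h for h, _ in program}
        J = ({h for h, b in program if body_value(b, I) == "T"},
             {h for h in heads
              if all(body_value(b, I) == "F" for h2, b in program if h2 == h)})
        if J == I: return I
        I = J

def abducibles(program):
    """A_P: a fact and an assumption for every undefined atom."""
    heads = {h for h, _ in program}
    atoms = heads | {a for _, b in program if b not in (TOP, BOT) for a, _ in b}
    return [(a, tc) for a in sorted(atoms - heads) for tc in (TOP, BOT)]

def holds(literal, model):
    atom, pos = literal
    return atom in model[0] if pos else atom in model[1]

def explanations(program, observation):
    """All non-empty X with M_{P ∪ X} entailing every literal of the observation."""
    A = abducibles(program)
    subsets = chain.from_iterable(combinations(A, k) for k in range(1, len(A) + 1))
    return [set(X) for X in subsets
            if all(holds(L, least_model(program + list(X))) for L in observation)]

def follows_skeptically(program, observation, formula):
    Xs = explanations(program, observation)
    return bool(Xs) and all(holds(formula, least_model(program + list(X))) for X in Xs)
```
      </p>
      <p>For Pc with observation C (the AC case), the only explanations add the fact A ← ⊤, so A follows skeptically, while ¬A does not.</p>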
      <p>
        Given premises, general knowledge, and observations, reasoning in the WCS is
currently modeled in five steps:
1. Reasoning towards a logic program P following [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
2. Weakly completing the program.
3. Computing the least model MP of the weak completion of P under the
three-valued Łukasiewicz logic.
4. Reasoning with respect to MP.
5. If observations cannot be explained, then applying skeptical abduction using the
specified set of abducibles.
      </p>
      <p>
        In Section 5 we will explain how these five steps work in the case of the DA reasoning
tasks considered in this paper. More examples can be found, for example, in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] or [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]
or [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
4 Whenever we apply a unary operator like ΦP to an argument like I, we omit parentheses
and write ΦP I instead. Likewise, we write I body instead of I(body).
      </p>
    </sec>
    <sec id="sec-2">
      <title>3 A Classification of Conditional Sentences</title>
      <sec id="sec-2-1">
        <title>3.1 Obligation versus Factual Conditionals</title>
        <p>
          Following [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], we call a conditional sentence an obligation conditional if the truth of the
consequent appears to be obligatory given that its antecedent is true. For each obligation
conditional there are two initial possibilities humans think about. The first possibility
is the conjunction of the antecedent and the consequent which is permitted. The
second possibility is the conjunction of the antecedent and the negation of the consequent
which is forbidden. Exceptions are possible but unlikely. This can be exemplified by
Example 1. In many countries the law demands that a person may only drink alcohol
publicly when they are above a certain age (for example, 19 years). This implies
that Maria is drinking alcoholic beverages in a pub and she is older than 19 years is a
permitted possibility, whereas Maria is drinking alcoholic beverages in a pub and she
is not older than 19 years is a forbidden one. Hence, if Maria is drinking alcoholic
beverages in a pub, then Maria must be over 19 years of age is an obligation conditional.
Concerning Example 2, plants get water and plants are growing is a permitted
possibility. But plants get water and plants are not growing is also possible; it holds
in particular if a plant is watered too much, and there are many other factors, like
lack of light or pest infestation, which may hinder growth. Hence, if
the plants get water, then they will grow is not an obligation conditional.
        </p>
        <p>
          Obligation conditionals may have different sources. They may be based on legal
laws like Example 1 and are often called deontic conditionals, in which case words
like must, should, or ought may be explicitly used in the conditional sentence. Their
usage, however, does not seem mandatory in everyday communication and is often
skipped. Knowledge or awareness that the consequent is obligatory given
the antecedent suffices in these cases, and yields the same responses as when explicitly
denoting the obligation. Obligation conditionals may also express moral or social
obligations like if somebody’s parents are elderly, then he/she should look after them [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
Other obligation conditionals are based on causal or physical laws which hold on our
planet like if an object is not supported, then it will fall to the ground. In each case, the
conjunction of the antecedent and the consequent is permitted, whereas the conjunction
of the antecedent and the negation of the consequent is forbidden.
        </p>
        <p>On the other end of the spectrum, if the consequent of a conditional sentence is
not obligatory given the antecedent, then it is called a factual conditional. In particular,
the truth of the antecedent is inconsequential to that of the consequent; that is, (even) if
the antecedent is true, the consequent may or may not be true. This has already been
exemplified using Example 2. The conditional if the plants get water, then they will
grow is a factual one. As another example consider the conditional sentence if Maria
is over 19 years, then she may drink alcoholic beverages in a pub. This sentence is a
factual one, because given that the atomic proposition Maria is over 19 years is true, one
can imagine two permitted possibilities, one where Maria drinks alcoholic beverages and
another where Maria does not drink alcoholic beverages in a pub.</p>
      </sec>
      <sec id="sec-2-2">
        <title>3.2 Necessary versus Non-Necessary Antecedents</title>
        <p>As discussed in the previous section, the obligation or factual nature of a conditional
sentence indicates whether the consequent is obligatory or simply possible provided the
antecedent is satisfied. The question that may naturally arise at this point is: what happens
when the antecedent of a conditional sentence is not satisfied? To that end, the
antecedent A of a conditional sentence if A then C is said to be necessary with respect
to the consequent C, if and only if C cannot be true unless A is true. This implies that
if A does not hold, C cannot either. For example, in Example 2 plants get water is a
necessary antecedent for plants will grow. If a plant is not watered at all, it will very
likely die.</p>
        <p>The above does not imply, however, that the antecedent need always be a
precondition for the consequent per se. The antecedent A of a conditional sentence if A then C
is said to be non-necessary with respect to the consequent C if C can be true
irrespective of the truth or falsity of A. In particular this implies that if A does not hold, C may
or may not hold. In Example 1 the falsity of drinking alcoholic beverages in a pub is
inconsequential to the truth of the consequent older than 19 years. There are plenty
of adults (over 19 years) who do not drink alcohol. The antecedent of the conditional
sentence if Maria is drinking alcoholic beverages in a pub, then Maria must be over 19
years of age, in Example 1, is therefore called non-necessary.</p>
      </sec>
      <sec id="sec-2-3">
        <title>3.3 Pragmatics</title>
        <p>
          Generally, humans may recognize conditional sentences as obligation or factual and
antecedents as necessary or non-necessary. This leads to an informal and pragmatic
classification of four kinds: obligation conditional with necessary antecedent (ON) or
non-necessary antecedent (ONN), and factual conditional with necessary antecedent (FN) or
non-necessary antecedent (FNN). For an abstract conditional if A then C, without an
everyday context, the classification of the conditional into any of the aforementioned
kinds would be as discussed in the above section. The classification of everyday
conditionals, however, often depends on pragmatics: the context, the background knowledge,
and the experience of a person. For example, the conditional sentence if it is cloudy, then it
is raining discussed in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] may be classified as an obligation conditional with
necessary antecedent by people living in Java, whereas it may be classified as a factual
conditional by people living in Central Europe. In another example [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], the authors
conducted an experiment where they categorized the proposition if it’s heated, then this
butter will melt as a biconditional. In particular they considered if butter is not heated, it
will not melt. This corresponds to a necessary antecedent in our setting. While some of
their subjects also gave it the same classification, many considered it possible that even
if butter is not heated (explicitly), it may still melt. This implies that they considered the
antecedent to be non-necessary.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>3.4 Handling Classifications in WCS</title>
        <p>The classification of conditional sentences can be taken into account by extending the
definition of the set of abducibles:</p>
        <p>AeP = AP ∪ AnnP ∪ AfP ,
where AP is as defined above,</p>
        <p>AnnP = {C ← ⊤ | C is the head of a rule occurring in P representing a
conditional sentence with non-necessary antecedent}, and</p>
        <p>AfP = {ab ← ⊤ | ab occurs in the body of a rule occurring in P
representing a factual conditional}.</p>
        <p>The set AnnP contains facts for the consequents of conditional sentences with
non-necessary antecedents. As mentioned earlier, if an antecedent of a conditional sentence
is non-necessary, then the truth of the consequent does not depend on the truth of the
antecedent. The abducible C ← ⊤ therefore implies that there may be other unknown
reasons for establishing the consequent of the conditional sentence.</p>
        <p>The set AfP contains facts for the abnormality predicates occurring in the bodies of
the (logic program) representation of factual conditionals. Owing to the factual nature
of a conditional sentence, the antecedent of the conditional may be true while its
consequent may not hold, due to various reasons which we might broadly call
abnormalities. As mentioned earlier, considerations of other plausible factors at play might
override our default assumption that these abnormalities are false. Once we weakly
complete our program, the abducible ab ← ⊤ shall cause the abnormality predicate to
become true and its negation to become false. Hence, the body of the clause containing
its negation will be false, causing the consequent to be false in turn.5 Table 2 illustrates
how the set of abducibles can be extended for each classification.</p>
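        <p>The construction of the extended set AeP can be sketched as follows (Python, illustrative names; the per-rule classification labels ON, ONN, FN, FNN are supplied by hand, as in the experiment, and abnormality atoms are identified by a naming convention, which is an assumption of this sketch):</p>
        <p>
```python
TOP, BOT = "TOP", "BOT"

def base_abducibles(program):
    """A_P: a fact and an assumption for every undefined atom."""
    heads = {h for h, _ in program}
    atoms = heads | {a for _, b in program if b not in (TOP, BOT) for a, _ in b}
    return {(a, tc) for a in atoms - heads for tc in (TOP, BOT)}

def extended_abducibles(program, classification):
    """AeP = AP ∪ AnnP ∪ AfP.

    classification maps a rule's head to 'ON', 'ONN', 'FN', or 'FNN'.
    """
    A = set(base_abducibles(program))
    for head, body in program:
        kind = classification.get(head)
        if kind in ("ONN", "FNN"):          # non-necessary antecedent
            A.add((head, TOP))              # abducible C ← ⊤
        if kind in ("FN", "FNN") and body not in (TOP, BOT):
            for atom, pos in body:          # factual conditional
                if not pos and atom.startswith("ab"):   # naming convention
                    A.add((atom, TOP))      # abducible ab ← ⊤
    return A
```
        </p>
        <p>For the program representing Example 1's conditional (classified ONN), this yields the three abducibles a ← ⊤, a ← ⊥, and o ← ⊤ used later in the paper.</p>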
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4 An Experiment</title>
      <p>
        In [
        <xref ref-type="bibr" rid="ref4 ref5">5,4</xref>
        ] an experiment concerning conditional reasoning is described, where 56
logically naive participants were tested on an online website (Prolific, prolific.co).
The participants were restricted to Central Europe and Great Britain in order to have a similar
background knowledge about weather etc. It was also assumed that the participants had
not received any education in logic beyond high school training. The participants were
first presented with a story followed by a first assertion (a conditional premise) and a
second assertion (a (possibly negated) atomic premise). Finally, for each problem they
had to answer the question “What follows?”. Both parts were presented simultaneously.
5 This technique is used in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] to represent an enabling relation and model the suppression effect.
In particular, a library not being open prevents a person from studying in it.
      </p>
      <p>The participants responded by clicking one of the answer options. They could take as
much time as they needed. Participants acted as their own controls.</p>
      <p>The participants carried out 48 problems, consisting of the 12 conditionals listed in
the Appendix combined with all four inference types (AA, DA, AC, DC). They could select
one of three responses: nothing follows (nf), the fact that had not been presented in the
second premise, and the negation of this fact. E.g., in the case of DA, the first assertion
was of the form if A then C, the second assertion was ¬A, and they could answer C,
¬C, or nf. It should also be mentioned that the classification of the conditional
sentences into the four aforementioned kinds was done by the authors of the experiment.</p>
      <p>As an example consider the following short scenario from the experiment: Peter has
a lawn in front of his house. He is keen to make sure that the grass on the lawn does not
dry out, so whenever it has been dry for multiple days, he turns on the sprinkler to water
the lawn. Along with this context the conditional sentence if it rains, then the lawn is
wet and the negated atomic proposition it does not rain were provided. The participants
were given three choices of answers: the lawn is wet, the lawn is not wet, and nothing
follows.</p>
      <p>
        As mentioned earlier, the WCS could well explain the findings of the experiment
in the cases AA, AC, and DC (see [
        <xref ref-type="bibr" rid="ref4 ref5">5,4</xref>
        ]), but failed to explain the findings in the case
of DA. The data is shown in Table 3, where the total number of selected responses as
well as the median response time (in milliseconds) for ¬C (Mdn ¬C) and nf (Mdn nf )
responses are listed.
      </p>
      <p>Everyday contexts for the DA inference task elicited a high response rate of about
78% (525 out of 672) for ¬C, but in case of nf the rate varied from 8% (14 out of 168)
up to 33% (56 out of 168). The number of participants answering C appears negligible.
Until now, the WCS could predict the ¬C answered by the majority of the
participants, but it could not yet model the significant number of nf responses. We now
propose a solution to the latter. Before we elaborate further, one might first observe that
in our data nf was answered much more often for conditional sentences with
non-necessary antecedents than for conditional sentences with necessary ones
(30% vs. 8%, Wilcoxon signed rank, W = 0, p &lt; .001). More importantly, the reader
may observe that when the classification of the antecedents changed from necessary to
non-necessary, the number of ¬C responses decreased to 225 and nf increased to (a
significant) 101. The goal of this paper is to extend the WCS in order to model this
observed phenomenon.</p>
    </sec>
    <sec id="sec-4">
      <title>5 Modeling the General Consensus</title>
      <p>As shown in Table 3, the majority of the participants always answered ¬C when given
the premises if A then C and ¬A, no matter how the conditional sentence was classified.
To illustrate how the WCS models the majority consensus, let us consider Example 2 ((8) in
Table 3). Assuming it is known that the plants do not get water, we obtain the program
P1 = {g ← w ∧ ¬ab1, ab1 ← ⊥, w ← ⊥},
where g and w denote that the plants will grow and the plants get water, respectively,
and ab1 is an abnormality predicate. Weakly completing P1 we obtain
{g ↔ w ∧ ¬ab1, ab1 ↔ ⊥, w ↔ ⊥},</p>
      <p>whose least model is
MP1 = ⟨∅, {g, ab1, w}⟩,
where nothing is true, and g, ab1, and w are all false. As mentioned earlier, because
water (the antecedent) is generally considered to be necessary for the growth of a plant
(the consequent), the falsity of w allows us to falsify g. Hence, we conclude that the
plants will not grow.</p>
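      <p>Under the earlier sketch of the semantic operator (Python, illustrative names), the least model of P1 can be computed directly and coincides with MP1 above:</p>
      <p>
```python
TOP, BOT = "TOP", "BOT"

def body_value(body, I):
    if body == TOP: return "T"
    if body == BOT: return "F"
    t, f = I
    order = {"F": 0, "U": 1, "T": 2}
    vals = []
    for atom, pos in body:
        v = "T" if atom in t else "F" if atom in f else "U"
        if not pos: v = {"T": "F", "F": "T", "U": "U"}[v]
        vals.append(v)
    return min(vals, key=order.get)

def least_model(program):
    I = (set(), set())
    while True:
        heads = {h for h, _ in program}
        J = ({h for h, b in program if body_value(b, I) == "T"},
             {h for h in heads
              if all(body_value(b, I) == "F" for h2, b in program if h2 == h)})
        if J == I: return I
        I = J

# P1 = {g ← w ∧ ¬ab1,  ab1 ← ⊥,  w ← ⊥}
P1 = [("g", [("w", True), ("ab1", False)]), ("ab1", BOT), ("w", BOT)]
```
      </p>
      <p>The iteration first falsifies ab1 and w, and in the next step g, reaching the fixed point ⟨∅, {g, ab1, w}⟩.</p>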
    </sec>
    <sec id="sec-5">
      <title>6 Extending WCS to Search for Counter Examples</title>
      <p>But the general consensus to answer ¬C when given the premises if A then C and ¬A
is sometimes only barely met. Reconsider Example 1 ((5) in Table 3): 28 out of 56
participants answered ¬C, whereas 25 participants answered nf. In general, the increase
in the number of nf responses occurs when the classification of the antecedent of the
conditional sentence changes from necessary to non-necessary. This is because, unlike
a necessary antecedent, a non-necessary one makes room for counter examples where,
even if the antecedent does not hold, the consequent might still hold. For example, if
Maria is not drinking alcoholic beverages in a pub she may nevertheless be over 19
years of age. Maria may simply abstain from alcohol. One should observe that this
cannot be modeled within the WCS so far, even if the set of abducibles is extended
from AP to AeP, as in the case of DA no abductive reasoning takes place.</p>
      <p>
        In this paper we propose to extend the WCS by adding a sixth step to the procedure
presented in Section 2:
1. Reasoning towards a logic program P following [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
2. Weakly completing the program.
3. Computing the least model MP of the weak completion of P under the
three-valued Łukasiewicz logic.
4. Reasoning with respect to MP.
5. If observations cannot be explained, then applying skeptical abduction using the
specified set of abducibles.
      <sec id="sec-5-1">
        <title>6. Search for counter examples.</title>
        <p>
          The sixth step corresponds to the validation step in the mental model theory [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] in that
alternative models falsifying a putative conclusion are searched for. In the case of DA,
¬C may be considered as the putative conclusion generated by steps 1 to 5. In the
sixth step, using the extended set of abducibles AeP, the extended procedure searches
for models where ¬A is true but ¬C is not. If such models are found, then skeptical
reasoning with respect to all constructed models is applied. This will be illustrated in
the next section.
        </p>
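        <p>The sixth step can be sketched as follows (Python, illustrative names, reusing the least-model computation from above; single abducibles suffice as explanations for the examples in this paper, which is an assumption of this sketch):</p>
        <p>
```python
TOP, BOT = "TOP", "BOT"

def body_value(body, I):
    if body == TOP: return "T"
    if body == BOT: return "F"
    t, f = I
    order = {"F": 0, "U": 1, "T": 2}
    vals = []
    for atom, pos in body:
        v = "T" if atom in t else "F" if atom in f else "U"
        if not pos: v = {"T": "F", "F": "T", "U": "U"}[v]
        vals.append(v)
    return min(vals, key=order.get)

def least_model(program):
    I = (set(), set())
    while True:
        heads = {h for h, _ in program}
        J = ({h for h, b in program if body_value(b, I) == "T"},
             {h for h in heads
              if all(body_value(b, I) == "F" for h2, b in program if h2 == h)})
        if J == I: return I
        I = J

def counter_examples(program, ext_abducibles, antecedent, consequent):
    """Step 6 (sketch): find models where ¬A stays true but ¬C does not hold."""
    models = []
    for abducible in ext_abducibles:
        M = least_model(program + [abducible])
        if antecedent in M[1] and consequent in M[0]:
            models.append(M)
    return models
```
        </p>
        <p>For Example 1, only the abducible o ← ⊤ yields such an alternative model, as the next section shows.</p>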
      </sec>
    </sec>
    <sec id="sec-6">
      <title>7 Modeling Counter Examples</title>
      <p>In order to illustrate how the WCS, along with its extension, can explain the significant
number of nf answers in case of non-necessary antecedents, we return to Example 1
and assume that Maria is not drinking alcoholic beverages in a pub. In the WCS this is
formalized by</p>
      <p>P2 = {o ← a ∧ ¬ab2, ab2 ← ⊥, a ← ⊥},
where o and a denote that Maria is over 19 years old and she is drinking alcoholic
beverages, respectively, and ab2 is an abnormality predicate which is initially assumed
to be false. As the weak completion of P2 we obtain
{o ↔ a ∧ ¬ab2, ab2 ↔ ⊥, a ↔ ⊥},</p>
      <p>whose least model is
MP2 = ⟨∅, {a, ab2, o}⟩.</p>
      <p>Here, a, ab2, and o are all false. An average reasoner following this approach will draw
the conclusion Maria is not over 19 years old and stop reasoning at this point. This
accounts for the 28 ¬C responses for this conditional in our data.</p>
      <p>Classifying an antecedent as non-necessary, however, would allow the consequent
to be true or false despite the falsity of the former. In other words, recognizing an
antecedent as non-necessary, might allow humans to consider two possibilities: Maria
does not drink alcohol in a pub and she is younger than 19 years, and Maria does not
drink alcohol in a pub but she is older than 19. Hence, for such a reasoner the extended
WCS not only creates the previous model MP2 , where o is mapped to false, but also
searches for a counter example by considering o as a possible observation that needs to
be explained. As mentioned earlier in Section 3.4, because the conditional sentence is
classified as an obligation conditional with non-necessary antecedent, the extended set
of abducibles for P2 is</p>
      <p>AᵉP2 = {a ← ⊤, a ← ⊥, o ← ⊤}.</p>
      <p>The abducible {o ← ⊤} can be used as a minimal explanation for the observation o.
Hence, adding this abducible to P2 leads to</p>
      <p>P3 = {o ← a ∧ ¬ ab2, ab2 ← ⊥, a ← ⊥, o ← ⊤},
whose weak completion is
{o ↔ (a ∧ ¬ ab2) ∨ ⊤, ab2 ↔ ⊥, a ↔ ⊥}
and whose least model is</p>
      <p>MP3 = ⟨{o}, {a, ab2}⟩.</p>
      <p>Here o is true, whereas a and ab2 are false. As o is false in the model MP2 but true in
MP3 , reasoning skeptically WCS concludes nf . This accounts for the 25 nf responses
for this conditional sentence in our data. Similar counter examples can be constructed
for the examples (4), (6), and (10)-(12) depicted in Table 3 which explain the nf answers
given by a significant number of participants. But similar counter examples cannot be
constructed for the remaining examples (1)-(3) and (7)-(9).
8</p>
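The skeptical step over the two least models given in the text can be sketched as follows. This is a toy illustration with names of our own choosing; models are encoded as pairs of true-atoms and false-atoms:

```python
# The two least models from the text, encoded as (true-atoms, false-atoms):
MP2 = (set(), {'a', 'ab2', 'o'})     # o is false: "Maria is not over 19"
MP3 = ({'o'}, {'a', 'ab2'})          # counter example: o is true

def skeptical(atom, models):
    """Return a truth value only if the atom takes that value in every
    model; otherwise nothing follows."""
    vals = {'true' if atom in pos else 'false' if atom in neg else 'unknown'
            for pos, neg in models}
    return vals.pop() if len(vals) == 1 else 'nothing follows'

print(skeptical('o', [MP2, MP3]))    # nothing follows
print(skeptical('a', [MP2, MP3]))    # false (a is false in both models)
```

Because o differs between the two models, the skeptical answer for the consequent is nothing follows, while a, being false in both, remains false.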
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>
        In this paper, we have presented how the WCS, along with its proposed extension, can
adequately model the average human reasoner in case of the DA inference task. Whereas
the majority consensus of ¬C suggests reasoners who might not have considered the
necessity or non-necessity of the antecedent, the significant number of nf answers
suggests reasoners who might have. Accordingly, we have discussed how the classification
of conditional sentences and their antecedents helps gain insight into how humans
understand or recognize conditional sentences. This not only allows us to model the
DA reasoning task but also to model the average human reasoner in case of the AA, AC,
and DC. In case of the AC (as in the DA), reasoners might recognize the antecedent
as non-necessary, which influences their response. In case of the DC, it is possibly the
obligation or factual nature of the conditional sentence which is taken into consideration
(see [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]).
      </p>
      <p>The case for the AA, however, seems to be a ceiling effect, as an overwhelming
majority of our responses were C (640 out of 672). The WCS models this majority well,
which indicates that the conditional sentences were taken to be obligatory by
most reasoners, meaning that when A was affirmed, they simply concluded C. Nonetheless,
we also realize that in case of factual conditionals, where C may or may not hold
although A does, a reasoner might respond nothing follows. This, however, is not
significantly reflected in the current data. Consider a seemingly strange yet everyday
conditional sentence uttered by humans, viz. if I take an umbrella then it will not rain.
Affirming the antecedent, i.e., I take an umbrella, the conclusion it will not rain seems
arguable. Given the factual nature of the conditional, it seems plausible that skeptical
reasoners may respond with nothing follows. The WCS can also account for these reasoners.
Such a discussion motivates further research about the AA and why humans accord with
the response C so easily. Coming back to the DA, if we were to deny the antecedent
instead, that is, I do not take an umbrella, then although most reasoners might respond
¬C, meaning it will rain, the WCS with the extension proposed in this paper can account
for these reasoners as well as for those who choose to respond skeptically that
nothing follows.</p>
      <p>Acknowledgement</p>
      <p>We thank Marcos Cramer for many fruitful discussions.</p>
    </sec>
    <sec id="sec-8">
      <title>Appendix: Conditionals of the Experiment</title>
      <sec id="sec-8-1">
        <title>Obligation Conditionals with Necessary Antecedent (ON)</title>
        <p>(1) If it rains, then the roofs must be wet.
(2) If water in the cooking pot is heated over 99 °C, then the water starts boiling.
(3) If the wind is strong enough, then the sand is blowing over the dunes.</p>
      </sec>
      <sec id="sec-8-2">
        <title>Obligation Conditionals with Non-Necessary Antecedent (ONN)</title>
        <p>(4) If Paul rides a motorbike, then Paul must wear a helmet.
(5) If Maria is drinking alcoholic beverages in a pub, then Maria must be over 19 years
of age.
(6) If it rains, then the lawn must be wet.</p>
      </sec>
      <sec id="sec-8-3">
        <title>Factual Conditionals with Necessary Antecedent (FN)</title>
        <p>(7) If the library is open, then Sabrina is studying late in the library.
(8) If the plants get water, then they will grow.
(9) If my car’s start button is pushed, then the engine will start running.</p>
      </sec>
      <sec id="sec-8-4">
        <title>Factual Conditionals with Non-Necessary Antecedent (FNN)</title>
        <p>(10) If Nancy rides her motorbike, then Nancy goes to the mountains.
(11) If Lisa plays on the beach, then Lisa will get sunburned.
(12) If Ron scores a goal, then Ron is happy.</p>
        <p>
          The classification was done by the authors of [
          <xref ref-type="bibr" rid="ref4 ref5">5,4</xref>
          ]. One should observe that for
each obligation conditional the conjunction of the antecedent and the negation of the
consequent is usually considered to be a forbidden possibility, whereas this does not
hold for each factual conditional. Likewise, in each case of a non-necessary antecedent
one can easily come up with a different reason for the consequent to hold, whereas this
is not the case for each of the necessary antecedents.
        </p>
      </sec>
      <sec id="sec-8-5">
        <title>Short Background Story for Example 1</title>
        <p>Maria and her friends are visiting a local pub to enjoy the evening with drinks and good
food. Maria knows the local rules and regulations and obeys them.</p>
      </sec>
      <sec id="sec-8-6">
        <title>Short Background Story for Example 2</title>
        <p>The Presleys have moved into their newly built house and have hired a gardener to lay
out the garden. They are sitting on their terrace and are looking at the bushes, small
trees, and shrubs which were planted by the gardener two months ago.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Byrne</surname>
            ,
            <given-names>R.M.J.:</given-names>
          </string-name>
          <article-title>Suppressing valid inferences with conditionals</article-title>
          .
          <source>Cognition</source>
          <volume>31</volume>
          (
          <issue>1</issue>
          ),
          <fpage>61</fpage>
          -
          <lpage>83</lpage>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Byrne</surname>
            ,
            <given-names>R.M.J.:</given-names>
          </string-name>
          <article-title>The Rational Imagination: How People Create Alternatives to Reality</article-title>
          . MIT Press, Cambridge, MA, USA (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Clark</surname>
            ,
            <given-names>K.L.</given-names>
          </string-name>
          :
          <article-title>Negation as failure</article-title>
          . In: Gallaire,
          <string-name>
            <given-names>H.</given-names>
            ,
            <surname>Minker</surname>
          </string-name>
          ,
          <string-name>
            <surname>J</surname>
          </string-name>
          . (eds.) Logic and Databases, pp.
          <fpage>293</fpage>
          -
          <lpage>322</lpage>
          . Plenum, New York (
          <year>1978</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Cramer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Hölldobler,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Ragni</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          :
          <article-title>Human conditional reasoning</article-title>
          . https: //tu-dresden.de/ing/informatik/ki/krr/ressourcen/dateien/ chr2021b.pdf/view (
          <year>2021</year>
          ), accepted at NMR2021
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Cramer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Hölldobler,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Ragni</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          :
          <article-title>When are humans reasoning with modus tollens?</article-title>
          <source>Proceedings of the Annual Conference of the Cognitive Science Society</source>
          ,
          <volume>43</volume>
          ,
          <fpage>2337</fpage>
          -
          <lpage>2343</lpage>
          (
          <year>2021</year>
          ), retrieved from https://escholarship.org/uc/item/9x33q50g
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Dietz</surname>
            ,
            <given-names>E.A.</given-names>
          </string-name>
          , Hölldobler,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Ragni</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.:</surname>
          </string-name>
          <article-title>A computational logic approach to the suppression task</article-title>
          .
          <source>Proceedings of the Annual Conference of the Cognitive Science Society</source>
          ,
          <volume>34</volume>
          ,
          <fpage>1500</fpage>
          -
          <lpage>1505</lpage>
          (
          <year>2012</year>
          ), retrieved from https://escholarship.org/uc/item/2sd6d61q
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Fitting</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <source>First-Order Logic and Automated Theorem Proving</source>
          . Springer-Verlag, Berlin, 2nd edn. (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Grice</surname>
            ,
            <given-names>H.P.</given-names>
          </string-name>
          :
          <article-title>Logic and conversation</article-title>
          . In: Cole,
          <string-name>
            <surname>P.</surname>
          </string-name>
          , Morgan,
          <string-name>
            <surname>J.L</surname>
          </string-name>
          . (eds.)
          <source>Syntax and Semantics</source>
          , vol.
          <volume>3</volume>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>58</lpage>
          . Academic Press, New York (
          <year>1975</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Hamada</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , Hölldobler, S.:
          <article-title>On disjunctions and the weak completion semantics</article-title>
          . https://tu-dresden.de/ing/informatik/ki/krr/ressourcen/ dateien/hh2021.pdf/view (
          <year>2021</year>
          ), accepted at MathPsych/ICCM2021
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. Hölldobler, S.:
          <article-title>Weak completion semantics and its applications in human reasoning</article-title>
          . In: Furbach,
          <string-name>
            <given-names>U.</given-names>
            ,
            <surname>Schon</surname>
          </string-name>
          , C. (eds.)
          <article-title>Bridging 2015 - Bridging the Gap between Human and Automated Reasoning</article-title>
          .
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>1412</volume>
          , pp.
          <fpage>2</fpage>
          -
          <lpage>16</lpage>
          . CEUR-WS.org (
          <year>2015</year>
          ), http://ceur-ws.org/Vol-1412/
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11. Hölldobler,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Kencana Ramli</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.D.P.</surname>
          </string-name>
          :
          <article-title>Logic programs under three-valued Łukasiewicz's semantics</article-title>
          . In: Hill,
          <string-name>
            <given-names>P.M.</given-names>
            ,
            <surname>Warren</surname>
          </string-name>
          , D.S. (eds.)
          <source>Logic Programming. Lecture Notes in Computer Science</source>
          , vol.
          <volume>5649</volume>
          , pp.
          <fpage>464</fpage>
          -
          <lpage>478</lpage>
          . Springer-Verlag Berlin Heidelberg (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Johnson-Laird</surname>
            ,
            <given-names>P.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Byrne</surname>
            ,
            <given-names>R.M.J.</given-names>
          </string-name>
          : Deduction. Lawrence Erlbaum Associates, Hove and London (UK) (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Johnson-Laird</surname>
            ,
            <given-names>P.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Byrne</surname>
            ,
            <given-names>R.M.J.:</given-names>
          </string-name>
          <article-title>Conditionals: A theory of meaning, pragmatics, and inference</article-title>
          .
          <source>Psychological Review</source>
          <volume>109</volume>
          ,
          <fpage>646</fpage>
          -
          <lpage>678</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Kakas</surname>
            ,
            <given-names>A.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kowalski</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toni</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Abductive Logic Programming</article-title>
          .
          <source>Journal of Logic and Computation</source>
          <volume>2</volume>
          (
          <issue>6</issue>
          ),
          <fpage>719</fpage>
          -
          <lpage>770</lpage>
          (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Khemlani</surname>
            ,
            <given-names>S.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Byrne</surname>
            ,
            <given-names>R.M.J.</given-names>
          </string-name>
          , Johnson-Laird,
          <string-name>
            <surname>P.N.</surname>
          </string-name>
          :
          <article-title>Facts and possibilities: A model-based theory of sentential reasoning</article-title>
          . Cognitive Science pp.
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Lloyd</surname>
            ,
            <given-names>J.W.</given-names>
          </string-name>
          :
          <article-title>Foundations of Logic Programming</article-title>
          . Springer-Verlag (
          <year>1984</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Łukasiewicz</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>O logice trójwartościowej</article-title>
          .
          <source>Ruch Filozoficzny</source>
          <volume>5</volume>
          ,
          <fpage>169</fpage>
          -
          <lpage>171</lpage>
          (
          <year>1920</year>
          ),
          <article-title>English translation: On Three-Valued Logic</article-title>
          . In: Jan Łukasiewicz Selected Works. (L. Borkowski, ed.), North Holland,
          <fpage>87</fpage>
          -
          <lpage>88</lpage>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Oliviera da Costa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dietz Saldanha</surname>
            ,
            <given-names>E.A.</given-names>
          </string-name>
          , Hölldobler,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Ragni</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.:</surname>
          </string-name>
          <article-title>A computational logic approach to human syllogistic reasoning</article-title>
          .
          <source>Proceedings of the Annual Conference of the Cognitive Science Society</source>
          ,
          <volume>39</volume>
          ,
          <fpage>883</fpage>
          -
          <lpage>888</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Stenning</surname>
          </string-name>
          , K.,
          <string-name>
            <surname>van Lambalgen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Semantic interpretation as computation in nonmonotonic logic: The real meaning of the suppression task</article-title>
          .
          <source>Cognitive Science</source>
          <volume>29</volume>
          ,
          <fpage>919</fpage>
          -
          <lpage>960</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Stenning</surname>
          </string-name>
          , K.,
          <string-name>
            <surname>van Lambalgen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Human Reasoning and Cognitive Science</article-title>
          . MIT Press (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>