<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Monadic Reasoning using Weak Completion Semantics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>International Center for Computational Logic, TU Dresden, Germany</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>North-Caucasus Federal University</institution>
          ,
          <addr-line>Stavropol, Russian Federation</addr-line>
        </aff>
      </contrib-group>
      <abstract>
<p>A recent meta-analysis by Khemlani and Johnson-Laird showed that the conclusions humans draw in psychological experiments about syllogistic reasoning deviate from the conclusions of classical logic. Moreover, the predictions of none of the current cognitive theories fit the empirical data. In this paper, a Computational Logic analysis clarifies seven principles necessary to draw the inferences. We propose a modular approach towards these principles and show how human syllogistic reasoning can be modeled under a new cognitive theory, the Weak Completion Semantics.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
<p>Table 1 (the four moods, their natural-language reading, their short notation and their formalization in FOL):
affirmative universal: all a are b (Aab): ∀X(a(X) → b(X))
affirmative existential: some a are b (Iab): ∃X(a(X) ∧ b(X))
negative universal: no a are b (Eab): ∀X(a(X) → ¬b(X))
negative existential: some a are not b (Oab): ∃X(a(X) ∧ ¬b(X))</p>
      <p>Table 2 (the four figures, i.e. the orders in which the entities occur in the two premises): figure 1: a-b, b-c; figure 2: b-a, c-b; figure 3: a-b, c-b; figure 4: b-a, b-c.</p>
      <p>Consider, for example, the premises ‘some b are a’ (Iba) and ‘no b are c’ (Ebc).
In experiments, participants are normally expected to complete the syllogism by drawing a logical consequence
from the first two premises, e.g. in this example ‘some a are not c’. The participants’ given response – the
conclusion – is evaluated as true if it can be derived in classical first-order logic (FOL), otherwise as false. The
four quantifiers and their formalization in FOL are given in Table 1. The entities can appear in four different
orders called figures as shown in Table 2. Hence, a problem can be completely specified by the quantifiers of the
first and second premise and the figure. The example discussed above is denoted by IE4. Altogether, there are
64 syllogisms and, if formalized in FOL, we can compute their logical consequences in classical logic. However,
the meta-analysis by Khemlani and Johnson-Laird [KJ12], based on six experiments, has shown that humans
not only systematically deviate from the predictions of FOL but also from those of any of the twelve other cognitive theories.
In the case of IE4, besides the above-mentioned logical consequence, a significant number of humans answered
‘no valid conclusion’, which does not hold in FOL, as ‘some a are not c’ follows from IE4.</p>
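<p>The enumeration of the 64 problems can be made concrete in a few lines. The following sketch is our own (not from the paper) and assumes the figure conventions of Table 2:</p>

```python
# Enumerate all 64 syllogisms as mood/figure codes such as "IE4" and render
# their premises in natural language (figure conventions as in Table 2).
MOODS = {"A": "all {0} are {1}", "I": "some {0} are {1}",
         "E": "no {0} are {1}", "O": "some {0} are not {1}"}
FIGURES = {1: (("a", "b"), ("b", "c")), 2: (("b", "a"), ("c", "b")),
           3: (("a", "b"), ("c", "b")), 4: (("b", "a"), ("b", "c"))}

def premises(code):
    """Return the two natural-language premises of a syllogism code like 'IE4'."""
    m1, m2, fig = code[0], code[1], int(code[2])
    p1, p2 = FIGURES[fig]
    return (MOODS[m1].format(*p1), MOODS[m2].format(*p2))

syllogisms = [m1 + m2 + str(f) for m1 in "AIEO" for m2 in "AIEO"
              for f in (1, 2, 3, 4)]
print(len(syllogisms))   # 64
print(premises("IE4"))   # the example discussed above
```

<p>For the example IE4 this yields the premises ‘some b are a’ and ‘no b are c’.</p>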
<p>After we discuss our solution for syllogistic reasoning, we compare the predictions under WCS with the results of FOL, the syntactic rule-based theory PSYCOP [Rip94], the Verbal Models Theory [PN95] and the Mental Model Theory [JL83]. The two model-based theories performed best in the meta-analysis [KJ12].</p>
    </sec>
    <sec id="sec-2">
      <title>Weak Completion Semantics</title>
      <sec id="sec-2-1">
        <title>Logic Programs</title>
<p>A (logic) program P is a finite set of clauses of the form A ← ⊤, A ← ⊥ or A ← B1 ∧ . . . ∧ Bn, n &gt; 0, where A is an atom, the Bi, 1 ≤ i ≤ n, are literals, and ⊤ and ⊥ denote truth and falsehood, respectively. Clauses are assumed to be universally closed. A is called the head, and ⊤, ⊥ as well as B1 ∧ . . . ∧ Bn are called the body of the corresponding clause. Clauses of the form A ← ⊤ and A ← ⊥ are called facts and assumptions, respectively. ¬A is assumed in P iff P contains an assumption with head A and no other clause with head A occurs in P. We restrict terms to be constants or variables only, i.e. we consider so-called data logic programs. For each P the underlying alphabet consists precisely of the symbols occurring in P, and we assume that non-propositional programs contain at least one constant.</p>
<p>gP denotes the set of all ground instances of clauses occurring in P, where a ground instance of a clause C is obtained from C by replacing each variable occurring in C by a constant. A ground atom A is defined in gP iff gP contains a clause whose head is A; otherwise A is said to be undefined. def(A, P) = { A ← Body | A ← Body ∈ gP } is called the definition of A in P. The interested reader is referred to, e.g., [Höl09, Llo84] for more details about classical logic and logic programs.</p>
      </sec>
      <sec id="sec-2-2">
<title>Three-Valued Łukasiewicz Logic</title>
<p>We consider the three-valued Łukasiewicz logic [Luk20], for which the corresponding truth values are true (⊤), false (⊥) and unknown (U). A three-valued interpretation I is a mapping from the set of formulas to the set {⊤, ⊥, U}. The truth value of a given formula under I is determined according to the truth tables in Table 3. We represent an interpretation as a pair I = ⟨I⊤, I⊥⟩ of disjoint sets of ground atoms, where I⊤ is the set of all atoms that are mapped to ⊤ by I, and I⊥ is the set of all atoms that are mapped to ⊥ by I. Atoms which do not occur in I⊤ ∪ I⊥ are mapped to U. Let I = ⟨I⊤, I⊥⟩ and J = ⟨J⊤, J⊥⟩ be two interpretations: I ⊆ J iff I⊤ ⊆ J⊤ and I⊥ ⊆ J⊥. I(F) = ⊤ means that the formula F is mapped to true under I. M is a model of P if it is an interpretation which maps each clause occurring in gP to ⊤. I is the least model of P iff for any other model J of P it holds that I ⊆ J.</p>
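<p>The connectives of Table 3 admit a compact numeric reading, mapping ⊤, U, ⊥ to 1, 1/2, 0. The following encoding is our own sketch, not taken from the paper:</p>

```python
# Three-valued Lukasiewicz connectives via the numeric encoding
# TOP = 1, UNKNOWN = 0.5, BOT = 0.
TOP, U, BOT = 1.0, 0.5, 0.0

def neg(x):      return 1.0 - x
def conj(x, y):  return min(x, y)              # conjunction: minimum
def disj(x, y):  return max(x, y)              # disjunction: maximum
def impl(x, y):  return min(1.0, 1.0 - x + y)  # Lukasiewicz implication
def equiv(x, y): return conj(impl(x, y), impl(y, x))

# Characteristic feature of Lukasiewicz logic: U -> U and U <-> U are true.
print(impl(U, U), equiv(U, U))  # 1.0 1.0
```

<p>The last line shows why weakly completed clauses with unknown bodies and unknown heads evaluate to true under this logic.</p>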
      </sec>
      <sec id="sec-2-3">
        <title>Integrity Constraints</title>
<p>An integrity constraint is an expression of the form U ← Body, where Body is a conjunction of literals and U denotes the unknown. An interpretation I maps an integrity constraint U ← Body to ⊤ iff I(Body) ∈ {⊥, U}. Given an interpretation I and a finite set IC of integrity constraints, I satisfies IC iff all clauses occurring in IC are true under I.</p>
      </sec>
      <sec id="sec-2-4">
        <title>Forms of Reasoning</title>
<p>The philosopher Peirce identified three forms of reasoning [PHW74]: deduction, induction and abduction. We focus here on deduction and abduction. For deduction we use the semantic operator ΦP defined below. For abduction we search for explanations of some observation, inspired by ideas introduced in Logic Programming.</p>
      </sec>
      <sec id="sec-2-5">
        <title>Reasoning with Respect to Least Models</title>
<p>For a given program P, consider the following transformation: (1) For each ground atom A which is defined in gP, replace all clauses A ← Body1, . . . , A ← Bodym occurring in gP by A ← Body1 ∨ . . . ∨ Bodym. (2) Replace all occurrences of ← by ↔. The obtained set of formulas is called the weak completion of P, or wcP.</p>
<p>It has been shown by [HK09b] that programs as well as their weak completions admit a least model under three-valued Łukasiewicz logic. Moreover, the least model of wcP can be obtained as the least fixed point of the following semantic operator, which is due to Stenning and van Lambalgen [SvL08]: Let I = ⟨I⊤, I⊥⟩ be an interpretation. ΦP(I) = ⟨J⊤, J⊥⟩, where</p>
        <p>J⊤ = {A | A ← Body ∈ def(A, P) and I(Body) = ⊤},
J⊥ = {A | def(A, P) ≠ ∅ and I(Body) = ⊥ for all A ← Body ∈ def(A, P)}.</p>
        <p>The Weak Completion Semantics (WCS) is the approach to consider weakly completed programs, to compute their least models, and to reason with respect to these models. We write P |=wcs F iff the formula F holds in the least fixed point of ΦP (which is the least model of wcP).</p>
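<p>For ground (propositional) programs the operator ΦP and its least fixed point can be implemented directly. The following is a minimal sketch under our own clause encoding ((head, body) pairs, "~" marking negated atoms, "TOP"/"BOT" standing for ⊤/⊥); it reproduces the example program used later for the doubleNeg principle:</p>

```python
# Semantic operator Phi_P for ground programs and its least fixed point,
# computed by iterating from the empty interpretation I = (pos, neg).
TRUE, FALSE, UNKNOWN = "T", "F", "U"

def lit_value(lit, I):
    pos, neg = I
    if lit == "TOP": return TRUE
    if lit == "BOT": return FALSE
    if lit.startswith("~"):
        a = lit[1:]
        return FALSE if a in pos else TRUE if a in neg else UNKNOWN
    return TRUE if lit in pos else FALSE if lit in neg else UNKNOWN

def body_value(body, I):
    # conjunction: false dominates, then unknown, else true
    vals = [lit_value(l, I) for l in body]
    if FALSE in vals: return FALSE
    if UNKNOWN in vals: return UNKNOWN
    return TRUE

def phi(program, I):
    heads = {h for h, _ in program}
    j_pos = {h for h, b in program if body_value(b, I) == TRUE}
    j_neg = {h for h in heads
             if all(body_value(b, I) == FALSE
                    for hh, b in program if hh == h)}
    return (j_pos, j_neg)

def least_model(program):
    I = (set(), set())
    while True:
        J = phi(program, I)
        if J == I:
            return I
        I = J

# P = {b <- ~a, c <- ~b, a <- TOP} (the doubleNeg example of Section 3.1.7)
P = [("b", ["~a"]), ("c", ["~b"]), ("a", ["TOP"])]
print(least_model(P))  # ({'a', 'c'}, {'b'})
```

<p>The computed least model ⟨{a, c}, {b}⟩ agrees with the one given in the discussion of the doubleNeg principle below.</p>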
      </sec>
      <sec id="sec-2-6">
        <title>Backward Reasoning with Abduction</title>
<p>Abduction is a reasoning process that searches for explanations, given a program and some observations which do not follow from the program [KKT93]. Explanations are usually restricted to certain formulas called abducibles. The set of abducibles w.r.t. P is</p>
        <p>AP = {A ← ⊤ | A is undefined in gP} ∪ {A ← ⊥ | A is undefined in gP} ∪ {A ← ⊤ | ¬A is assumed in gP}.
An abductive framework consists of a program P, a finite set AP of abducibles, a finite set IC of integrity constraints, and an entailment relation. Let ⟨P, AP, IC, |=wcs⟩ be an abductive framework, E ⊆ AP, and O a non-empty set of literals called observation. An observation O = {o1, . . . , on} is explained by E given P and IC iff P ∪ E |=wcs o1 ∧ . . . ∧ on and P ∪ E |=wcs IC. O is explained given P and IC iff there exists an E such that O is explained by E given P and IC. We prefer subset-minimal explanations: an explanation E is subset-minimal iff there is no explanation E′ such that E′ ⊂ E.</p>
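<p>Subset-minimal explanations can be found by a brute-force search over subsets of abducibles of increasing size. The sketch below is ours and, for brevity, stands in a naive definite-clause evaluator for the |=wcs entailment check:</p>

```python
# Brute-force search for subset-minimal explanations. Abducibles are atoms
# that may be added as facts; entailment is approximated by forward chaining
# over (head, body) clauses whose bodies are lists of atoms (or "TOP").
from itertools import combinations

def entails(program, atom):
    derived, changed = set(), True
    while changed:
        changed = False
        for h, body in program:
            if h not in derived and all(b == "TOP" or b in derived for b in body):
                derived.add(h)
                changed = True
    return atom in derived

def minimal_explanations(program, abducibles, observation):
    expls = []
    for r in range(len(abducibles) + 1):
        for E in combinations(abducibles, r):
            extended = program + [(a, ["TOP"]) for a in E]
            if all(entails(extended, o) for o in observation):
                # keep only subset-minimal explanations
                if not any(set(p) <= set(E) for p in expls):
                    expls.append(E)
    return expls

# Observation {a} with the single rule a <- b: only abducing b explains it.
P = [("a", ["b"])]
print(minimal_explanations(P, ["b", "c"], ["a"]))  # [('b',)]
```

<p>Iterating over subset sizes in increasing order guarantees that supersets of an already found explanation are discarded.</p>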
<p>We consider the weak completion of programs and, hence, a clause of the form A ← ⊥ is turned into A ↔ ⊥ provided that it is the only clause in the program in which A is the head.</p>
        <p>A ← ⊥ is called an assumption because it can be overwritten under the Weak Completion Semantics: if P = {A ← ⊥, A ← ⊤} then wcP = {A ↔ ⊥ ∨ ⊤}, which is semantically equivalent to {A ↔ ⊤}, i.e. A ← ⊥ is overwritten.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Reasoning with Monadic Quantified Assertions</title>
<p>In their meta-study [KJ12], Khemlani and Johnson-Laird enumerated several tasks related to monadic reasoning that have been investigated experimentally. Those include: given two premises and a conclusion, to evaluate whether the conclusion follows necessarily or possibly from the premises; given two premises, to check whether they are consistent or to formulate a conclusion that follows from them; and, given two premises and an invalid conclusion, to formulate counterexamples which refute this syllogism. They argue that not only can most individuals understand those tasks, they are also able to develop procedures to carry them out. Our approach to model monadic reasoning provides tools to derive conclusions similar to the conclusions drawn by humans.</p>
      <p>We identify two types of reasoning: reasoning towards a representation and reasoning from a representation. The first has a representation of assertions as output and the second has a representation of assertions as input. In this work a representation of assertions is a three-valued interpretation. In the principles discussed below we define one logic program for the first type of reasoning and one logic program for the second type. Principles can be combined independently.</p>
      <sec id="sec-3-1">
        <title>Reasoning Principles</title>
<p>Our reasoning principles are based on findings in Cognitive Science and Logic Programming. Note that assertions can be either positive or negative. We explain only the encoding of positive assertions; the encoding of negative assertions follows analogously. Table 4 shows which clauses need to be added to the logic program, which encodes either the task to reason towards a representation or to reason from a representation, when one of our principles is considered.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Assertion as Rule (rule)</title>
<p>Under WCS we can only encode assertions as rules with two or more distinct predicates. Therefore, the encoding towards a representation of an assertion that establishes, for example, a relation from the predicate y to the predicate z includes the rule z(X) ← y(X). In the case that we reason from a representation, we have nothing to check related to this principle and encode it as the fact ruleyz ← ⊤.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Licenses for Inferences (licenses)</title>
        <p>Stenning and van Lambalgen [SvL08] propose to formalize conditionals by licenses for inferences. For example,
the conditional for all X, if p(X) then q(X) is represented by the program {q(X) ← p(X)∧¬ab(X), ab(X) ← ⊥}.
Its first clause states that for all X, q(X) holds if p(X) holds and nothing abnormal for X is known. Clauses
are assumed to be universally closed and, hence, the universal quantifier can be omitted. Licenses are encoded
by abnormality predicates, which are usually of the form ab(X). This principle is encoded in the same way for
both types of reasoning.
</p>
      </sec>
      <sec id="sec-3-4">
        <title>Existential Import and Gricean Implicature (import)</title>
        <p>Humans seem to understand quantifiers differently due to a pragmatic understanding of language. For instance,
in natural language we normally do not quantify over things that do not exist. Consequently, for all implies there
exists. This appears to be in line with human reasoning and has been called the Gricean Implicature [Gri75].
Several theories like the theory of mental models [JL83] or mental logic [Rip94] assume that the sets we quantify
about are not empty. Likewise, Stenning and van Lambalgen [SvL08] have shown that humans require existential
import for a conditional to be true.</p>
<p>Consider the conditional for all X, if y(X) then z(X), encoded by the clause z(X) ← y(X). The principle import implies that there is an object that belongs to both predicates y and z. In reasoning towards a representation, we encode import by adding the fact y(o) ← ⊤. If in the current reasoning task we consider licenses as well, then we need to add the assumption abyz(o) ← ⊥ to assert that nothing abnormal between these two predicates is known for this object. In reasoning from a representation we just need to check that in the current representation there exists an object that is in both predicates. We encode this with the rule existyz ← y(X) ∧ z(X).
</p>
        <p>Table 4 lists, for each principle (in its variant with licenses), the clauses added when reasoning towards a representation and the clauses added when reasoning from a representation:
rule: towards: z(X) ← y(X) ∧ ¬abyz(X); from: ruleyz ← ¬abyz(X)
rule negative: towards: z′(X) ← y(X) ∧ ¬abyz(X); from: ruleyz′ ← ¬abyz(X)
import: towards: abyz(o) ← ⊥, y(o) ← ⊤; from: existyz ← y(X) ∧ z(X) ∧ ¬abyz(X)
import negative: towards: abyz(o) ← ⊥, y(o) ← ⊤; from: existnegyz ← y(X) ∧ ¬z(X) ∧ ¬abyz(X)
rule with import: towards: z(X) ← y(X) ∧ ¬abyz(X), abyz(o) ← ⊥, y(o) ← ⊤; from: ruleyz ← existyz, existyz ← y(X) ∧ z(X) ∧ ¬abyz(X)
rule with import negative: towards: z′(X) ← y(X) ∧ ¬abyz(X), abyz(o) ← ⊥, y(o) ← ⊤; from: rulenegyz ← existnegyz, existnegyz ← y(X) ∧ ¬z(X) ∧ ¬abyz(X)
norefutation: towards: abyz(X) ← ⊥; from: norefuteyz ← ¬refuteyz, refuteyz ← y(X) ∧ ¬z(X) ∧ ¬abyz(X)
norefutation negative: towards: abyz(X) ← ⊥; from: norefutenegyz ← ¬refutenegyz, refutenegyz ← y(X) ∧ z(X) ∧ ¬abyz(X)
unknownGen: towards: y(o) ← ⊤; from: genyz ← y(X) ∧ uz(X) ∧ ¬abyz(X), genyz ← y(X) ∧ ¬z(X) ∧ ¬abyz(X)
unknownGen negative: towards: y(o) ← ⊤; from: gennegyz ← y(X) ∧ uz(X) ∧ ¬abyz(X), gennegyz ← y(X) ∧ z(X) ∧ ¬abyz(X)
doubleNeg: towards: abnzz(o) ← ⊥; from: (none)
transformation: towards: z(X) ← ¬z′(X) ∧ ¬abnzz(X); from: (none)</p>
        <p>Unknown Generalization (unknownGen). Humans seem to distinguish between ‘some y are z’ and ‘all y are z’. Accordingly, if we observe that an object o belongs to y and z, then we do not want to conclude both ‘some y are z’ and ‘all y are z’. In order to prevent such unwanted conclusions we introduce the following principle: if we know that ‘some y are z’ then there must not only be an object o1 which belongs to y and z (by Gricean implicature) but also another object o2 which belongs to y and for which it is unknown whether it belongs to z.</p>
<p>In reasoning towards a representation, the encoding of this principle is only possible when the principle licenses is used in the current reasoning task, too. Consider the same clause used as an example in previous principles, extended with abnormalities: q(X) ← p(X) ∧ ¬ab(X). We encode the principle by adding the fact p(o) ← ⊤ and not adding any clause with ab(o) in the head. ab(o) will then be evaluated to unknown and, as a consequence, q(o) is unknown, too. In reasoning from a representation we check whether there exists an object that is in p and that either is not in q or for which it is unknown whether it belongs to q.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Refutation by Counterexample (norefutation)</title>
<p>Empirical findings support the hypothesis that people spontaneously use counterexamples in monadic reasoning [KJ12]. When we reason towards a representation we use this principle by explicitly stating which objects are not expected to be part of a refutation. This is done using abnormality predicates. For example, if we do not expect the object o to be used in a counterexample for a rule z(X) ← y(X) ∧ ¬ab(X), we add the assumption ab(o) ← ⊥ to the program. We generalize this to universally quantified assertions by assuming that no counterexamples are expected for any object, i.e. we add ab(X) ← ⊥ to our program. Note again that the principle norefutation can only be used in reasoning towards a representation if licenses are considered, too. Finally, when we reason from an interpretation we add rules with head refute that describe our counterexamples. This principle is then defined by the rule norefuteyz ← ¬refuteyz.</p>
      </sec>
      <sec id="sec-3-6">
        <title>Converse Interpretation (converse)</title>
<p>Although there appears to be some evidence that humans distinguish between ‘some y are z’ and ‘some z are y’ (see the results reported in [KJ12]), we propose that Iab implies Iba and vice versa: if there is an object which belongs to y and z, then there is also an object which belongs to z and y. Consider the conditional for all X, if p(X) then q(X). Then, by converse, we encode for all X, if q(X) then p(X), too. This applies to both types of reasoning.</p>
      </sec>
      <sec id="sec-3-7">
        <title>No Derivation by Double Negation (doubleNeg)</title>
<p>Consider the following two negative sentences (i.e. sentences including negation) and one positive sentence: ‘If not a, then b. If not b, then c. a is true.’ The program representing these sentences is P = {b ← ¬a, c ← ¬b, a ← ⊤}. The weak completion of P is wcP = {b ↔ ¬a, c ↔ ¬b, a ↔ ⊤}. Its least model is ⟨{a, c}, {b}⟩, and thus a and c are true: a is true because it is a fact, b is derived to be false because the negation of a is false, and c is true by the negation of b. This example shows that under WCS a positive conclusion (c being true) can be derived from two clauses which include negation. Considering the participants’ responses in [KJ12], humans seem not to draw conclusions through such double negatives. Accordingly, we block these derivations through abnormalities.</p>
      </sec>
      <sec id="sec-3-8">
        <title>Negation by Transformation (transformation)</title>
<p>A negative literal cannot be the head of a clause in a program. In order to represent a negative conclusion ¬p(X), an auxiliary atom p′(X) is introduced together with a clause p(X) ← ¬p′(X) and the integrity constraint U ← p(X) ∧ p′(X). This is a widely used technique in logic programming. Together with the principle licenses for inferences, the additional clause becomes p(X) ← ¬p′(X) ∧ ¬ab(X).</p>
      </sec>
      <sec id="sec-3-9">
        <title>Encoding of a Representation</title>
<p>When reasoning from a representation we need to encode the representation, i.e. the three-valued interpretation, as a logic program. We start by defining the predicates and objects that we are interested in reasoning about, as not all elements of our interpretation are relevant to derive conclusions. Ground atoms defined by those predicates and objects are called relevant atoms. Then, given an interpretation, we build a logic program by adding clauses that encode the evaluation of the relevant atoms. Thus, if a relevant atom is evaluated to true or false, we add a fact or an assumption to our program, respectively. If some relevant atom is unknown, we consider a new predicate obtained from the original atom’s predicate name prefixed by u (from unknown). We then add a fact to the program with that new atom in the head. For example, if the relevant atom y(o) is evaluated to unknown, we add the fact uy(o) ← ⊤ to our program.</p>
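<p>This translation is mechanical; a minimal sketch under our own encoding ((head, body) pairs, "TOP"/"BOT" standing for ⊤/⊥):</p>

```python
# Translate a three-valued interpretation <I_top, I_bot> into program clauses
# for the relevant atoms: true -> fact, false -> assumption,
# unknown -> fact for the u-prefixed predicate.
def encode_representation(i_top, i_bot, relevant):
    clauses = []
    for atom in relevant:
        if atom in i_top:
            clauses.append((atom, "TOP"))      # fact  A <- TOP
        elif atom in i_bot:
            clauses.append((atom, "BOT"))      # assumption  A <- BOT
        else:
            clauses.append(("u" + atom, "TOP"))  # unknown: uA <- TOP
    return clauses

I_top, I_bot = {"a(o1)"}, {"c(o1)"}
print(encode_representation(I_top, I_bot, ["a(o1)", "c(o1)", "a(o3)"]))
# [('a(o1)', 'TOP'), ('c(o1)', 'BOT'), ('ua(o3)', 'TOP')]
```
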
      </sec>
      <sec id="sec-3-10">
        <title>Entailment of Quantified Assertions</title>
        <p>The goal of reasoning from a representation is to entail conclusions. A conclusion is a quantified assertion. For
each of the four possible moods we define the principles to be considered in their encoding. A conclusion is
entailed if all principles defined for the mood of the conclusion are satisfied in the current representation.</p>
<p>There is a rule for each conclusion. The head of that rule is the predicate related to that conclusion, and its body is a conjunction of atoms related to the principles used in the encoding of that conclusion. The predicate related to the conclusion Azy, Izy, Ezy or Ozy is azy, izy, ezy or ozy, respectively. Therefore, a conclusion is entailed iff its related atom is entailed. We add the rule abyz(X) ← uy(X) to prevent atoms that are evaluated to unknown from interfering with the entailment of conclusions. For example, to entail the conclusion Aac we do not consider ground atoms with predicate a that are evaluated to unknown. In the next section we present our encoding of syllogisms and show how the entailment rules are defined for each mood.</p>
      </sec>
      <sec id="sec-3-11">
        <title>Search Alternative Conclusions to NVC : Abduction</title>
<p>Our hypothesis is that when participants are faced with an NVC conclusion (‘no valid conclusion’), they might not want to accept this conclusion and proceed to check whether there exists unknown information that is relevant. This information may consist of explanations of facts in our program, and we model such a repair mechanism using skeptical abductive reasoning. Facts in our programs come either from an existential import or from unknown generalization. We use only the former as a source for observations, since those facts are used directly to infer new information.</p>
        <p>Table 5 lists, for each mood, the principles used to reason towards a representation (each combined with licenses) and the corresponding entailment rules:
Ayz: rule with import and norefutation; entailment: ayz ← ruleyz ∧ norefuteyz ∧ ¬abayz, abayz ← ⊥
Eyz: rule with import negative and norefutation negative; entailment: eyz ← rulenegyz ∧ norefutenegyz ∧ ¬abeyz, abeyz ← ⊥
Iyz: rule with import, unknownGen and converse; entailment: iyz ← ruleyz ∧ genyz ∧ rulezy ∧ genzy ∧ ¬abiyz, abiyz ← ⊥
Oyz: rule with import negative and unknownGen negative; entailment: oyz ← rulenegyz ∧ gennegyz ∧ ¬aboyz, aboyz ← ⊥</p>
<p>Each head of a fact introduced by existential import generates a single observation. We apply abduction sequentially to each of them. To prevent empty explanations, we remove from the current program the fact that generated the observation. For each observation and each of its minimal explanations we compute the least model of the weak completion of the program extended with the explanation and collect all entailed syllogistic conclusions. Observations that cannot be explained are filtered out. Let Answers consist of all entailed conclusions obtained in that way. The final conclusion is obtained by skeptical reasoning, i.e. the final answer to the current syllogism is given by FinalAnswer = ⋂A∈Answers A. In the case that FinalAnswer is empty, we entail the NVC conclusion.</p>
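<p>The skeptical combination step can be sketched in a few lines (the answer sets below are illustrative):</p>

```python
# Skeptical reasoning: intersect the conclusion sets obtained for each
# explained observation; an empty intersection yields NVC.
def final_answer(answers):
    if not answers:
        return {"NVC"}
    common = set.intersection(*map(set, answers))
    return common if common else {"NVC"}

print(sorted(final_answer([{"Iac", "Ica"}, {"Iac", "Ica"}])))  # ['Iac', 'Ica']
print(final_answer([{"Iac"}, {"Oca"}]))                        # {'NVC'}
```

<p>The first call mirrors the IA2 example below, where both explained observations entail Iac and Ica.</p>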
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Syllogisms: Use Case</title>
<p>The principles used in our syllogistic encoding are enumerated in Table 5. Note that we are only interested in conclusions between the predicates a and c. When such a conclusion is not possible we entail NVC. As explained before, an NVC conclusion triggers the use of abduction.</p>
      <sec id="sec-4-1">
        <title>Accuracy of Predictions</title>
<p>We follow the evaluation proposed by [KJ12]: There are nine different answers for each of the 64 syllogisms, which can be ordered in a list: Aac, Eac, Iac, Oac, Aca, Eca, Ica, Oca, and NVC. For each answer (e.g. Aac) we assign a 1 if it is predicted under WCS, and a 0 otherwise. Analogously, for the percentages of the participants’ responses we use a threshold function like [KJ12]: any value above 16% is assigned a 1, and a 0 otherwise. Both lists can then be compared for their congruency as follows, where i is the ith element of both lists:
comp(i) = 1 if both lists agree on their ith element, and comp(i) = 0 otherwise.</p>
        <p>The matching percentage of a syllogism is then computed as (Σi=1..9 comp(i))/9. Note that the matching percentage does not only take into account when WCS correctly predicts a conclusion, but also when it correctly rejects a conclusion. The average accuracy is then simply the average of the matching percentages of all 64 syllogisms.</p>
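<p>As a concrete sketch, the congruency computation for one syllogism (the two 0/1 lists below are illustrative, not data from [KJ12]):</p>

```python
# Matching percentage for one syllogism: compare the 0/1 prediction list
# with the thresholded 0/1 participant list over the nine possible answers.
ANSWERS = ["Aac", "Eac", "Iac", "Oac", "Aca", "Eca", "Ica", "Oca", "NVC"]

def matching_percentage(predicted, observed):
    assert len(predicted) == len(observed) == 9
    comp = [1 if p == o else 0 for p, o in zip(predicted, observed)]
    return sum(comp) / 9

wcs       = [0, 0, 1, 0, 0, 0, 1, 0, 0]  # e.g. predicts Iac and Ica
responses = [0, 0, 1, 0, 0, 0, 1, 0, 1]  # participants also above 16% on NVC
print(round(matching_percentage(wcs, responses), 3))  # 0.889
```

<p>Here the two lists agree on eight of the nine positions, so the match is 8/9 ≈ 89%.</p>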
<p>Given that there are nine different conclusion possibilities, the chance that a conclusion has been chosen randomly is 1/9 = 11.1%; moreover, a binomial test shows that if a conclusion is drawn in more than 16% of the cases by the participants, it is unlikely to have been chosen by just random guessing. The statistical analysis is elaborately explained in [KJL12].</p>
        <p>The two syllogistic premises of IA2 are as follows:</p>
        <p>Some b are a. (Iba)</p>
<p>All c are b. (Acb)</p>
        <p>First we develop a program to reason towards a representation by considering the principles listed in Table 5. We then encode their respective clauses, which are presented in the second column of Table 4. Program PIA2 consists of the following clauses, each annotated with the principles that introduce it:</p>
        <p>a(X) ← b(X) ∧ ¬abba(X). (rule&amp;licenses)
abba(o1) ← ⊥. (import&amp;licenses)
b(o1) ← ⊤. (import)
b(o2) ← ⊤. (unknownGen)
b(X) ← a(X) ∧ ¬abab(X). (converse&amp;rule&amp;licenses)
abab(o3) ← ⊥. (converse&amp;import&amp;licenses)
a(o3) ← ⊤. (converse&amp;import)
a(o4) ← ⊤. (converse&amp;unknownGen)
b(X) ← c(X) ∧ ¬abcb(X). (rule&amp;licenses)
abcb(X) ← ⊥. (norefutation&amp;licenses)
c(o5) ← ⊤. (import)</p>
        <p>The least model of wc PIA2 = ⟨I⊤, I⊥⟩ is
⟨{a(o1), a(o3), a(o4), b(o1), b(o2), b(o3), b(o5), c(o5)}, {abba(o1), abab(o3)} ∪ {abcb(oi) | i ∈ {1, 2, 3, 4, 5}}⟩. (1)</p>
        <p>Next, in order to derive some conclusion from this model, we construct a new program P∗IA2 to reason from a representation, i.e. to reason from the least model. We consider again the principles in Table 5, but now we use the third column of Table 4. Additionally, we need to encode the representation of our relevant atoms by adding facts and assumptions according to the least model of wc PIA2. The relevant atoms are X = {a(o), c(o) | o ∈ {o1, o3, o4, o5}}. The new program P∗IA2 consists of the following clauses (again annotated with the corresponding principles):</p>
        <p>{ A ← ⊤ | PIA2 |=wcs A and A ∈ X } ∪
{ A ← ⊥ | PIA2 |=wcs ¬A and A ∈ X } ∪
{ uA ← ⊤ | PIA2 ⊭wcs (A ∨ ¬A) and A ∈ X } ∪
{ abyz(X) ← uy(X) | y ∈ {a, c} and z ∈ {a, c} } ∪
{ ayz ← ruleyz ∧ norefuteyz ∧ ¬abayz, abayz ← ⊥, (Ayz)
eyz ← rulenegyz ∧ norefutenegyz ∧ ¬abeyz, abeyz ← ⊥, (Eyz)
iyz ← ruleyz ∧ genyz ∧ rulezy ∧ genzy ∧ ¬abiyz, abiyz ← ⊥, (Iyz)
oyz ← rulenegyz ∧ gennegyz ∧ ¬aboyz, aboyz ← ⊥ (Oyz)
| y ∈ {a, c} and z ∈ {a, c} } ∪
{ existyz ← y(X) ∧ z(X) ∧ ¬abyz(X), (import)
existnegyz ← y(X) ∧ ¬z(X) ∧ ¬abyz(X), (import negative)
ruleyz ← existyz, (rule with import)
rulenegyz ← existnegyz, (rule with import negative)
norefuteyz ← ¬refuteyz, refuteyz ← y(X) ∧ ¬z(X) ∧ ¬abyz(X), (norefutation)
norefutenegyz ← ¬refutenegyz, refutenegyz ← y(X) ∧ z(X) ∧ ¬abyz(X), (norefutation negative)
genyz ← y(X) ∧ uz(X) ∧ ¬abyz(X), genyz ← y(X) ∧ ¬z(X) ∧ ¬abyz(X), (unknownGen)
gennegyz ← y(X) ∧ uz(X) ∧ ¬abyz(X), gennegyz ← y(X) ∧ z(X) ∧ ¬abyz(X) (unknownGen negative)
| y ∈ {a, c} and z ∈ {a, c} }.</p>
        <p>For y ∈ {a, c} and z ∈ {a, c}, P∗IA2 ⊭wcs ayz ∨ eyz ∨ iyz ∨ oyz, because P∗IA2 ⊭wcs existyz and P∗IA2 ⊭wcs existnegyz. Thus, we entail ‘no valid conclusion’ (NVC). However, a significant percentage of participants answered Iac and Ica, despite IA2 being an invalid syllogism in classical FOL. Our hypothesis is that these participants might have searched for alternatives to NVC by applying abduction.</p>
<p>The observations are O1 = {b(o1)}, O2 = {a(o3)} and O3 = {c(o5)}. If we examine Oi = {o} with i ∈ {1, 2, 3}, then we try to find an explanation for Oi with respect to PIA2 \ {o ← ⊤}. (We remove from the program the fact that generated the observation, because otherwise the empty explanation would suffice.) The set of abducibles is:</p>
        <p>{abba(oi) ← ⊤, abba(oi) ← ⊥ | i ∈ {2, 3, 4, 5}}
∪ {abab(oi) ← ⊤, abab(oi) ← ⊥ | i ∈ {1, 2, 4, 5}}
∪ {c(oi) ← ⊤, c(oi) ← ⊥ | i ∈ {1, 2, 3, 4}}
∪ {abcb(oi) ← ⊤ | i ∈ {1, 2, 3, 4, 5}}
∪ {abba(o1) ← ⊤, abab(o3) ← ⊤}.</p>
        <p>Table 6 summarizes, for the syllogisms OA4, IE4 and IA2, the participants’ responses together with the predictions of FOL, PSYCOP (77% overall accuracy), the Verbal Models Theory (84%), the Mental Model Theory (83%) and WCS (89%).</p>
        <p>E1 = {c(o1) ← ⊤} and E2 = {c(o3) ← ⊤, abba(o3) ← ⊥} are the minimal explanations for O1 and O2, respectively. Note that for O3 there is no explanation.</p>
<p>Consider O1 = {b(o1)}, where the program to be taken into account is P1IA2 = (PIA2 \ {b(o1) ← ⊤}) ∪ E1. Given the least model of wc PIA2 = ⟨I⊤, I⊥⟩ as defined in (1), the least model of wc P1IA2 is ⟨I⊤ ∪ {c(o1)}, I⊥⟩, i.e. c(o1) is newly entailed to be true after applying abduction. This model entails what the participants concluded, namely Iac and Ica.</p>
        <p>For the observation O2 = {a(o3)} we consider the program P2IA2 = (PIA2 \ {a(o3) ← ⊤}) ∪ E2. The least model of wc P2IA2 also entails the conclusions Iac and Ica.</p>
        <p>Answers(PIA2) = {{Iac, Ica}, {Iac, Ica}} is the collection of all obtained conclusion sets. FinalAnswer(PIA2) = {Iac, Ica} consists of the skeptically entailed conclusions, i.e. it is the intersection of all conclusion sets, which in this case are ‘some a are c’ (Iac) and ‘some c are a’ (Ica).</p>
      </sec>
      <sec id="sec-4-2">
        <title>Overall Accuracy of 89%</title>
<p>The results of the three example syllogisms formalized under WCS are summarized and compared to FOL, PSYCOP, the Verbal Models Theory and the Mental Model Theory in Table 6. For some syllogisms the conclusions drawn by the participants and by WCS are identical, and for some syllogisms they overlap. WCS differs from the other cognitive theories. Combining the representation of the syllogistic premises and the entailment rules for all 64 syllogisms, and applying abduction whenever NVC was entailed (which happened in 43 cases), we accomplished an average accuracy of 89% in our predictions. In 18 cases we have a perfect match, in 30 cases the match is 89%, in 13 cases the match is 78%, and in the remaining three cases the match is 67%. Compared to the other cognitive theories, we achieve the best performance, as their best results were accomplished by the Verbal Models Theory (84%) and the Mental Model Theory (83%).</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Final Remarks</title>
<p>We presented a theory that is modular, i.e. each of our encoding principles can be considered independently. This feature allows us to consider any combination of principles in the encoding of quantified assertions, which is particularly relevant if we want to encode the reasoning processes as performed by an individual or by a group of individuals. Moreover, in our approach the task to represent information and the task to derive new conclusions are decoupled. This means that we can consider different principles for each of those tasks and further investigate differences and similarities between them.</p>
      <p>
        Following this approach, we encoded syllogistic reasoning and compared our predictions with those of other cognitive theories. We achieve the best performance, with an overall accuracy of 89%. For an overview of the Weak Completion Semantics and its applications in human reasoning, see S. Hölldobler, Weak completion semantics and its applications in human reasoning, in U. Furbach and C. Schon, editors, Proceedings of the Workshop on Bridging the Gap between Human and Automated Reasoning at the 25th International Conference on Automated Deduction, pages 2–16,
        <xref ref-type="bibr" rid="ref4 ref5">CEUR-WS.org, 2015</xref>
        .
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [CDHR16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Costa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          , S. Hölldobler, and
          <string-name>
            <given-names>M.</given-names>
            <surname>Ragni</surname>
          </string-name>
          .
          <article-title>Syllogistic reasoning under the weak completion semantics</article-title>
          . In U. Furbach and C. Schon, editors,
          <source>Bridging 2016 - Bridging the Gap between Human and Automated Reasoning</source>
          , volume
          <volume>1651</volume>
          <source>of CEUR Workshop Proceedings</source>
          , pages
          <fpage>5</fpage>
          -
          <lpage>19</lpage>
          . CEUR-WS.org,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [CDHR17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Costa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          , S. Hölldobler, and
          <string-name>
            <given-names>M.</given-names>
            <surname>Ragni</surname>
          </string-name>
          .
          <article-title>A computational logic approach to human syllogistic reasoning</article-title>
          .
          <source>In Proceedings of the 39th Annual Conference of the Cognitive Science Society</source>
          ,
          <year>2017</year>
          . Accepted.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [DH15]
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Hölldobler</surname>
          </string-name>
          .
          <article-title>A new computational logic approach to reason with conditionals</article-title>
          . In F. Calimeri, G. Ianni, and M. Truszczynski, editors,
          <source>Logic Programming and Nonmonotonic Reasoning</source>
          , 13th International Conference, volume
          <volume>9345</volume>
          <source>of Lecture Notes in Artificial Intelligence</source>
          , pages
          <fpage>265</fpage>
          -
          <lpage>278</lpage>
          . Springer,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hölldobler</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Höps</surname>
          </string-name>
          .
          <article-title>A computational logic approach to human spatial reasoning</article-title>
          .
          <source>In IEEE Symposium Series on Computational Intelligence</source>
          , pages
          <fpage>1627</fpage>
          -
          <lpage>1634</lpage>
          . IEEE,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          , S. Hölldobler, and
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          .
          <article-title>On conditionals</article-title>
          . In G. Gottlob,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sutcliffe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Voronkov</surname>
          </string-name>
          , editors,
          <source>Global Conference on Artificial Intelligence</source>
          , Epic Series in Computing,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          , S. Hölldobler, and
          <string-name>
            <given-names>M.</given-names>
            <surname>Ragni</surname>
          </string-name>
          .
          <article-title>A computational logic approach to the suppression task</article-title>
          . In N. Miyake,
          <string-name>
            <given-names>D.</given-names>
            <surname>Peebles</surname>
          </string-name>
          , and
          <string-name>
            <surname>R. P</surname>
          </string-name>
          . Cooper, editors,
          <source>Proceedings of the 34th Annual Conference of the Cognitive Science Society</source>
          , pages
          <fpage>1500</fpage>
          -
          <lpage>1505</lpage>
          , Austin, TX,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          , S. Hölldobler, and
          <string-name>
            <given-names>M.</given-names>
            <surname>Ragni</surname>
          </string-name>
          .
          <article-title>A computational logic approach to the abstract and the social case of the selection task</article-title>
          .
          <source>In 11th International Symposium on Logical Formalizations of Commonsense Reasoning</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          .
          <article-title>A computational logic approach to the belief bias in human syllogistic reasoning</article-title>
          .
          <source>In 10th International and Interdisciplinary Conference on Modeling and Using Context</source>
          , volume
          <volume>10257</volume>
          of Lecture Notes in Computer Science. Springer,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Grice</surname>
          </string-name>
          .
          <article-title>Logic and conversation</article-title>
          . In Peter Cole and Jerry L. Morgan, editors,
          <source>Syntax and semantics</source>
          , volume
          <volume>3</volume>
          . New York: Academic Press,
          <year>1975</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Hill</surname>
            and
            <given-names>D. S</given-names>
          </string-name>
          . Warren, editors,
          <source>25th International Conference on Logic Programming</source>
          , volume
          <volume>5649</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>464</fpage>
          -
          <lpage>478</lpage>
          , Heidelberg,
          <year>2009</year>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Hölldobler</surname>
          </string-name>
          and
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Kencana Ramli</surname>
          </string-name>
          .
          <article-title>Logics and networks for human reasoning</article-title>
          .
          <source>In International Conference on Artificial Neural Networks</source>
          , pages
          <fpage>85</fpage>
          -
          <lpage>94</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Hölldobler</surname>
          </string-name>
          .
          <source>Logik und Logikprogrammierung</source>
          , volume
          <volume>1</volume>
          : Grundlagen. Synchron Publishers, Heidelberg,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <source>Mental Models</source>
          . Harvard University Press,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Khemlani</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <article-title>Theories of the syllogism: A meta-analysis</article-title>
          .
          <source>Psychological Bulletin</source>
          ,
          <volume>138</volume>
          :
          <fpage>427</fpage>
          -
          <lpage>457</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          .
          <article-title>Abductive logic programming</article-title>
          .
          <source>Journal of Logic and Computation</source>
          ,
          <volume>2</volume>
          (
          <issue>6</issue>
          ):
          <fpage>719</fpage>
          -
          <lpage>770</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Lloyd</surname>
          </string-name>
          .
          <source>Foundations of Logic Programming</source>
          . Springer-Verlag New York, Inc., New York, NY, USA,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <given-names>Jan</given-names>
            <surname>Łukasiewicz</surname>
          </string-name>
          .
          <article-title>O logice trójwartościowej</article-title>
          .
          <source>Ruch Filozoficzny</source>
          ,
          <volume>5</volume>
          :
          <fpage>169</fpage>
          -
          <lpage>171</lpage>
          ,
          <year>1920</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [PDH14a]
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Hölldobler</surname>
          </string-name>
          .
          <article-title>An abductive reasoning approach to the belief-bias effect</article-title>
          . In C. Baral, G. De Giacomo, and T. Eiter, editors,
          <source>Principles of Knowledge Representation and Reasoning: Proceedings of the 14th International Conference</source>
          , pages
          <fpage>653</fpage>
          -
          <lpage>656</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [PDH14b]
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.-A.</given-names>
            <surname>Dietz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Hölldobler</surname>
          </string-name>
          .
          <article-title>Contextual abductive reasoning with side-effects</article-title>
          . In I. Niemelä, editor,
          <source>Theory and Practice of Logic Programming</source>
          , volume
          <volume>14</volume>
          , pages
          <fpage>633</fpage>
          -
          <lpage>648</lpage>
          . Cambridge University Press,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [PHW74]
          <string-name>
            <given-names>C.S.</given-names>
            <surname>Peirce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hartshorne</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Weiss</surname>
          </string-name>
          .
          <source>Collected Papers of Charles Sanders Peirce</source>
          . Belknap Press of Harvard University Press,
          <year>1974</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [PN95]
          <string-name>
            <given-names>Thad A.</given-names>
            <surname>Polk</surname>
          </string-name>
          and
          <string-name>
            <given-names>Allen</given-names>
            <surname>Newell</surname>
          </string-name>
          .
          <article-title>Deduction as verbal reasoning</article-title>
          .
          <source>Psychological Review</source>
          ,
          <volume>102</volume>
          (
          <issue>3</issue>
          ):
          <fpage>533</fpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Rips</surname>
          </string-name>
          .
          <article-title>The psychology of proof: Deductive reasoning in human thinking</article-title>
          . MIT Press, Cambridge, MA,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <given-names>K.</given-names>
            <surname>Stenning</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>van Lambalgen</surname>
          </string-name>
          .
          <source>Human Reasoning and Cognitive Science</source>
          . A Bradford Book. MIT Press, Cambridge, MA,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>