               The Weak Completion Semantics

           Emmanuelle-Anna Dietz Saldanha and Steffen Hölldobler and
                           Isabelly Lourêdo Rocha

International Center for Computational Logic, TU Dresden, 01062 Dresden, Germany
     dietz@iccl.tu-dresden.de and sh@iccl.tu-dresden.de and isabellylr@gmail.com



        Abstract This is a gentle introduction to the weak completion seman-
        tics, a novel cognitive theory which has been successfully applied to a
        number of human reasoning tasks. In this paper we do not focus on for-
        malities but rather on principles and examples. The reader is assumed
        to be familiar with classical propositional logic and the suppression task.


1     Introduction
The weak completion semantics is a novel cognitive theory, which recently has
outperformed twelve established cognitive theories on syllogistic reasoning [19,
23]. It is based on ideas first expressed in [27], viz. to encode knowledge as logic
programs and, in particular, to use licenses for inferences when encoding condi-
tionals, to make assumptions about the absence of abnormalities, to interpret
programs under a three-valued (Kleene) logic [20], to compute a supported model
for each program as least fixed point of an appropriate semantic operator, and to
reason with respect to these least fixed points. But the weak completion seman-
tics differs from the approach presented in [27] in that all concepts are formally
specified, it is based on a different three-valued (Łukasiewicz) logic [21],¹ all re-
sults are rigorously proven, and it has been extended in many different ways. In
particular, the weak completion semantics has been applied to the suppression
task [6], to the selection task [7, 9], to the belief bias effect [24], to reasoning
about conditionals [3, 5, 9], to human spatial reasoning [4], to syllogistic rea-
soning [22, 23], and to contextual reasoning [10, 25]. Furthermore, there exists
a connectionist encoding of the weak completion semantics based on the core
method [8, 15, 17].
    Modeling human reasoning tasks under the weak completion semantics is
done in three stages. Firstly, the background knowledge is encoded as the weak
completion of a logic program, i.e. a finite set of facts, rules, and assumptions.
The program is specified with respect to certain principles, some of which have
been identified in cognitive science and computational logic, others are new prin-
ciples which need to be confirmed in future experiments. Secondly, a supported
model for the weak completion of the program is computed. It turns out that
under the Łukasiewicz logic this model is unique and can be obtained as the least
fixed point of an appropriate semantic operator. Thirdly, reasoning is done with
¹ Alternatively, the three-valued logic S3 [26] could be applied as well.

respect to the unique supported model. This three-stage process is augmented
by abduction if needed.
    In this paper a gentle introduction to the weak completion semantics is pro-
vided. We will give an informal introduction into the three stages focussing on
the suppression task in Sections 2 and 3 and on reasoning about indicative con-
ditionals in Section 4. In each case, we will discuss how the programs, i.e. the
sets of facts, rules, and assumptions are obtained, how they are weakly com-
pleted, how their unique supported models are generated, and how reasoning is
performed with respect to these models. We will avoid formal definitions, theo-
rems, and proofs; they can be found in [14] and the referenced technical papers.
However, we assume the reader to be familiar with classical propositional logic.

2     Reasoning with respect to Least Models
2.1     Modus Ponens
Knowledge is encoded as positive facts, negative assumptions, and rules. Con-
sider the statements she has an essay to write and if she has an essay to write,
then she will study late in the library from the suppression task [1]. The first
statement will be encoded in propositional logic as the fact e ← ⊤, where e
denotes that she has an essay to write and ⊤ is a constant denoting truth. The
second statement is a conditional which will be encoded as a license for inferences
ℓ ← e ∧ ¬ab1 following [27], where ℓ denotes that she will study late in the library
and ab1 is an abnormality predicate. As in the given context nothing abnormal
is known about the conditional, the assumption ab1 ← ⊥ is added, where ⊥ is
a constant denoting falsehood. This expression is called an assumption because
– as illustrated later – it can be overridden if more knowledge becomes available.
The given implications – a logic program – are weakly completed by adding the
only-if-halves to obtain the set

                       K1 = {e ↔ ⊤, ℓ ↔ e ∧ ¬ab1, ab1 ↔ ⊥}.

The left- and the right-hand sides of the equivalences are considered as definien-
dum and definiens, respectively. In particular, the propositional variables e, ℓ,
and ab1 are defined by ⊤, e ∧ ¬ab1, and ⊥, respectively. In other words, the set
K1 is a set of definitions which encodes the given background knowledge.
    If a subject is asked whether she will study late in the library, then a model
for this set is constructed. In a model, propositional variables are mapped to the
truth values true, false, and unknown such that all equivalences occurring in K1
are simultaneously mapped to true. In fact, there is always a unique least model
if a set like K1 is interpreted under the three-valued Łukasiewicz logic [16, 21],²
whose truth tables are depicted in Table 1.
² This does not hold if a set is interpreted under Kleene logic [20]. For example, the
  equivalence a ↔ b has two minimal models. In the first minimal model both a and b
  are mapped to true. In the second minimal model both a and b are mapped to false.
  The interpretation where both a and b are mapped to unknown is not a model for
  a ↔ b.


              ¬          ∧ ⊤ U ⊥        ∨ ⊤ U ⊥        ← ⊤ U ⊥        ↔ ⊤ U ⊥
             ⊤ ⊥         ⊤ ⊤ U ⊥        ⊤ ⊤ ⊤ ⊤        ⊤ ⊤ ⊤ ⊤        ⊤ ⊤ U ⊥
             U U         U U U ⊥        U ⊤ U U        U U ⊤ ⊤        U U ⊤ U
             ⊥ ⊤         ⊥ ⊥ ⊥ ⊥        ⊥ ⊤ U ⊥        ⊥ ⊥ U ⊤        ⊥ ⊥ U ⊤
Table 1. The truth tables of the Łukasiewicz logic, where true, false, and unknown are
abbreviated by ⊤, ⊥, and U, respectively. The first argument of a binary connective is
given by the row, the second by the column.
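These connectives can be encoded directly. The following Python sketch is our own illustrative encoding (not taken from the paper); it represents ⊤, U, and ⊥ as the strings "T", "U", and "F" and realizes the connectives via the usual numeric reading of Łukasiewicz logic:

```python
# Three-valued Lukasiewicz connectives over "T" (true), "U" (unknown), "F" (false).
RANK = {"F": 0, "U": 1, "T": 2}          # numeric encoding: F=0, U=1, T=2

def neg(a):
    """Negation: swaps T and F, leaves U unchanged."""
    return {"T": "F", "U": "U", "F": "T"}[a]

def conj(a, b):
    """Conjunction is the minimum of the two values."""
    return min(a, b, key=RANK.get)

def disj(a, b):
    """Disjunction is the maximum of the two values."""
    return max(a, b, key=RANK.get)

def impl(a, b):
    """a <- b, i.e. b -> a; Lukasiewicz implication min(1, 1 - v(b) + v(a)),
    here scaled to the ranks 0..2."""
    return {0: "F", 1: "U", 2: "T"}[min(2, 2 - RANK[b] + RANK[a])]

def equiv(a, b):
    """a <-> b is the conjunction of the two implications."""
    return conj(impl(a, b), impl(b, a))
```

Note that, in contrast to Kleene logic, equiv("U", "U") yields "T", which is what makes the least model of a weakly completed program unique.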



   In the example, the model is constructed in two steps.³ In the first step,
e ↔ ⊤ and ab1 ↔ ⊥ are satisfied by the following mapping:

                                     true   false
                                       e     ab1

In the second step, because the right-hand side of the equivalence ℓ ↔ e ∧ ¬ab1
is evaluated to true under the given mapping, its left-hand side ℓ must also be
true and will be added to the model:
                                     true   false
                                       e     ab1
                                       ℓ

The query whether she will study late in the library can now be answered posi-
tively given this model.
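The two-step construction is an instance of iterating a semantic operator to its least fixed point. The following Python sketch is our own illustrative encoding (not the authors' implementation): a program is a list of (head, body) clauses, a body is a list of (atom, positive?) literals, and the constants "TOP" and "BOT" stand for ⊤ and ⊥; ℓ is written l:

```python
def lit_val(lit, true_set, false_set):
    """Three-valued value of a literal (atom, positive) under an interpretation,
    given as the pair of sets of atoms mapped to true and to false."""
    atom, positive = lit
    if atom == "TOP":
        v = "T"                      # the constant TOP stands for true
    elif atom == "BOT":
        v = "F"                      # the constant BOT stands for false
    elif atom in true_set:
        v = "T"
    elif atom in false_set:
        v = "F"
    else:
        v = "U"                      # everything else is unknown
    return v if positive else {"T": "F", "F": "T", "U": "U"}[v]

def body_val(body, true_set, false_set):
    """Value of a body, i.e. the conjunction of its literals."""
    vals = [lit_val(lit, true_set, false_set) for lit in body]
    return "F" if "F" in vals else "U" if "U" in vals else "T"

def phi(program, true_set, false_set):
    """One application of the semantic operator: a head becomes true if some
    of its clause bodies is true, and false if all of its bodies are false."""
    clauses = {}
    for head, body in program:
        clauses.setdefault(head, []).append(body)
    new_true, new_false = set(), set()
    for head, bodies in clauses.items():
        vals = [body_val(b, true_set, false_set) for b in bodies]
        if "T" in vals:
            new_true.add(head)
        elif all(v == "F" for v in vals):
            new_false.add(head)
    return new_true, new_false

def least_model(program):
    """Iterate the operator from the empty interpretation to its fixed point."""
    t, f = set(), set()
    while True:
        nt, nf = phi(program, t, f)
        if (nt, nf) == (t, f):
            return t, f
        t, f = nt, nf

# The program behind K1: e <- TOP, l <- e & ~ab1, ab1 <- BOT
k1 = [("e", [("TOP", True)]),
      ("l", [("e", True), ("ab1", False)]),
      ("ab1", [("BOT", True)])]
```

Starting from the empty interpretation, the first application of the operator maps e to true and ab1 to false, and the second adds l, mirroring the two steps above.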

2.2     Alternative Arguments
If the statement if she has a textbook to read, then she will study late in the
library is added to the example discussed in Section 2.1, then this statement will
be encoded by the rule ℓ ← t ∧ ¬ab2 and the assumption ab2 ← ⊥, where t denotes
that she has a textbook to read. Weakly completing the given implications we
obtain the set

        K2 = {e ↔ ⊤, ℓ ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2), ab1 ↔ ⊥, ab2 ↔ ⊥}.⁴


    If a subject is asked whether she will study late in the library, then a model
for K2 is constructed as follows. In the first step, e ↔ ⊤, ab1 ↔ ⊥, and ab2 ↔ ⊥
are satisfied by the following mapping:

                                     true   false
                                       e     ab1
                                             ab2
³ In [16, 27], a function is defined which computes this model.
⁴ The set does not include the equivalence t ↔ ⊥. In logic programming this equiva-
  lence is added under the completion semantics [2].

Because e ∧ ¬ab1 is true under this mapping, so is the right-hand side of the
equivalence ℓ ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2) and, consequently, ℓ must be true as well:

                                   true   false
                                     e     ab1
                                           ab2
                                     ℓ

The query whether she will study late in the library can now be answered posi-
tively given this model.


2.3    Additional Arguments

If the statement if the library is open, then she will study late in the library
is added to the example discussed in Section 2.1, then this statement will be
encoded by the rule ℓ ← o ∧ ¬ab3 and the assumption ab3 ← ⊥, where o denotes
that the library is open. As argued in [27], a subject being confronted with the
additional statement may become aware that the library not being open is an
exception for the rule ℓ ← e ∧ ¬ab1. This can be encoded by the rule ab1 ← ¬o.
Likewise, she may not go to the library without a reason and the only reason
mentioned so far is writing an essay. Thus, not having an essay to write is an
exception for the rule ℓ ← o ∧ ¬ab3. This can be encoded by adding the rule
ab3 ← ¬e. Weakly completing all implications we obtain the set

    K3 = {e ↔ ⊤, ℓ ↔ (e ∧ ¬ab1) ∨ (o ∧ ¬ab3), ab1 ↔ ⊥ ∨ ¬o, ab3 ↔ ⊥ ∨ ¬e}.

The example shows how the initial assumption ab1 ← ⊥ is overridden by ab1 ← ¬o.
In K3 the definition of ab1 is now ⊥ ∨ ¬o, which is semantically equivalent to ¬o.
Likewise ab3 ← ⊥ is overridden by ab3 ← ¬e.
    If a subject is asked whether she will study late in the library, then a model
for K3 is constructed as follows. In the first step, e ↔ ⊤ is satisfied by the
following mapping:
                                   true false
                                     e

Because the right-hand side of the equivalence ab3 ↔ ⊥ ∨ ¬e is mapped to false,
ab3 must be mapped to false as well:

                                   true   false
                                     e
                                           ab3

The remaining propositional variables ℓ, ab1, and o are neither forced to be true
nor false and, hence, remain unknown. The constructed mapping is a model
for K3. As ℓ is not mapped to true, suppression is taking place.
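This suppression effect can be reproduced computationally. The sketch below is our own compact, self-contained re-encoding of the semantic operator from Section 2.1; it computes the least model of the program underlying K3 and shows that ℓ (written l) remains unknown:

```python
def val(atom, positive, t, f):
    """Three-valued value of a literal under the interpretation (t, f)."""
    v = ("T" if atom == "TOP" or atom in t else
         "F" if atom == "BOT" or atom in f else "U")
    return v if positive else {"T": "F", "F": "T", "U": "U"}[v]

def least_model(program):
    """Least fixed point of the semantic operator, from the empty interpretation."""
    t, f = set(), set()
    while True:
        nt, nf = set(), set()
        for h in {head for head, _ in program}:
            vals = []
            for head, body in program:
                if head == h:
                    lv = [val(a, p, t, f) for a, p in body]
                    vals.append("F" if "F" in lv else "U" if "U" in lv else "T")
            if "T" in vals:
                nt.add(h)                           # some body is true
            elif all(v == "F" for v in vals):
                nf.add(h)                           # all bodies are false
        if (nt, nf) == (t, f):
            return t, f
        t, f = nt, nf

# Program for K3: e <- TOP, l <- e & ~ab1, l <- o & ~ab3,
# ab1 <- BOT, ab1 <- ~o, ab3 <- BOT, ab3 <- ~e
k3 = [("e", [("TOP", True)]),
      ("l", [("e", True), ("ab1", False)]),
      ("l", [("o", True), ("ab3", False)]),
      ("ab1", [("BOT", True)]), ("ab1", [("o", False)]),
      ("ab3", [("BOT", True)]), ("ab3", [("e", False)])]
```

In the computed least model e is true, ab3 is false, and l, ab1, and o remain unknown, so the conclusion that she will study late in the library is suppressed.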

2.4   The Denial of the Antecedent
Now suppose that in the example discussed in Section 2.1 the fact that she has
an essay to write is replaced by she does not have an essay to write. This denial
of the antecedent is encoded by e ← ⊥ instead of e ← ⊤. Weakly completing
the implications we obtain the set

                     K4 = {e ↔ ⊥, ℓ ↔ e ∧ ¬ab1, ab1 ↔ ⊥}.

   If a subject is asked whether she will study late in the library, then a model for
K4 is constructed as follows. In the first step, e ↔ ⊥ and ab1 ↔ ⊥ are satisfied
by the following mapping:
                                   true false
                                              e
                                             ab1
Under this mapping the right-hand side of the equivalence ℓ ↔ e ∧ ¬ab1 is
mapped to false and, consequently, ℓ will be mapped to false as well:

                                    true    false
                                              e
                                             ab1
                                              ℓ

The query whether she will study late in the library can now be answered nega-
tively given this model.

   The cases where the denial of the antecedent is combined with alternative
and additional arguments can be modelled in a similar way, but now the alter-
native argument leads to suppression [6].


3     Skeptical Abduction
3.1   The Affirmation of the Consequent
Consider the conditional if she has an essay to write, then she will study late in
the library. As before, it is encoded by the rule ℓ ← e ∧ ¬ab1 and the assumption
ab1 ← ⊥. Their weak completion is

                         K5 = {ℓ ↔ e ∧ ¬ab1, ab1 ↔ ⊥}.

As the least model of this set we obtain:
                                    true    false
                                             ab1

Under this model the propositional variables ℓ and e are mapped to unknown.
Hence, if we observe that she will study late in the library, then this observation

cannot be explained by this model. We propose to use abduction [13] in order to
explain the observation. Because e is the only undefined propositional letter in
this context, the set of abducibles is {e ← ⊤, e ← ⊥}. The observation ℓ can be
explained by selecting e ← ⊤ from the set of abducibles, weakly completing it
to obtain e ↔ ⊤, and adding this equivalence to K5. Thus, we obtain K1 again
and conclude that she has an essay to write.
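This abductive step can be sketched as a simple generate-and-test over the abducibles. The encoding below is our own, self-contained illustration (reusing a compact version of the semantic operator); it only searches single-fact explanations, which suffices for this example:

```python
def val(atom, positive, t, f):
    """Three-valued value of a literal under the interpretation (t, f)."""
    v = ("T" if atom == "TOP" or atom in t else
         "F" if atom == "BOT" or atom in f else "U")
    return v if positive else {"T": "F", "F": "T", "U": "U"}[v]

def least_model(program):
    """Least fixed point of the semantic operator, from the empty interpretation."""
    t, f = set(), set()
    while True:
        nt, nf = set(), set()
        for h in {head for head, _ in program}:
            vals = []
            for head, body in program:
                if head == h:
                    lv = [val(a, p, t, f) for a, p in body]
                    vals.append("F" if "F" in lv else "U" if "U" in lv else "T")
            if "T" in vals:
                nt.add(h)
            elif all(v == "F" for v in vals):
                nf.add(h)
        if (nt, nf) == (t, f):
            return t, f
        t, f = nt, nf

def explanations(program, abducibles, obs, positively=True):
    """Return those abducible facts whose addition makes the observation
    true (or false, if positively is False) in the least model."""
    result = []
    for fact in abducibles:
        t, f = least_model(program + [fact])
        if (obs in t) if positively else (obs in f):
            result.append(fact)
    return result

# K5: l <- e & ~ab1, ab1 <- BOT; abducibles: e <- TOP and e <- BOT
k5 = [("l", [("e", True), ("ab1", False)]), ("ab1", [("BOT", True)])]
abducibles = [("e", [("TOP", True)]), ("e", [("BOT", True)])]
```

Only e ← ⊤ explains the observation ℓ, matching the conclusion that she has an essay to write.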

3.2   Alternative Arguments and the Affirmation of the Consequent
Consider the conditionals if she has an essay to write, then she will study late
in the library and if she has a textbook to read, then she will study late in the
library. As in Section 2.2 they are encoded by two rules and two assumptions,
which are weakly completed to obtain

            K6 = {ℓ ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2), ab1 ↔ ⊥, ab2 ↔ ⊥}.

As the least model of this set we obtain:
                                     true   false
                                             ab1
                                             ab2

Under this model the propositional variables ℓ, e, and t are mapped to unknown.
Hence, if we observe that she will study late in the library, then this observation
cannot be explained by this model. In order to explain the observation we con-
sider the set {e ← ⊤, e ← ⊥, t ← ⊤, t ← ⊥} of abducibles because e and t are
undefined in K6. There are two minimal explanations, viz. e ← ⊤ and t ← ⊤.
Both are weakly completed to obtain e ↔ ⊤ and t ↔ ⊤, and are added to K6,
yielding K2 and

       K7 = {t ↔ ⊤, ℓ ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2), ab1 ↔ ⊥, ab2 ↔ ⊥},

respectively. We can now construct the least models for K2 and K7 :

                     true    false                  true   false
                       e      ab1                     t     ab1
                              ab2                           ab2
                       ℓ                              ℓ

Both models explain ℓ, but they give different reasons for it, viz. e and t. More
formally, the literals ℓ, e, t, ¬ab1, and ¬ab2 follow credulously from the back-
ground knowledge K6 and the observation ℓ because for each of the literals there
exists a minimal explanation such that the literal is true in the least model of the
background knowledge and the explanation. But only the literals ℓ, ¬ab1, and
¬ab2 follow skeptically from the background knowledge K6 and the observation ℓ
because these literals are true in the least models of the background knowledge
and each minimal explanation. Hence, if a subject is asked whether she will study
late in the library, then a subject constructing only the first model and, thus,
reasoning credulously, will answer positively. On the other hand, a subject con-
structing both models and, thus, reasoning skeptically, will not answer positively.
As reported in [1] only 16% of the subjects answer positively. It appears that
most subjects either reason credulously and construct only the second model or
they reason skeptically.
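Credulous and skeptical reasoning can be sketched with the same machinery. The encoding below is again our own self-contained illustration: only positive atoms are compared and only single-fact explanations are searched, which suffices for K6:

```python
def val(atom, positive, t, f):
    """Three-valued value of a literal under the interpretation (t, f)."""
    v = ("T" if atom == "TOP" or atom in t else
         "F" if atom == "BOT" or atom in f else "U")
    return v if positive else {"T": "F", "F": "T", "U": "U"}[v]

def least_model(program):
    """Least fixed point of the semantic operator, from the empty interpretation."""
    t, f = set(), set()
    while True:
        nt, nf = set(), set()
        for h in {head for head, _ in program}:
            vals = []
            for head, body in program:
                if head == h:
                    lv = [val(a, p, t, f) for a, p in body]
                    vals.append("F" if "F" in lv else "U" if "U" in lv else "T")
            if "T" in vals:
                nt.add(h)
            elif all(v == "F" for v in vals):
                nf.add(h)
        if (nt, nf) == (t, f):
            return t, f
        t, f = nt, nf

def reason(program, abducibles, obs):
    """Collect the least models of all single-fact explanations of obs and
    return the skeptically and the credulously entailed (positive) atoms.
    Assumes at least one explanation exists."""
    models = [t for fact in abducibles
              for t, f in [least_model(program + [fact])] if obs in t]
    return set.intersection(*models), set.union(*models)

# K6 with abducibles for the undefined atoms e and t
k6 = [("l", [("e", True), ("ab1", False)]),
      ("l", [("t", True), ("ab2", False)]),
      ("ab1", [("BOT", True)]), ("ab2", [("BOT", True)])]
abducibles = [("e", [("TOP", True)]), ("e", [("BOT", True)]),
              ("t", [("TOP", True)]), ("t", [("BOT", True)])]
```

For K6 and the observation ℓ there are two explanations (e ← ⊤ and t ← ⊤); ℓ follows skeptically, whereas e and t follow only credulously.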


4     Indicative Conditionals
In this section we will extend the weak completion semantics to evaluate in-
dicative conditionals. In particular, we will consider obligation and factual con-
ditionals. Consider the conditionals if it rains, then the streets are wet and if
it rains, then she takes her umbrella taken from [9]. The conditionals have the
same structure, but their semantics appears to be quite different.

4.1   Obligation Conditionals
The first conditional is an obligation conditional because its consequence is
obligatory. We cannot easily imagine a case where the condition it rains is true
and its consequence the streets are wet is not. Moreover, the condition appears to be
necessary as we cannot easily imagine a situation where the consequence is true
and the condition is not. We may be able to imagine cases where a flooding or
a tsunami has occurred, but we would expect that such an extraordinary event
would have been mentioned in the context. We are also not reasoning about a
specific street or a part of a street, where the sprinkler of a careless homeowner
has sprinkled water on the street while watering the garden.

4.2   Factual Conditionals
The second conditional is a factual conditional. Its consequence is not obligatory.
We can easily imagine the case where the condition it rains is true and its
consequence she takes her umbrella is false. She may have forgotten to take her
umbrella or she has decided to take the car and does not need the umbrella.
Moreover, the condition does not appear to be necessary as she may have taken
the umbrella for many reasons like, for example, protecting her from the sun. The
condition is sufficient. The circumstance where the condition is true gives us
adequate grounds to conclude that the consequence is true as well, but there is
no necessity involved.

4.3   Encoding Obligation and Factual Conditionals
When we consider the two conditionals as background knowledge, then their
different semantics should be reflected in different encodings. Following the prin-
ciples developed in Section 2 we obtain

           K8 = {s ↔ r ∧ ¬ab4, u ↔ r ∧ ¬ab5, ab4 ↔ ⊥, ab5 ↔ ⊥},

where s, r, and u denote that the streets are wet, it rains, and she takes her
umbrella, respectively. Its least model is:

                                   true     false
                                             ab4
                                             ab5

The propositional variables s, r, and u are unknown. Because r is undefined
in K8, the set of abducibles contains r ← ⊤ and r ← ⊥. Because the second
conditional is a factual one, it should not necessarily be the case that r being
true implies u being true as well. This can be prevented by adding ab5 ← ⊤ to
the set of abducibles because this fact can be used to override the assumption
ab5 ← ⊥. Moreover, because the condition of the second conditional is sufficient
but not necessary, observing u may not be explained by r being true but by some
other reason. Hence, u ← ⊤ is also added to the set of abducibles. Altogether,
we obtain the set

                    A8 = {r ← ⊤, r ← ⊥, ab5 ← ⊤, u ← ⊤}

of abducibles for K8.


4.4   The Evaluation of Indicative Conditionals

Let if X then Y be a conditional, where the condition X and the consequence Y
are literals. We would like to evaluate the conditional with respect to some
background knowledge. The background knowledge is represented by a finite
set K of definitions and a finite set A of abducibles. As discussed in Section 2.1,
each set of definitions has a unique least model; let M be this model. Considering
the sets K8 and A8, let M8 be the least model of K8, i.e. the mapping where
ab4 and ab5 are mapped to false and all other propositional letters occurring in
the example are mapped to unknown.
    Because M is a mapping assigning a truth value to each formula, we can
simply write M(X) or M(Y ) to obtain the truth values for the literals X and Y ,
respectively. The given conditional if X then Y shall be evaluated as follows:

 1. If M(X) is true, then the conditional is assigned to M(Y ).
 2. If M(X) is false, then the conditional is assigned to true.
 3. If M(X) is unknown, then the conditional is evaluated with respect to the
    skeptical consequences of K given A and considering X as an observation.
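These three cases can be summarized in a small dispatcher (a sketch with our own names; truth values are the strings "T", "F", and "U", and the value for the third case is assumed to be supplied by the skeptical abduction step):

```python
def evaluate_conditional(m_x, m_y, skeptical_y):
    """Evaluate 'if X then Y' given M(X) = m_x and M(Y) = m_y ('T', 'F' or 'U').
    skeptical_y is the value of Y under skeptical abduction with X observed."""
    if m_x == "T":       # case 1: the conditional takes the value of Y
        return m_y
    if m_x == "F":       # case 2: true from a purely logical point of view
        return "T"
    return skeptical_y   # case 3: X unknown, explain X and reason skeptically
```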

The first case is the standard one: The condition X of the conditional is true and,
hence, the value of the conditional hinges on the value of the consequence Y .
If Y is mapped to true, then the conditional is true; if Y is mapped to unknown,
then the conditional is unknown; if Y is mapped to false, then the conditional
is false.
    The second case is also standard if conditionals are viewed from a purely
logical point of view: if X is mapped to false, then the conditional is true inde-
pendently of the value of the consequence Y. However, humans seem to treat
conditionals whose condition is false differently. In particular, the conditional
may be viewed as a counterfactual. In this case, the background knowledge needs
to be revised such that the condition becomes true. This case has been considered
in [5], but it is beyond the scope of this introduction to discuss it here.
     The third case is interesting: If the condition of a conditional is unknown,
then we view the condition as an observation which needs to be explained. More-
over, we consider only skeptical consequences computed with respect to minimal
explanations.

4.5   The Denial of the Consequent
As a first example consider the conditional if the streets are not wet, then it
did not rain (if ¬s then ¬r). Its condition ¬s is unknown under M8. Applying
abduction we find the only minimal explanation r ← ⊥ for the observation ¬s.
Together with the background knowledge K8 we obtain

       K9 = {s ↔ r ∧ ¬ab4, u ↔ r ∧ ¬ab5, ab4 ↔ ⊥, ab5 ↔ ⊥, r ↔ ⊥}.

Its least model is:
                                     true   false
                                             ab4
                                             ab5
                                              r
                                              s
                                              u
It explains ¬s. Moreover, the consequence ¬r of the conditional is mapped to
true making the conditional true as expected.
    As a second example consider the conditional if she did not take her umbrella,
then it did not rain (if ¬u then ¬r). Its condition ¬u is unknown under M8.
Applying abduction we find two minimal explanations for the observation ¬u,
viz. r ← ⊥ and ab5 ← ⊤. Together with the background knowledge K8 we obtain
K9 and

        K10 = {s ↔ r ∧ ¬ab4, u ↔ r ∧ ¬ab5, ab4 ↔ ⊥, ab5 ↔ ⊥ ∨ ⊤},

respectively. Their least models are:

                      true   false                  true   false
                              ab4                    ab5    ab4
                              ab5                            u
                               r
                               s
                               u

Whereas the first explanation explains ¬u by stating that it did not rain, the
second explains ¬u by stating that the abnormality ab5 is true. She may have
simply forgotten her umbrella when she left home. Whereas the first explanation
entails that it did not rain, the background knowledge together with the second
explanation entails neither r nor ¬r. Hence, ¬r follows credulously, but not
skeptically, from the background knowledge and the observation ¬u. Because
conditionals are evaluated skeptically, the conditional is evaluated to unknown,
as expected.
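The two explanations and their least models can be checked mechanically. The sketch below is our own self-contained encoding (with the compact semantic operator; explanations are single abducible facts); it collects the least models of all explanations that map u to false:

```python
def val(atom, positive, t, f):
    """Three-valued value of a literal under the interpretation (t, f)."""
    v = ("T" if atom == "TOP" or atom in t else
         "F" if atom == "BOT" or atom in f else "U")
    return v if positive else {"T": "F", "F": "T", "U": "U"}[v]

def least_model(program):
    """Least fixed point of the semantic operator, from the empty interpretation."""
    t, f = set(), set()
    while True:
        nt, nf = set(), set()
        for h in {head for head, _ in program}:
            vals = []
            for head, body in program:
                if head == h:
                    lv = [val(a, p, t, f) for a, p in body]
                    vals.append("F" if "F" in lv else "U" if "U" in lv else "T")
            if "T" in vals:
                nt.add(h)
            elif all(v == "F" for v in vals):
                nf.add(h)
        if (nt, nf) == (t, f):
            return t, f
        t, f = nt, nf

# K8: s <- r & ~ab4, u <- r & ~ab5, ab4 <- BOT, ab5 <- BOT
k8 = [("s", [("r", True), ("ab4", False)]),
      ("u", [("r", True), ("ab5", False)]),
      ("ab4", [("BOT", True)]), ("ab5", [("BOT", True)])]
# A8: r <- TOP, r <- BOT, ab5 <- TOP, u <- TOP
a8 = [("r", [("TOP", True)]), ("r", [("BOT", True)]),
      ("ab5", [("TOP", True)]), ("u", [("TOP", True)])]

# False-sets of the least models of all explanations for the observation ~u,
# i.e. all abducible facts whose addition maps u to false:
false_sets = [f for fact in a8
              for t, f in [least_model(k8 + [fact])] if "u" in f]
```

Exactly two explanations survive (r ← ⊥ and ab5 ← ⊤); r is false in the first least model but unknown in the second, so ¬r follows only credulously.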

4.6   The Affirmation of the Consequent
As another example consider the conditional if the streets are wet, then it rained
(if s then r). Its condition s is unknown under M8. Applying abduction we find
the only minimal explanation r ← ⊤ for the observation s. Together with the
background knowledge K8 we obtain:

      K11 = {s ↔ r ∧ ¬ab4, u ↔ r ∧ ¬ab5, ab4 ↔ ⊥, ab5 ↔ ⊥, r ↔ ⊤}.

Its least model is:
                                      true   false
                                        r     ab4
                                              ab5
                                       s
                                       u
It explains s. Moreover, the consequence r of the conditional is mapped to true
making the conditional true as well.
    As a final example consider the conditional if she took her umbrella, then it
rained (if u then r). Its condition u is again unknown under M8. Applying
abduction we find two minimal explanations, viz. r ← ⊤ and u ← ⊤. Together
with the background knowledge K8 we obtain K11 and

       K12 = {s ↔ r ∧ ¬ab4, u ↔ (r ∧ ¬ab5) ∨ ⊤, ab4 ↔ ⊥, ab5 ↔ ⊥},

respectively. Their least models are:

                      true    false                  true   false
                        r      ab4                     u     ab4
                               ab5                           ab5
                       s
                       u

Whereas the first explanation explains u by stating that it rained, the second
explanation explains u by stating that she took her umbrella for whatever reason.
As before, r follows credulously but not skeptically. Hence, the conditional is
evaluated to unknown. Skeptical reasoning yields the expected answer again,
whereas a credulous approach does not.

   In [9] it is also shown that the approach adequately models the abstract as
well as the social version of the selection task [12, 28]. The conditional if there is
the letter D on one side of the card, then there is the number 3 on the other side
is considered as a factual one with necessary condition, whereas the conditional
if a person is drinking beer, then the person must be over 19 years of age is
considered as an obligation with sufficient condition. Reasoning skeptically yields
the adequate answers.

5   Conclusion
The weak completion semantics is a novel cognitive theory which has been ap-
plied to adequately model various human reasoning tasks. Background knowl-
edge is encoded as a set of definitions based on the following principles:
  – positive information is encoded as facts,
  – negative information is encoded as assumptions,
  – conditionals are encoded as licenses for inferences, and
  – the only-if halves of definitions are added.
For each set of definitions a set of abducibles is constructed as follows:
  – all facts and assumptions for the propositional letters which are undefined
    in the background knowledge are added,
  – the abnormalities of factual conditionals are added as facts, and
  – the conclusions of conditionals with sufficient condition are added as facts.
The background knowledge admits a least supported model under Łukasiewicz
logic, which can be computed as the least fixed point of an appropriate semantic
operator. Reasoning is performed with respect to the least supported model.
If an observation is unknown under the least supported model, then skeptical
abduction using minimal explanations is applied. There exists a connectionist
realization.
    The approach presented in this paper is restricted to propositional logic and
does neither consider counterfactuals nor contextual abduction. These extensions
are presented in [5, 10, 22, 23]. In particular, if the weak completion semantics is
extended to first-order logic, then additional principles are applied in the con-
struction of the background knowledge like
  – existential import and Gricean implicature,
  – unknown generalization,
  – search for alternative models,
  – converse interpretation,
  – blocking of conclusions by double negatives,
  – negation by transformation,
but it is beyond the scope of this introduction to discuss these principles.
    There are a variety of open problems and questions. For example, skeptical
abduction is exponential [11, 18]. Hence, it is infeasible for humans to reason
skeptically if the reasoning episodes become larger. We hypothesize that humans
generate some, but usually not all minimal explanations and reason skeptically
with respect to them. Which explanations are generated? Are short or simple
explanations preferred? Are more explanations generated if more time is avail-
able? Is the generation of explanations biased and, if so, how is it biased? Does
attention play a role?

Acknowledgements      The authors would like to thank Ana Oliveira da Costa,
Luís Moniz Pereira, Tobias Philipp, Marco Ragni, and Christoph Wernhard for
many useful discussions and comments.


References

 1. R. Byrne. Suppressing valid inferences with conditionals. Cognition, 31:61–83,
    1989.
 2. K. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and
    Databases, pages 293–322. Plenum, New York, 1978.
 3. E.-A. Dietz and S. Hölldobler. A new computational logic approach to reason with
    conditionals. In F. Calimeri, G. Ianni, and M. Truszczynski, editors, Logic Pro-
    gramming and Nonmonotonic Reasoning, 13th International Conference, LPNMR,
    volume 9345 of Lecture Notes in Artificial Intelligence, pages 265–278. Springer,
    2015.
 4. E.-A. Dietz, S. Hölldobler, and R. Höps. A computational logic approach to human
    spatial reasoning. In IEEE Symposium Series on Computational Intelligence, pages
    1637–1634, 2015.
 5. E.-A. Dietz, S. Hölldobler, and L. M. Pereira. On conditionals. In G. Gottlob,
    G. Sutcliffe, and A. Voronkov, editors, Global Conference on Artificial Intelligence,
    volume 36 of Epic Series in Computing, pages 79–92. EasyChair, 2015.
 6. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the
    suppression task. In N. Miyake, D. Peebles, and R. P. Cooper, editors, Proceedings
    of the 34th Annual Conference of the Cognitive Science Society, pages 1500–1505.
    Cognitive Science Society, 2012.
 7. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to
    the abstract and the social case of the selection task. In Proceedings Eleventh
    International Symposium on Logical Formalizations of Commonsense Reasoning,
    2013. commonsensereasoning.org/2013/proceedings.html.
 8. E.-A. Dietz Saldanha, S. Hölldobler, C. D. P. Kencana Ramli, and L. Palacios
    Medinacelli. A core method for the weak completion semantics with skeptical
    abduction. Technical report, TU Dresden, International Center for Computational
    Logic, 2017. (submitted).
 9. E.-A. Dietz Saldanha, S. Hölldobler, and I. Lourêdo Rocha. Obligation versus fac-
    tual conditionals under the weak completion semantics. In S. Hölldobler, A. Ma-
    likov, and C. Wernhard, editors, Proceedings of the Second Young Scientists’ In-
    ternational Workshop on Trends in Information Processing, volume 1837, pages
    55–64. CEUR-WS.org, 2017. http://ceur-ws.org/Vol-1837/.
10. E.-A. Dietz Saldanha, S. Hölldobler, and L. M. Pereira. Contextual reasoning:
    Usually birds can abductively fly. In Logic Programming and Nonmonotonic Rea-
    soning, 14th International Conference, LPNMR, 2017. (to appear).
11. E.-A. Dietz Saldanha, S. Hölldobler, and T. Philipp. Contextual abduction and its
    complexity issues. In Proceedings of the 4th International Workshop on Defeasible
    and Ampliative Reasoning, 2017. (to appear).
12. R. Griggs and J. Cox. The elusive thematic materials effect in the Wason selection
    task. British Journal of Psychology, 73:407–420, 1982.
13. C. Hartshorne and A. Weiss, editors. Collected Papers of Charles Sanders Peirce.
    Belknap Press, 1932.

14. S. Hölldobler. Weak completion semantics and its applications in human reasoning.
    In U. Furbach and C. Schon, editors, Bridging 2015 – Bridging the Gap between
    Human and Automated Reasoning, volume 1412 of CEUR Workshop Proceedings,
    pages 2–16. CEUR-WS.org, 2015. http://ceur-ws.org/Vol-1412/.
15. S. Hölldobler and Y. Kalinke. Towards a new massively parallel computational
    model for logic programming. In Proceedings of the ECAI94 Workshop on Com-
    bining Symbolic and Connectionist Processing, pages 68–77. ECCAI, 1994.
16. S. Hölldobler and C. D. P. Kencana Ramli. Logic programs under three-valued
    Łukasiewicz semantics. In P. M. Hill and D. S. Warren, editors, Logic Pro-
    gramming, volume 5649 of Lecture Notes in Computer Science, pages 464–478.
    Springer-Verlag Berlin Heidelberg, 2009.
17. S. Hölldobler and C. D. P. Kencana Ramli. Logics and networks for human reason-
    ing. In C. Alippi, M. M. Polycarpou, C. G. Panayiotou, and G. Ellinas, editors,
    Artificial Neural Networks – ICANN, volume 5769 of Lecture Notes in Computer
    Science, pages 85–94. Springer-Verlag Berlin Heidelberg, 2009.
18. S. Hölldobler, T. Philipp, and C. Wernhard. An abductive model for human reason-
    ing. In Proceedings Tenth International Symposium on Logical Formalizations of
    Commonsense Reasoning, 2011. commonsensereasoning.org/2011/proceedings.
    html.
19. S. Khemlani and P. N. Johnson-Laird. Theories of the syllogism: A meta-analysis.
    Psychological Bulletin, 138(3):427–457, 2012.
20. S. Kleene. Introduction to Metamathematics. North-Holland, 1952.
21. J. Łukasiewicz. O logice trójwartościowej. Ruch Filozoficzny, 5:169–171, 1920.
    English translation: On Three-Valued Logic. In: Jan Łukasiewicz, Selected Works
    (L. Borkowski, ed.), North Holland, 87–88, 1990.
22. A. Oliveira da Costa, E.-A. Dietz Saldanha, and S. Hölldobler. Monadic reasoning
    using weak completion semantics. In S. Hölldobler, A. Malikov, and C. Wernhard,
    editors, Proceedings of the Second Young Scientists’ International Workshop on
    Trends in Information Processing, volume 1837. CEUR-WS.org, 2017. http://
    ceur-ws.org/Vol-1837/.
23. A. Oliveira da Costa, E.-A. Dietz Saldanha, S. Hölldobler, and M. Ragni. A
    computational logic approach to human syllogistic reasoning. In Proceedings of the
    39th Annual Conference of the Cognitive Science Society, 2017. (to appear).
24. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. An abductive reasoning approach to
    the belief-bias effect. In C. Baral, G. D. Giacomo, and T. Eiter, editors, Principles
    of Knowledge Representation and Reasoning: Proceedings of the 14th International
    Conference, pages 653–656, Cambridge, MA, 2014. AAAI Press.
25. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. Contextual abductive reasoning
    with side-e↵ects. In I. Niemelä, editor, Theory and Practice of Logic Programming
    (TPLP), volume 14, pages 633–648, Cambridge, UK, 2014. Cambridge University
    Press.
26. N. Rescher. Many-valued logic. McGraw-Hill, New York, NY, 1969.
27. K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science.
    MIT Press, 2008.
28. P. C. Wason. Reasoning about a rule. The Quarterly Journal of Experimental
    Psychology, 20:273–281, 1968.