A Computational Logic Approach to Syllogisms in Human Reasoning

Emmanuelle-Anna Dietz
dietz@iccl.tu-dresden.de
International Center for Computational Logic, TU Dresden, 01062 Dresden, Germany

Abstract. Psychological experiments on syllogistic reasoning have shown that participants did not always deduce the classically valid conclusions. In particular, the results show that they had difficulties reasoning with syllogistic statements that contradicted their own beliefs. This paper discusses syllogisms in human reasoning and proposes a formalization under the weak completion semantics.

1 Introduction

Evans, Barston and Pollard [10] conducted a psychological study on deductive reasoning which demonstrated possibly conflicting processes in human reasoning. Participants were presented with different syllogisms, for which they had to decide whether these were (classically) logically valid. Consider S_vit:

    Premise 1    No nutritional things are inexpensive.
    Premise 2    Some vitamin tablets are inexpensive.
    Conclusion   Therefore, some vitamin tablets are not nutritional.

The conclusion necessarily follows from the premises. However, approximately half of the participants said that this syllogism was not logically valid. They were explicitly asked to logically validate or invalidate various syllogisms. Table 1 gives four examples of syllogisms which have been tested in [10]. Participants were instructed that if they judged that "the conclusion necessarily follows from the statements in the passage, [you] should answer 'yes,' otherwise 'no'." The last column shows the percentage of the participants who believed the syllogism to be valid.

Evans, Barston and Pollard asserted that the participants were influenced by their own beliefs, their so-called belief bias, where we distinguish between the negative and the positive belief bias [11]. The negative belief bias, i.e., when support for an unbelievable conclusion is suppressed, occurs for 56% of the participants in S_vit. A positive belief bias, i.e., when the acceptance of a believable conclusion is raised, occurs for 71% of the participants in S_cig. As pointed out in [14], Wilkins [32] had already observed that syllogisms which conflict with our beliefs are more difficult to solve. People reflectively read the instructions and understand well that they are required to reason logically from the premises to the conclusion. However, the results show that their intuitions are stronger and deliver a tendency to say 'yes' or 'no' depending on whether the conclusion is believable [9].

    Type                       Case     Syllogism                                                    %
    valid and believable       S_dog    No police dogs are vicious.                                  89
                                        Some highly trained dogs are vicious.
                                        Therefore, some highly trained dogs are not police dogs.
    valid and unbelievable     S_vit    No nutritional things are inexpensive.                       56
                                        Some vitamin tablets are inexpensive.
                                        Therefore, some vitamin tablets are not nutritional.
    invalid and unbelievable   S_rich   No millionaires are hard workers.                            10
                                        Some rich people are hard workers.
                                        Therefore, some millionaires are not rich people.
    invalid and believable     S_cig    No addictive things are inexpensive.                         71
                                        Some cigarettes are inexpensive.
                                        Therefore, some addictive things are not cigarettes.

    Table 1. Examples of four kinds of syllogisms. The percentages are summarized results over three experiments and show the rate at which the conclusion is accepted as valid [10].

Various theories have tried to explain this phenomenon.
Some conclusions can be explained by conversion of the premises [2] or by assuming that the atmosphere of the premises influences the acceptance of the conclusion [33]. Johnson-Laird and Byrne [20] proposed the mental model theory [19], which additionally supposes a search for counterexamples when validating the conclusion. These theories have been partly rejected or claimed to be incomplete. Evans et al. [10, 12] proposed a theory which is sometimes referred to as the selective scrutiny model [1, 14]: humans heuristically accept any syllogism with a believable conclusion and only check the logic if the conclusion contradicts their beliefs. Adler and Rips [1] claim that this behavior is rational because it efficiently maintains our beliefs unless there is evidence to change them. It results in an adaptive process in which we only make an effort towards a logical evaluation when the conclusion is unbelievable. It would take a lot of effort to constantly verify our beliefs even when there is no reason to question them. As people intend to keep their beliefs as consistent as possible, they invest more effort in examining statements that contradict them than in examining statements that comply with them. However, this theory cannot fully explain all classical logical errors in the reasoning process. Yet another approach, the selective processing model [8], accounts only for a single preferred model: if the conclusion is neutral or believable, humans attempt to construct a model that supports it; otherwise, they attempt to construct a model that rejects it. As summarized in [14], there are several stages at which a belief bias can take place. First, beliefs can influence our interpretation of the premises. Second, in case a statement contradicts our beliefs, we might search for alternative models and check whether the conclusion is plausible.

Stenning and van Lambalgen [30] explain why certain aspects influence the interpretations made by humans when evaluating syllogisms and discuss this in the context of mental models. They propose to model human reasoning as a two-step procedure: first, reasoning towards an adequate representation, and second, reasoning adequately with respect to this representation. In our context, the first step is the representational part, that is, how our beliefs influence the interpretation of the premises. The second step is the procedural part, that is, whether we search for alternative models and whether the conclusion is plausible. After we have specified some preliminaries, we explain in Section 3 how the four cases of the syllogistic reasoning task just discussed can be represented as logic programs. Based on this representation, Section 4 discusses how beliefs and background knowledge influence the reasoning process and shows that the results can be modeled by computing the least models of the weak completion.

2 Preliminaries

The general notation used in this paper is based on [15, 22].

2.1 Logic Programs

We restrict ourselves to datalog programs, i.e., the set of terms consists only of constants and variables. A logic program P is a finite set of clauses of the form

    A ← L1 ∧ ... ∧ Ln,    (1)

where n ≥ 0 is finite, A is an atom and the Li, 1 ≤ i ≤ n, are literals. A is called the head of the clause and the subformula to the right of the implication sign is called the body of the clause.
If a clause contains variables, then they are implicitly universally quantified within the scope of the entire clause. A clause that does not contain variables is called a ground clause. In case n = 0, the clause is a positive fact and denoted as A ← ⊤. A negative fact is denoted as A ← ⊥, where ⊤ and ⊥ are the truth-value constants for true and false, respectively. The notion of falsehood appears counterintuitive at first sight, but programs will be interpreted under their (weak) completion, where the implication sign is replaced by an equivalence sign. We assume a fixed set of constants, denoted by CONSTANTS, which is nonempty and finite. constants(P) denotes the set of all constants occurring in P. If not stated otherwise, we assume that CONSTANTS = constants(P). gP denotes the ground instantiation of P, that is, gP contains exactly all ground instances of the clauses of P with respect to the alphabet. atoms(P) denotes the set of all atoms occurring in P. If an atom A is not the head of any clause in P, then A is undefined in P. The set of all atoms that are undefined in P is denoted by undef(P).

2.2 Three-Valued Łukasiewicz Semantics

We consider the three-valued Łukasiewicz semantics [23], whose truth values are ⊤, ⊥ and U, meaning true, false and unknown, respectively. A three-valued interpretation I is a mapping from formulas to the set of truth values {⊤, ⊥, U}. The truth value of a given formula under I is determined according to the truth tables in Table 2. We represent an interpretation as a pair I = ⟨I⊤, I⊥⟩ of disjoint sets of atoms, where I⊤ is the set of all atoms that are mapped to ⊤ by I, and I⊥ is the set of all atoms that are mapped to ⊥ by I. Atoms which occur neither in I⊤ nor in I⊥ are mapped to U. Let I = ⟨I⊤, I⊥⟩ and J = ⟨J⊤, J⊥⟩ be two interpretations: I ⊆ J iff I⊤ ⊆ J⊤ and I⊥ ⊆ J⊥. I(F) = ⊤ means that a formula F is mapped to true under I. M is a model of gP if it is an interpretation which maps each clause occurring in gP to ⊤. I is the least model of gP iff for any other model J of gP it holds that I ⊆ J.

    F | ¬F      ∧ | ⊤  U  ⊥      ∨ | ⊤  U  ⊥      ←_Ł | ⊤  U  ⊥      ↔_Ł | ⊤  U  ⊥
    ⊤ | ⊥       ⊤ | ⊤  U  ⊥      ⊤ | ⊤  ⊤  ⊤      ⊤   | ⊤  ⊤  ⊤      ⊤   | ⊤  U  ⊥
    U | U       U | U  U  ⊥      U | ⊤  U  U      U   | U  ⊤  ⊤      U   | U  ⊤  U
    ⊥ | ⊤       ⊥ | ⊥  ⊥  ⊥      ⊥ | ⊤  U  ⊥      ⊥   | ⊥  U  ⊤      ⊥   | ⊥  U  ⊤

    Table 2. Truth tables of the three-valued Łukasiewicz semantics, where ⊤, ⊥, and U denote true, false, and unknown, respectively. Rows give the value of the left argument F, columns the value of the right argument.

2.3 Reasoning with Respect to Least Models

Consider the following transformation for gP:
1. Replace all clauses in gP with the same head, A ← Body1, A ← Body2, ..., by the single expression A ← Body1 ∨ Body2 ∨ ....
2. If A ∈ undef(gP), then add A ← ⊥.
3. Replace all occurrences of ← by ↔.

The resulting set of equivalences is called the completion of gP [3]. If Step 2 is omitted, then the resulting set is called the weak completion of gP (wc gP). In contrast to completed programs, the model intersection property holds for weakly completed programs [17]. This guarantees the existence of a least model for every program. Stenning and van Lambalgen [30] devised an appropriate semantic operator, which has been generalized to first-order programs in [16]: Let I be an interpretation. Then Φ_SvL,P(I) = ⟨J⊤, J⊥⟩, where

    J⊤ = {A | there exists a clause A ← Body ∈ gP with I(Body) = ⊤},
    J⊥ = {A | there exists a clause A ← Body ∈ gP, and for all clauses A ← Body ∈ gP we find I(Body) = ⊥}.

As shown in [16], the least fixed point of Φ_SvL,P is identical to the least model of the weak completion of gP under the three-valued Łukasiewicz semantics. A small computational sketch of this fixed-point construction is given below.
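The following is a minimal, self-contained Python sketch of the operator and the fixed-point computation, assuming a ground program is given as a list of (head, body) pairs whose bodies are lists of literals; the string-based encoding and all names (TOP, BOT, phi_svl, lm_wc) are illustrative assumptions and not part of the original formalization.

    # Sketch (not the authors' implementation): ground clauses are (head, body)
    # pairs, a body is a list of literals, a literal is an atom "p" or "~p".
    # The truth constants true and false are written TOP and BOT.

    TOP, BOT = "TOP", "BOT"

    def literal_value(lit, true_atoms, false_atoms):
        """Three-valued value of a literal: 'T', 'F' or 'U'."""
        if lit == TOP:
            return "T"
        if lit == BOT:
            return "F"
        negated = lit.startswith("~")
        atom = lit[1:] if negated else lit
        if atom in true_atoms:
            value = "T"
        elif atom in false_atoms:
            value = "F"
        else:
            value = "U"
        if negated:
            value = {"T": "F", "F": "T", "U": "U"}[value]
        return value

    def body_value(body, true_atoms, false_atoms):
        """Conjunction of literals: minimum with respect to T > U > F."""
        values = [literal_value(lit, true_atoms, false_atoms) for lit in body]
        if "F" in values:
            return "F"
        if "U" in values:
            return "U"
        return "T"

    def phi_svl(program, interpretation):
        """One application of the Stenning-van Lambalgen operator."""
        true_atoms, false_atoms = interpretation
        j_true, j_false = set(), set()
        for head in {h for h, _ in program}:
            values = [body_value(b, true_atoms, false_atoms)
                      for h, b in program if h == head]
            if "T" in values:
                j_true.add(head)          # some body is true
            elif all(v == "F" for v in values):
                j_false.add(head)         # defined, and all bodies are false
        return j_true, j_false

    def lm_wc(program):
        """Least model of the weak completion, obtained by iterating phi_svl
        from the empty interpretation; terminates for the (acyclic) ground
        programs considered in this paper."""
        interpretation = (set(), set())
        while True:
            nxt = phi_svl(program, interpretation)
            if nxt == interpretation:
                return interpretation
            interpretation = nxt

Undefined atoms never occur as heads and therefore stay unknown, which is exactly the difference to the completion discussed next.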
In the following, we will denote the least model of the weak completion of a given program P by lm_Ł wc gP. Starting from I = ⟨∅, ∅⟩, lm_Ł wc gP is computed by iterating Φ_SvL,P. Given a program P and a formula F, P ⊨_lmwc F iff lm_Ł wc gP(F) = ⊤. Notice that Φ_SvL differs in a subtle way from the well-known Fitting operator Φ_F introduced in [13]: the definition of Φ_F is like that of Φ_SvL, except that in the specification of J⊥ the first condition "there exists a clause A ← Body ∈ gP and" is dropped. The least fixed point of Φ_F,P corresponds to the least model of the completion of gP. If an atom A is undefined in gP, then for arbitrary interpretations I it holds that A ∈ J⊥ in Φ_F,P(I) = ⟨J⊤, J⊥⟩, whereas if Φ_SvL is applied instead of Φ_F, this does not hold for any interpretation I. The correspondence between the weak completion semantics and the well-founded semantics [31] for tight programs, i.e., programs without positive cycles, is shown in [6].

2.4 Integrity Constraints

A set of integrity constraints IC comprises clauses of the form ⊥ ← Body, where Body is a conjunction of literals. Under three-valued semantics, there are several ways of understanding integrity constraints [21], two of them being the theoremhood view and the consistency view. Consider IC = {⊥ ← ¬p ∧ q}. The theoremhood view requires that a model satisfies the set of integrity constraints only if, for each of its clauses, Body is false under this model. In the example, this is only the case if p is true or q is false in the model. In the consistency view, the set of integrity constraints is satisfied by a model if Body is unknown or false in it. Here, a model satisfies IC already if either p or q is unknown. Given P and a set IC, P satisfies IC iff there exists an interpretation I which is a model of gP and, for each ⊥ ← Body ∈ IC, we find that I(Body) ∈ {⊥, U}.

2.5 Abduction

We extend two-valued abduction [21] to three-valued semantics. The set of abducibles A_P may contain not only positive but also negative facts:

    A_P = {A ← ⊤ | A ∈ undef(P)} ∪ {A ← ⊥ | A ∈ undef(P)}.

Let ⟨P, A_P, IC, ⊨_lmwc⟩ be an abductive framework, E ⊆ A_P, and let the observation O be a non-empty set of literals. O is explained by E given P and IC iff P ⊭_lmwc O, P ∪ E ⊨_lmwc O and lm_Ł wc g(P ∪ E) satisfies IC. O is explained given P and IC iff there exists an E such that O is explained by E given P and IC. We assume that explanations are minimal, that is, there is no other explanation E′ ⊂ E for O. In case abducibles are not abduced as positive or negative facts, they stay unknown in the least model of the weak completion. We distinguish between skeptical and credulous abduction as follows: F follows skeptically from P, IC and O iff O can be explained given P and IC, and for all minimal explanations E for O given P and IC it holds that P ∪ E ⊨_lmwc F. F follows credulously from P, IC and O iff there exists a minimal explanation E for O given P and IC such that P ∪ E ⊨_lmwc F. A small sketch of this procedure, building on the code above, is given below.
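Continuing the sketch from Section 2.3, the abductive machinery can be illustrated as follows; the brute-force enumeration of candidate explanations and the function names are illustrative assumptions rather than the authors' procedure, and observations, integrity-constraint bodies and queried formulas are restricted to literals and literal lists for simplicity.

    from itertools import combinations

    def entails(program, literal):
        """P |= L under the least model of the weak completion (sketch)."""
        true_atoms, false_atoms = lm_wc(program)
        return literal_value(literal, true_atoms, false_atoms) == "T"

    def satisfies_ic(program, ics):
        """Consistency view: every IC body (a list of literals) must be
        false or unknown in the least model."""
        true_atoms, false_atoms = lm_wc(program)
        return all(body_value(body, true_atoms, false_atoms) != "T"
                   for body in ics)

    def explanations(program, abducibles, ics, observation):
        """Subset-minimal E with P not|= O, P u E |= O and IC satisfied,
        found by naive enumeration over subsets of the abducibles."""
        if all(entails(program, lit) for lit in observation):
            return []  # O already follows, so nothing needs to be explained
        found = []
        for size in range(1, len(abducibles) + 1):
            for candidate in combinations(abducibles, size):
                extended = program + list(candidate)
                minimal = not any(all(c in candidate for c in e) for e in found)
                if (minimal
                        and all(entails(extended, lit) for lit in observation)
                        and satisfies_ic(extended, ics)):
                    found.append(candidate)
        return found

    def follows_skeptically(program, abducibles, ics, observation, formula):
        exps = explanations(program, abducibles, ics, observation)
        return bool(exps) and all(entails(program + list(e), formula) for e in exps)

    def follows_credulously(program, abducibles, ics, observation, formula):
        exps = explanations(program, abducibles, ics, observation)
        return any(entails(program + list(e), formula) for e in exps)

Enumerating subsets is exponential in general; it is used here only because the examples in Section 4 involve a single abducible atom.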
3 Reasoning Towards an Appropriate Logical Form

Let us specify the syllogisms from the introduction as logic programs. We first discuss a technical aspect that allows us to encode the negative consequences of the premises. Section 3.2 covers the representational part and shows how the beliefs which might influence the interpretation of the premises are encoded.

3.1 Positive Encoding of Negative Consequences

The first premise of S_dog is

    No police dogs are vicious.

and is equivalent to

    If something is vicious, then it is not a police dog.

and

    If something is a police dog, then it is not vicious.

The consequences of these two conditionals are the negation of it is a police dog and the negation of it is vicious, respectively. As the weak completion semantics does not allow negative heads in clauses, we cannot represent these inferences in a logic program straightaway. For every negative conclusion ¬p(X) we introduce an auxiliary predicate p′(X) together with the clause p(X) ← ¬p′(X). We obtain the following preliminary representation of the first premise of S_dog with respect to vicious:¹

    police_dog′(X) ← vicious(X),
    police_dog(X) ← ¬police_dog′(X),

where police_dog(X), police_dog′(X), and vicious(X) denote that X is a police dog, X is not a police dog, and X is vicious, respectively. A model I = ⟨I⊤, I⊥⟩ that contains both police_dog(X) and police_dog′(X) in I⊤ should be invalidated. This condition can be represented by the integrity constraint

    IC_police_dog = {⊥ ← police_dog(X) ∧ police_dog′(X)},

which is to be understood as discussed in Section 2.4. For the following examples, whenever P contains both a predicate p(X) and its counterpart p′(X), we implicitly assume IC_p = {⊥ ← p(X) ∧ p′(X)}.

¹ In the following we will only encode one of the two inferences.

3.2 Abnormality Predicates and Background Knowledge

Newstead and Griggs [25] have shown that universal quantifiers in natural language are often understood as fuzzy quantifiers, which allow exceptions. In some circumstances, for all is understood as for almost all. They argue that the statement all Germans are hardworking seems to permit exceptions and is understood as a generalization about Germans, not as a statement that is true for each one of them. This fuzzy interpretation of quantifiers seems to be in line with Stenning and van Lambalgen's suggestion to interpret conditionals as default licenses for implication [29, 30]. They propose to introduce abnormality predicates, which are added to the antecedent of the implication and are initially assumed to be false. Consider again Premise 1 of S_dog, which can be understood as

    If something is vicious and not abnormal (in that respect), then it is not a police dog.
    Nothing (by default) is abnormal (regarding the previous sentence).

This information together with the previously introduced clauses for Premise 1 of S_dog can now be encoded as:

    police_dog′(X) ← vicious(X) ∧ ¬ab_dog′(X),
    police_dog(X) ← ¬police_dog′(X),
    ab_dog′(X) ← ⊥.

S_dog. Premise 2 states that there are some highly trained dogs that are vicious. This statement presupposes that there actually exists something, say a new reserved (Skolem) constant a, for which the following is true: highly_trained(a) ← ⊤ and vicious(a) ← ⊤. P_dog represents the first two premises of S_dog:

    police_dog′(X) ← vicious(X) ∧ ¬ab_dog′(X),
    police_dog(X) ← ¬police_dog′(X),
    ab_dog′(X) ← ⊥,
    highly_trained(a) ← ⊤,
    vicious(a) ← ⊤.

We encode the first two premises of the other syllogisms similarly.

S_vit. Premise 2 states that there are some vitamin tablets which are inexpensive. We presuppose that there exists something, a, for which these facts are true: vitamin(a) ← ⊤ and inex(a) ← ⊤. Additionally, it is commonly known that

    The purpose of vitamin tablets is to aid nutrition.
This belief and the clause representing Premise 1 lead to

    If something is a vitamin tablet, then it is abnormal (regarding Premise 1 of S_vit).

The program P_vit represents Premise 1 and Premise 2 together with this background knowledge:

    nutritional′(X) ← inex(X) ∧ ¬ab(X),
    nutritional(X) ← ¬nutritional′(X),
    ab(X) ← ⊥,
    ab(X) ← vitamin(X),
    vitamin(a) ← ⊤,
    inex(a) ← ⊤.

nutritional(X) and nutritional′(X) denote that X is nutritional and not nutritional, respectively.

S_rich. Premise 2 states that there are some hard workers who are rich. We presuppose that there is someone, say a, for which these facts are true: hard_worker(a) ← ⊤ and rich(a) ← ⊤. P_rich represents Premise 1 and Premise 2 of S_rich:

    mil′(X) ← hard_worker(X) ∧ ¬ab(X),
    mil(X) ← ¬mil′(X),
    ab(X) ← ⊥,
    rich(a) ← ⊤,
    hard_worker(a) ← ⊤.

mil(X) and mil′(X) denote that X is a millionaire and not a millionaire, respectively.

S_cig. Premise 2 states that there are some cigarettes which are inexpensive. Again, we presuppose that there is something, a, for which these facts are true: cig(a) ← ⊤ and inex(a) ← ⊤. Additionally, it is commonly known that

    Cigarettes are addictive.

This belief and the clause representing Premise 1 lead to

    If something is a cigarette, then it is abnormal (regarding Premise 1 of S_cig).

As discussed by Evans et al. [10], humans seem to have background knowledge or beliefs which might provide the motivation for whether to validate a syllogism. A direct representation of Premise 2 is

    There exists a cigarette which is inexpensive. (1)

Additionally, in the context of Premise 1, we assume that

    Compared to other addictive things, cigarettes are inexpensive. (2)

which implies (1) and biases the reasoning towards a representation. Note that (2) only implies (1) because we understand quantifiers with existential import, i.e., for all implies there exists. This is a reasonable assumption when modeling human reasoning, as in natural language we normally do not quantify over things that do not exist. Furthermore, Stenning and van Lambalgen [30] have shown that humans require existential import for a conditional to be true. The belief bias represented by (2), together with the idea of representing conditionals as default licenses for implication, leads to the conditional

    If something is a cigarette and not abnormal, then it is inexpensive. (3)
    Nothing (as a rule) is abnormal (regarding (3)).

P_cig represents the first two premises and the background knowledge of S_cig as follows:

    addictive′(X) ← inex(X) ∧ ¬ab_add′(X),
    addictive(X) ← ¬addictive′(X),
    ab_add′(X) ← ⊥,
    ab_add′(X) ← cig(X),
    inex(X) ← cig(X) ∧ ¬ab_inex(X),
    ab_inex(X) ← ⊥,
    cig(a) ← ⊤,
    inex(a) ← ⊤.

addictive(X) and addictive′(X) denote that X is addictive and not addictive, respectively.

4 Reasoning with Respect to Least Models

This section deals with Stenning and van Lambalgen's second step and discusses where a possible belief bias can influence the result during the reasoning procedure. We show how to compute the least model for each case and discuss whether it represents the participants' conclusions reported in the introduction.

4.1 Valid Arguments

P_dog represents S_dog. Its weak completion, wc gP_dog, is:

    police_dog′(a) ↔ vicious(a) ∧ ¬ab_dog′(a),
    police_dog(a) ↔ ¬police_dog′(a),
    ab_dog′(a) ↔ ⊥,
    highly_trained(a) ↔ ⊤,
    vicious(a) ↔ ⊤.

Its least model is:

    ⟨{highly_trained(a), vicious(a), police_dog′(a)}, {police_dog(a), ab_dog′(a)}⟩.
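For illustration, this least model can be reproduced with the fixed-point sketch from Section 2.3 by listing the ground instance of P_dog for the constant a; the predicate spellings below are hypothetical.

    # Ground instance of P_dog for the single constant a (hypothetical names).
    p_dog = [
        ("police_dog_prime(a)", ["vicious(a)", "~ab_dog_prime(a)"]),
        ("police_dog(a)", ["~police_dog_prime(a)"]),
        ("ab_dog_prime(a)", [BOT]),
        ("highly_trained(a)", [TOP]),
        ("vicious(a)", [TOP]),
    ]

    true_atoms, false_atoms = lm_wc(p_dog)
    # true_atoms  == {"highly_trained(a)", "vicious(a)", "police_dog_prime(a)"}
    # false_atoms == {"police_dog(a)", "ab_dog_prime(a)"}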
The least model above entails the Conclusion of S_dog, some highly trained dogs are not police dogs. According to [10], S_dog is logically valid and psychologically believable. No conflict arises at either the psychological or the logical level, and the majority concludes that this syllogism holds, which complies with the least model of wc gP_dog.

The psychological results for the second syllogism, S_vit, indicate that there seem to be two kinds of participants, each adopting a different interpretation of the statements. The group which validated the syllogism was not influenced by the bias with respect to nutritional things. Accordingly, the logic program that represents their view corresponds to P_vit \ {ab(X) ← vitamin(X)}. The weak completion of g(P_vit \ {ab(X) ← vitamin(X)}) is:

    nutritional′(a) ↔ inex(a) ∧ ¬ab(a),
    nutritional(a) ↔ ¬nutritional′(a),
    ab(a) ↔ ⊥,
    vitamin(a) ↔ ⊤,
    inex(a) ↔ ⊤.

The corresponding least model is:

    ⟨{vitamin(a), inex(a), nutritional′(a)}, {nutritional(a), ab(a)}⟩,

which entails the conclusion that some vitamin tablets are not nutritional, and indeed we can conclude that this syllogism is valid.

The other interpretation, in which participants chose not to validate the syllogism, belongs to the group that has apparently been influenced by their beliefs. Their interpretation of S_vit is represented by P_vit. Its weak completion, wc gP_vit, is:

    nutritional′(a) ↔ inex(a) ∧ ¬ab(a),
    nutritional(a) ↔ ¬nutritional′(a),
    ab(a) ↔ ⊥ ∨ vitamin(a),
    vitamin(a) ↔ ⊤,
    inex(a) ↔ ⊤.

Its least model is:

    ⟨{vitamin(a), inex(a), nutritional(a), ab(a)}, {nutritional′(a)}⟩.

The Conclusion of S_vit is not entailed. According to [10], S_vit is logically valid but psychologically unbelievable. A conflict arises at the psychological level because we generally assume that the purpose of vitamin tablets is to aid nutrition. The participants who have been influenced by this belief concluded that the syllogism does not hold, which complies with lm_Ł wc gP_vit.

4.2 Invalid Arguments

The third and fourth cases of the syllogistic reasoning task cannot be modeled as straightforwardly as the first two cases. We assume that the belief has an influence on the procedural part, that is, the reasoning process is biased. We can model this by abduction, as explained in Section 2.5.

P_rich represents S_rich. Its weak completion, wc gP_rich, is:

    mil′(a) ↔ hard_worker(a) ∧ ¬ab(a),
    mil(a) ↔ ¬mil′(a),
    ab(a) ↔ ⊥,
    rich(a) ↔ ⊤,
    hard_worker(a) ↔ ⊤.

Its least model is:

    ⟨{hard_worker(a), rich(a), mil′(a)}, {ab(a), mil(a)}⟩,

which states nothing about the Conclusion, some millionaires are not rich people. Actually, the Conclusion of S_rich states something which contradicts Premise 2, and thus needs to be about something that cannot be the previously introduced constant a. According to our background knowledge, we know that millionaires exist. Let us formulate this as an observation, say about b: O = {mil(b)}. If we want to be able to assume the truth or falsity of something about b with respect to P_rich, say the truth of hard_worker(b), we can no longer assume that CONSTANTS = constants(P_rich), because A_gP_rich would not contain any facts about b. Therefore, we specify that the new set of constants under consideration is CONSTANTS = {a, b}. gP_rich with respect to CONSTANTS contains three additional clauses:

    mil′(b) ← hard_worker(b) ∧ ¬ab(b),
    mil(b) ← ¬mil′(b),
    ab(b) ← ⊥.
The set of abducibles, A_gP_rich, contains the following clauses:

    hard_worker(b) ← ⊤,
    hard_worker(b) ← ⊥.

E = {hard_worker(b) ← ⊥} is the only explanation for O. wc g(P_rich ∪ E) contains:

    mil′(b) ↔ hard_worker(b) ∧ ¬ab(b),
    mil(b) ↔ ¬mil′(b),
    ab(b) ↔ ⊥,
    hard_worker(b) ↔ ⊥.

Its least model, lm_Ł wc g(P_rich ∪ E) = ⟨I⊤, I⊥⟩, contains:

    I⊤ = {mil(b)},
    I⊥ = {ab(b), mil′(b), hard_worker(b)}.

As this model does not confirm the Conclusion, it does not validate S_rich. According to [10], this case is quite easy to solve because it is neither logically valid nor believable. Almost no one validated S_rich, which complies with the least model of wc g(P_rich ∪ E).

P_cig represents S_cig. Its weak completion, wc gP_cig, is:

    addictive′(a) ↔ inex(a) ∧ ¬ab_add′(a),
    addictive(a) ↔ ¬addictive′(a),
    ab_add′(a) ↔ ⊥ ∨ cig(a),
    cig(a) ↔ ⊤,
    inex(a) ↔ (cig(a) ∧ ¬ab_inex(a)) ∨ ⊤,
    ab_inex(a) ↔ ⊥.

Its least model is:

    ⟨{cig(a), inex(a), addictive(a), ab_add′(a)}, {addictive′(a), ab_inex(a)}⟩,

which, similarly to the previous case, does not state anything about the Conclusion, some addictive things are not cigarettes. Again, the Conclusion of S_cig has to be about something which cannot be a. According to our background knowledge, we know that addictive things exist. Let us formulate this again as an observation, say about b: O = {addictive(b)}, which needs to be explained. In order to generate an explanation for O, let us define CONSTANTS = {a, b}. gP_cig with respect to CONSTANTS now additionally contains the following clauses:

    addictive′(b) ← inex(b) ∧ ¬ab_add′(b),
    addictive(b) ← ¬addictive′(b),
    ab_add′(b) ← ⊥,
    ab_add′(b) ← cig(b),
    inex(b) ← cig(b) ∧ ¬ab_inex(b),
    ab_inex(b) ← ⊥.

Given gP_cig, the set of abducibles, A_gP_cig, contains the following clauses:

    cig(b) ← ⊤,
    cig(b) ← ⊥.

O is true if addictive′(b) is false, and addictive′(b) is false if inex(b) is false or ab_add′(b) is true. inex(b) is false if cig(b) is false, and ab_add′(b) is true if cig(b) is true. For O we therefore have two minimal explanations, E_⊥ = {cig(b) ← ⊥} and E_⊤ = {cig(b) ← ⊤}. The weak completion of g(P_cig ∪ E_⊥) contains:

    addictive′(b) ↔ inex(b) ∧ ¬ab_add′(b),
    addictive(b) ↔ ¬addictive′(b),
    ab_add′(b) ↔ ⊥ ∨ cig(b),
    inex(b) ↔ cig(b) ∧ ¬ab_inex(b),
    ab_inex(b) ↔ ⊥,
    cig(b) ↔ ⊥.

Its least model, lm_Ł wc g(P_cig ∪ E_⊥) = ⟨I⊤, I⊥⟩, contains:

    I⊤ = {addictive(b)},
    I⊥ = {cig(b), inex(b), ab_add′(b), ab_inex(b)},

which entails the Conclusion of S_cig. As E_⊤ is yet another explanation for O, the Conclusion, that b is not a cigarette, only follows credulously. S_cig is logically invalid but psychologically believable and therefore causes a conflict [10]: S_cig does not follow logically from the premises; however, people are biased and search for a model which confirms their beliefs. Therefore, the majority concluded that this syllogism holds, which complies with the least model of wc g(P_cig ∪ E_⊥).

In [26, 27], we show an extension of this case where the conclusion follows skeptically. With the help of meta-predicates, we specify that the first premise describes the usual case and the second premise describes the exceptional case. That is, an inexpensive cigarette is meant to be the exception and not the rule, in the context of things that are addictive and expensive. The abductive sketch from Section 2.5 can be applied to the S_cig case as shown below.
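As a usage example, the sketches from Sections 2.3 and 2.5 can be chained to check the S_cig case: both minimal explanations for O = {addictive(b)} are found, and the conclusion that b is not a cigarette follows credulously but not skeptically. The predicate spellings are again hypothetical, and the integrity constraints are omitted for brevity.

    # Ground instance of P_cig for the constants a and b (hypothetical names).
    p_cig = [
        ("addictive_prime(a)", ["inex(a)", "~ab_add_prime(a)"]),
        ("addictive(a)", ["~addictive_prime(a)"]),
        ("ab_add_prime(a)", [BOT]),
        ("ab_add_prime(a)", ["cig(a)"]),
        ("inex(a)", ["cig(a)", "~ab_inex(a)"]),
        ("ab_inex(a)", [BOT]),
        ("cig(a)", [TOP]),
        ("inex(a)", [TOP]),
        ("addictive_prime(b)", ["inex(b)", "~ab_add_prime(b)"]),
        ("addictive(b)", ["~addictive_prime(b)"]),
        ("ab_add_prime(b)", [BOT]),
        ("ab_add_prime(b)", ["cig(b)"]),
        ("inex(b)", ["cig(b)", "~ab_inex(b)"]),
        ("ab_inex(b)", [BOT]),
    ]

    abducibles = [("cig(b)", [TOP]), ("cig(b)", [BOT])]
    observation = ["addictive(b)"]

    exps = explanations(p_cig, abducibles, [], observation)
    # exps contains both {cig(b) <- TOP} and {cig(b) <- BOT}
    print(follows_credulously(p_cig, abducibles, [], observation, "~cig(b)"))  # True
    print(follows_skeptically(p_cig, abducibles, [], observation, "~cig(b)"))  # False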
5 Conclusion

The weak completion semantics has been shown to successfully model various human reasoning episodes [4, 5, 7, 18, 26, 27]. This paper presents yet another human reasoning task modeled under the weak completion semantics. As in our previous formalizations, we follow Stenning and van Lambalgen's two-step approach. We motivate our assumptions based on results from psychology, where syllogisms in human reasoning have been investigated extensively over the past decades.

As shown in the previous formalizations, the advantage of the weak completion semantics over other logic programming approaches is that undefined atoms stay unknown instead of becoming false. The syllogistic reasoning tasks which have been discussed in the literature so far have never offered the participants the option 'I don't know'. As discussed in [24], participants who say that no valid conclusion follows might have problems actually finding a conclusion and possibly mean that they simply do not know. They also point to [28], who suggest that, if a conclusion is stated as being not valid, this could simply mean that the reasoning process is exhausted. An experimental study which allowed the participants to distinguish between 'I don't know' and 'not valid' might give us more insights into their reasoning processes and identify where exactly the belief bias takes effect.

6 Acknowledgements

Many thanks to Steffen Hölldobler and Luís Moniz Pereira for valuable feedback.

References

1. J. Adler and L. Rips. Reasoning: Studies of Human Inference and Its Foundations. Cambridge University Press, 2008.
2. L. J. Chapman and J. P. Chapman. Atmosphere effect re-examined. Journal of Experimental Psychology, 58(3):220–226, 1959.
3. K. L. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and Data Bases, volume 1, pages 293–322. Plenum Press, New York, NY, 1978.
4. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the suppression task. In N. Miyake, D. Peebles, and R. P. Cooper, editors, Proceedings of the 34th Annual Conference of the Cognitive Science Society, pages 1500–1505, Austin, TX, 2012.
5. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the abstract and the social case of the selection task. In 11th International Symposium on Logical Formalizations of Commonsense Reasoning, 2013.
6. E.-A. Dietz, S. Hölldobler, and C. Wernhard. Modeling the suppression task under weak completion and well-founded semantics. Journal of Applied Non-Classical Logics, 2013.
7. E.-A. Dietz, S. Hölldobler, and C. Wernhard. Modeling the suppression task under weak completion and well-founded semantics. Journal of Applied Non-Classical Logics, 24(1–2):61–85, 2014.
8. J. Evans. Thinking and believing. In Mental Models in Reasoning, 2000.
9. J. Evans. Biases in deductive reasoning. In R. Pohl, editor, Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Psychology Press, 2012.
10. J. Evans, J. L. Barston, and P. Pollard. On the conflict between logic and belief in syllogistic reasoning. Memory & Cognition, 11(3):295–306, 1983.
11. J. Evans, S. Handley, and C. Harper. Necessity, possibility and belief: A study of syllogistic reasoning. Quarterly Journal of Experimental Psychology, 54(3):935–958, 2001.
12. J. S. Evans. Bias in Human Reasoning: Causes and Consequences. Essays in Cognitive Psychology. Lawrence Erlbaum, 1989.
13. M. Fitting. A Kripke-Kleene semantics for logic programs. Journal of Logic Programming, 2(4):295–312, 1985.
14. A. Garnham and J. Oakhill. Thinking and Reasoning. Wiley, 1994.
15. S. Hölldobler. Logik und Logikprogrammierung 1: Grundlagen. Kolleg Synchron. Synchron, 2009.
16. S. Hölldobler and C. D. Kencana Ramli. Logic programs under three-valued Łukasiewicz semantics. In P. M. Hill and D. S. Warren, editors, Logic Programming, 25th International Conference, ICLP 2009, volume 5649 of Lecture Notes in Computer Science, pages 464–478, Heidelberg, 2009. Springer.
17. S. Hölldobler and C. D. Kencana Ramli. Logics and networks for human reasoning. In C. Alippi, M. M. Polycarpou, C. G. Panayiotou, and G. Ellinas, editors, International Conference on Artificial Neural Networks, ICANN 2009, Part II, volume 5769 of Lecture Notes in Computer Science, pages 85–94, Heidelberg, 2009. Springer.
18. S. Hölldobler, T. Philipp, and C. Wernhard. An abductive model for human reasoning. In Logical Formalizations of Commonsense Reasoning, Papers from the AAAI 2011 Spring Symposium, AAAI Spring Symposium Series Technical Reports, pages 135–138, Cambridge, MA, 2011. AAAI Press.
19. P. N. Johnson-Laird. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Harvard University Press, Cambridge, MA, 1983.
20. P. N. Johnson-Laird and R. M. Byrne. Deduction. 1991.
21. A. C. Kakas, R. A. Kowalski, and F. Toni. Abductive logic programming. Journal of Logic and Computation, 2(6):719–770, 1993.
22. J. W. Lloyd. Foundations of Logic Programming. Springer-Verlag, New York, NY, 1984.
23. J. Łukasiewicz. O logice trójwartościowej. Ruch Filozoficzny, 5:169–171, 1920. English translation: On three-valued logic. In L. Borkowski, editor, Selected Works, pages 87–88. North Holland, Amsterdam, 1990.
24. S. Newstead, S. Handley, and E. Buck. Falsifying mental models: Testing the predictions of theories of syllogistic reasoning. Memory & Cognition, 27(2):344–354, 1999.
25. S. E. Newstead and R. A. Griggs. Fuzzy quantifiers as an explanation of set inclusion performance. Psychological Research, 46(4):377–388, 1984.
26. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. A computational logic approach to the belief bias effect. In Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning, 2014.
27. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. Contextual abductive reasoning with side-effects. Volume 14, pages 633–648, 2014.
28. T. A. Polk and A. Newell. Deduction as verbal reasoning. Psychological Review, 102(3):533–566, 1995.
29. K. Stenning and M. van Lambalgen. Semantic interpretation as computation in nonmonotonic logic: The real meaning of the suppression task. Cognitive Science, 29(6):916–960, 2005.
30. K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science. A Bradford Book. MIT Press, Cambridge, MA, 2008.
31. A. Van Gelder, K. A. Ross, and J. S. Schlipf. The well-founded semantics for general logic programs. Journal of the ACM, 38(3):619–649, 1991.
32. M. Wilkins. The effect of changed material on the ability to do formal syllogistic reasoning. 16(102), 1928.
33. R. S. Woodworth and S. B. Sells. An atmosphere effect in formal syllogistic reasoning. Journal of Experimental Psychology, 18(4):451–460, 1935.