    Generating Comprehensible Explanations in
                Description Logic

    Fredrik Engström1 , Abdul Rahim Nizamani2 , and Claes Strannegård1,3
          1
              Department of Philosophy, Linguistics and Theory of Science
                             University of Gothenburg
                 2
                    Department of Applied Information Technology
                             University of Gothenburg
                 3
                    Department of Applied Information Technology
                        Chalmers University of Technology




      Abstract. We propose a method for generating comprehensible explanations
      in description logic. Such explanations could be of use to, e.g., engineers,
      doctors, and users of the semantic web. Users commonly need to understand
      why a logical statement follows from a set of hypotheses, and automatically
      generated explanations that are easy to understand could then be of help.
      We propose a proof system for description logic that can be used for
      generating such comprehensible explanations. Similar systems have previously
      been proposed for propositional logic [30] and first-order logic [28].



1   Introduction

When faced with the problem of explaining a logically valid entailment to a
fellow human, mathematicians and logicians tend to give (or at least outline) a
proof of the entailment in an informal model-theoretic framework or in a proper
deductive system. Examples of such formal systems include natural deduction,
sequent calculus, and analytic tableaux. However, many such formal systems
have an associated notion of deduction that is better suited for machines to
check than for humans to understand.
    In this paper, a proof formalism for a description logic that is based on
a simple cognitive model of a human user is proposed as a formal framework
for explanations. Using this formal proof system, complexity measures on
entailments in description logics are defined and evaluated against empirical
data from a small experiment reported in [13].
    Description logic has been used as the logical framework of the semantic web
and its associated range of ontologies, but also in other knowledge representa-
tion systems. Description logic systems tend to be rather expressive, but still
computationally feasible. Description logic can be seen as a guarded fragment of
first-order logic, reminiscent of multi-modal logic.
1.1   Explanations in description logic
In many situations, a user of an ontology needs to understand why a logical
statement φ follows from a set of hypotheses Γ . For instance, an engineer who
wants to remove an inconsistency from an ontology Γ needs to understand why
Γ ⊢ ⊤ ⊑ ⊥ holds. Similarly, a user of a knowledge representation system, such
as SNOMED-CT, needs to be sure that an entailment does not hold merely
because of an inconsistency or a bug in the ontology.
    An example of an entailment that is not so self-evident for non-experts is the
following: The set of axioms
                               Person ⊑ ¬Movie
                              RRated ⊑ Movie
                RRated ≡ Thriller ⊔ ∀hasViolenceLevel.High
                        ∃hasViolenceLevel.⊤ ⊑ Movie
entails that Person ⊑ ⊥. Written in a more condensed form this becomes
       C1 ⊑ ¬C2 , C3 ⊑ C2 , C3 ≡ C4 ⊔ ∀R.C5 , ∃R.⊤ ⊑ C2 ⊢ C1 ⊑ ⊥              (1)
    Systems like Protégé (a popular ontology editor) only provide users with
a very basic kind of explanation: a justification of an entailment Γ ⊢ φ is a
minimal subset Γ ′ ⊆ Γ such that Γ ′ ⊢ φ. Such explanations may certainly help
in pinpointing the part of a set of axioms responsible for a certain entailment,
but, as pointed out in [14], they may not be enough for understanding why the
entailment holds. As in all logical systems, understandability depends not only
on the axioms and rules, but also on how they are used in arguments and deductions.
    A traditional computer-generated proof (for example in a resolution or tableaux
system) is not constructed for humans to understand, but rather for machines
to check. To machine-generate useful explanations we need to work in a proof
system whose proofs are easier for humans to understand. One such system,
justification oriented proofs, was suggested in [14], in which an explanation is
a tree of formulas. The root of the tree is the conclusion and the leaves are
the axioms. Each node in the tree is called a lemma, and the set of children
of a certain node is a justification of that node. Also, a node should be easily
deducible from its children. This notion was made precise by means of a measure
of complexity for entailments.
    Other relevant work includes [3] and [19] in which formal proofs are proposed
as explanations, but in which cognitive considerations are not taken seriously.
    In this paper we take a rather different approach, designing a new proof
system for description logic, reminiscent of proof systems previously designed for
propositional logic [30] and model-checking in first-order logic [28]. The proof
system uses bounded cognitive resources, and thus an explicit connection to
human cognition is provided. The proof system was “optimized” for producing
understandable proofs/explanations mechanically. Using this proof system, we
take the minimum length of a proof as a simple cognitive complexity measure on
entailments, thus making it possible to evaluate the approach.
1.2    Logical reasoning
Cognitive architectures such as Soar [18], ACT-R [2], CHREST [10], and NARS
[32] have been used for modeling human reasoning in a wide range of domains.
They typically include models of cognitive resources such as working memory,
sensory memory, declarative memory, and procedural memory. These cognitive
resources are all bounded in various ways, e.g., with respect to capacity, duration,
and access time [17]. Notably, the working memory can typically only hold a
small number of items, or chunks, and is a well-known bottleneck in many types
of human problem solving [31].
    In cognitive psychology, several models of logical reasoning have been pre-
sented, particularly in the mental logic tradition [4] and the mental models tra-
dition [16]. Computational models in the mental logic tradition are commonly
based on natural deduction systems [22]; the PSYCOP system [23] is one ex-
ample. The focus of the mental models tradition is on particular examples of
reasoning rather than general computational models.
    Human reasoning in the domain of logic is considered from several perspec-
tives in [1] and [12]. Stenning and van Lambalgen consider logical reasoning both
“in the lab” and “in the wild” and investigate how reasoning problems formulated
in natural language and situated in the real world are interpreted (reasoning to
an interpretation) and then solved (reasoning from an interpretation) [27].
    Formalisms of logical reasoning include natural deduction [22, 15, 7], sequent
calculus [21], natural deduction in sequent calculus style [21], natural deduction
in linear style [6, 8], and analytic tableaux [26]. Several formalisms have also
emerged in the context of automated reasoning, e.g., Robinson’s resolution sys-
tem [24] and Stålmarck’s system [25]. None of these formalisms represent working
memory explicitly, and most of them were constructed for purposes other than
modeling human reasoning.
    We suspect that working memory is a critical cognitive resource also in the
special case of logical reasoning [9, 11, 31]. Therefore, we define a proof system
that includes an explicit bound on the working memory capacity.
    For the sake of concreteness, we will use a particular description logic, a
particular set of rules for this logic, a particular set of cognitive resources, and
a particular choice of bounds on those cognitive resources. By no means do we
claim that those choices are optimal. We merely use them to illustrate the general
idea of using machine-generated proofs with bounded cognitive resources in order
to produce explanations in description logic that are designed for understandability.


2     Description logic
We have chosen to focus on the description logic ALC in this paper, but hope to
be able to extend our results to more expressive logics in future work. Concepts
in ALC are built up from atomic concepts A and roles R in the following manner:

               C ::= A | ⊤ | ⊥ | ¬C | C ⊔ C | C ⊓ C | ∃R.C | ∀R.C
where A is an atomic concept and R a role. A formula φ is either of the form
C ≡ D or C ⊑ D, where C and D are concepts.
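    To make the grammar concrete, the following minimal Haskell sketch shows one
way of representing ALC concepts and formulas as data types. The type and
constructor names are illustrative and not taken from our implementation.

      -- ALC concepts, mirroring the grammar above.
      data Concept
        = Atomic String          -- atomic concept A
        | Top                    -- ⊤
        | Bottom                 -- ⊥
        | Not Concept            -- ¬C
        | Or  Concept Concept    -- C ⊔ D
        | And Concept Concept    -- C ⊓ D
        | Exists String Concept  -- ∃R.C, with roles represented by their names
        | Forall String Concept  -- ∀R.C
        deriving (Eq, Ord, Show)

      -- Formulas: equivalences and subsumptions between concepts.
      data Formula
        = Equiv   Concept Concept  -- C ≡ D
        | Subsume Concept Concept  -- C ⊑ D
        deriving (Eq, Ord, Show)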
A model M consists of a domain M , for each atomic concept A an interpretation
AM ⊆ M , and for each role R an interpretation RM ⊆ M × M . Each concept
is interpreted as a subset of the domain M using the following standard recursive
definition:
 – ⊤M = M and ⊥M = ∅
 – (C ⊔ D)M = C M ∪ DM .
 – (C ⊓ D)M = C M ∩ DM .
 – (¬C)M = M \ C M .
 – (∀R.C)M = { a ∈ M | ∀x ∈ M ((a, x) ∈ RM → x ∈ C M ) }
 – (∃R.C)M = { a ∈ M | ∃x ∈ M ((a, x) ∈ RM ∧ x ∈ C M ) }
Furthermore, M |= φ and Γ |= φ are defined in the standard way; for example,
 – M |= C ⊑ D iff C M ⊆ DM .
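    As an illustration of the recursive definition, the following Haskell sketch
computes the interpretation of a concept in a finite model represented with
standard containers. The Concept type is repeated from the syntax sketch above,
and all other names are illustrative rather than taken from our implementation.

      import qualified Data.Map as Map
      import qualified Data.Set as Set
      import Data.Map (Map)
      import Data.Set (Set)

      -- Repeated from the syntax sketch above.
      data Concept = Atomic String | Top | Bottom | Not Concept
                   | Or Concept Concept | And Concept Concept
                   | Exists String Concept | Forall String Concept
        deriving (Eq, Show)

      -- A finite model: a domain M, an interpretation AM ⊆ M of each atomic
      -- concept, and an interpretation RM ⊆ M × M of each role.
      data Model a = Model
        { domain :: Set a
        , atomic :: Map String (Set a)
        , role   :: Map String (Set (a, a))
        }

      -- (x, y) ∈ RM for the role named r.
      related :: Ord a => Model a -> String -> a -> a -> Bool
      related m r x y = (x, y) `Set.member` Map.findWithDefault Set.empty r (role m)

      -- The standard recursive interpretation CM ⊆ M.
      interp :: Ord a => Model a -> Concept -> Set a
      interp m Top          = domain m
      interp m Bottom       = Set.empty
      interp m (Atomic a)   = Map.findWithDefault Set.empty a (atomic m)
      interp m (Not c)      = domain m `Set.difference` interp m c
      interp m (Or c d)     = interp m c `Set.union` interp m d
      interp m (And c d)    = interp m c `Set.intersection` interp m d
      interp m (Exists r c) =
        Set.filter (\x -> any (related m r x) (Set.toList (interp m c))) (domain m)
      interp m (Forall r c) =
        Set.filter (\x -> all (\y -> not (related m r x y) || y `Set.member` interp m c)
                              (Set.toList (domain m)))
                   (domain m)

      -- M |= C ⊑ D iff CM ⊆ DM.
      satSub :: Ord a => Model a -> Concept -> Concept -> Bool
      satSub m c d = interp m c `Set.isSubsetOf` interp m d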
    A TBox is a set Γ of formulas of the form A ≡ D or A ⊑ D, where A is an
atomic concept and each atomic concept occurs at most once on the left-hand
side of a formula in Γ . Furthermore, we restrict our attention to the case of
acyclic TBoxes: an atomic concept A directly uses an atomic concept B in Γ if
B occurs on the right-hand side of an axiom with A on the left-hand side. The
relation uses is the transitive closure of the directly uses relation. Γ is acyclic if
no atomic concept A uses A.
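    The acyclicity condition can be checked by a straightforward reachability
computation. The following Haskell sketch assumes, as a simplification, that the
TBox has first been reduced to a graph mapping each defined atomic concept to
the atomic concepts occurring on its right-hand side; the names are illustrative.

      -- Each pair (A, Bs) records that A directly uses every B in Bs.
      type TBoxGraph = [(String, [String])]

      directlyUses :: TBoxGraph -> String -> [String]
      directlyUses g a = concat [bs | (a', bs) <- g, a' == a]

      -- The uses relation: the transitive closure of directly-uses,
      -- computed by a search that remembers visited concepts.
      uses :: TBoxGraph -> String -> [String]
      uses g a = go [] (directlyUses g a)
        where
          go visited []       = visited
          go visited (b : bs)
            | b `elem` visited = go visited bs
            | otherwise        = go (b : visited) (directlyUses g b ++ bs)

      -- Γ is acyclic if no atomic concept uses itself.
      acyclic :: TBoxGraph -> Bool
      acyclic g = and [a `notElem` uses g a | (a, _) <- g]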

3    Proof system
Several proof systems for description logics exist, including systems originally
designed for modal and first-order logic. All of them, as far as we know, are
shallow, meaning that the rules can only handle the main logical constants of
the formulas. Such systems are not suited for studying human reasoning, since
humans tend to use deep rules, such as substitution, when reasoning. We con-
struct a proof system that is deep and linear, and which allows us later to put
bounds on the length of formulas corresponding to bounds on the working mem-
ory of humans. The system is based on similar systems for propositional logic
[30] and first-order logic [28], on a system for inductive reasoning [29], and also
on introspection.
    The proof system operates on sets of formulas. We let Γ, φ = Γ ∪ { φ } and
identify φ with the singleton set { φ }. It should be noted that the system is not
constructed as a minimal system; many of the rules and axioms are admissible
in the strict formal sense. However, we are interested in complexity measures
that correspond to cognitive complexity, and those are sensitive to the exact
formulation of the proof system.
    We define the relation Γ ⊢ φ, where Γ is a TBox (or, more generally, any
set of formulas) and φ a formula of the form C ⊑ D, meaning that there is a
derivation starting from φ and ending with ∅ in the proof system described
below, using the set Γ as the non-logical axioms.
    Γ ⊢ C ≡ D means that Γ ⊢ C ⊑ D and Γ ⊢ D ⊑ C.
3.1    Axioms

The axioms consist of two kinds:

 – the non-logical axioms from the set Γ :
  A1. All formulas in Γ .
  A2. A ≡ Ā ⊓ C if A ⊑ C is in Γ . Here Ā is a new atomic concept.
 – the logical axiom schemas (see [5] for a list of the axioms).


3.2    Rules

 – The axiom rule:
                                       ∆, φ
                                            Ax
                                        ∆
   Given that φ is an axiom.
 – The substitution rule:
                                      ∆, φ
                                             Subst
                                   ∆, φ[C/D]
   Given that C ≡ D is an axiom. Here φ[C/D] is the result of substituting
   one of the occurrences of D in φ with C.
 – The strengthening rule:

                                   ∆, A ⊑ B
                                               Str
                                ∆, A[D/C]+ ⊑ B

                                   ∆, A ⊑ B
                                               Str
                                ∆, A ⊑ B[C/D]+

   Both rules have the side condition that C ⊑ D is an axiom. Here A[C/D]+
   is the result of substituting one of the negation free occurrences of D in A
   with C (and analogously for the substitution [D/C]+ ). An occurrence is
   negation free if it is not in the scope of a negation.
 – The first split rules:

                                ∆, C ⊑ A ⊓ B
                                               Split1
                               ∆, C ⊑ A, C ⊑ B

                                ∆, A ⊔ B ⊑ C
                                               Split1
                               ∆, A ⊑ C, B ⊑ C
 – The second split rules:

                                ∆, C ⊑ D
                                                 Split2
                         ∆, C ⊓ A ⊑ D, C ⊓ B ⊑ D
      Given that ⊤ ⊑ A ⊔ B is an axiom.
                                ∆, C ⊑ D
                                                 Split2
                         ∆, C ⊑ D ⊔ A, C ⊑ D ⊔ B
   Given that A ⊓ B ⊑ ⊥ is an axiom.
 – The rules of contraposition:
                                  ∆, C ⊑ D
                                            Contr
                                 ∆, ¬D ⊑ ¬C

                                ∆, C ⊓ D ⊑ E
                                              Contr
                                ∆, C ⊑ E ⊔ ¬D

                                ∆, C ⊑ D ⊔ E
                                              Contr
                                ∆, C ⊓ ¬D ⊑ E
 – The rules of monotonicity:
                                ∆, ∀R.C ⊑ ∀R.D
                                               Mon
                                   ∆, C ⊑ D

                                ∆, ∃R.C ⊑ ∃R.D
                                               Mon
                                   ∆, C ⊑ D
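    To illustrate how the rules operate on sets of formulas, the following minimal
Haskell sketch represents a proof state as the set ∆ of remaining goals (here a
list) and implements two of the rules above as functions returning all possible
successor states. The types repeat the syntax sketch from Section 2; all names
are illustrative and not taken from our implementation.

      data Concept = Atomic String | Top | Bottom | Not Concept
                   | Or Concept Concept | And Concept Concept
                   | Exists String Concept | Forall String Concept
        deriving (Eq, Show)

      data Formula = Equiv Concept Concept | Subsume Concept Concept
        deriving (Eq, Show)

      -- A proof state is the set of remaining goals; a derivation of Γ ⊢ C ⊑ D
      -- is a sequence of states starting from [Subsume C D] and ending in [].
      type Goals = [Formula]

      remove :: Formula -> Goals -> Goals
      remove g = filter (/= g)

      -- The axiom rule Ax: delete a goal that is an axiom.
      applyAx :: (Formula -> Bool) -> Goals -> [Goals]
      applyAx isAxiom goals = [remove g goals | g <- goals, isAxiom g]

      -- The first split rule for ⊓: replace C ⊑ A ⊓ B by C ⊑ A and C ⊑ B.
      applySplit1 :: Goals -> [Goals]
      applySplit1 goals =
        [ Subsume c a : Subsume c b : remove g goals
        | g@(Subsume c (And a b)) <- goals ]

The remaining rules can be expressed in the same style, each as a function from
a proof state to its possible successor states.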
Example 1. For a simple example of a derivation in the system let us go back
to the example in the introduction. Figure 1 gives a derivation proving that the
axioms

                             RRated ⊑ Movie                                  (2)
               RRated ≡ Thriller ⊔ ∀hasViolenceLevel.High                    (3)

entail Thriller ⊑ Movie.


                       Thriller ⊑ Movie
                                         Str (2)
                      Thriller ⊑ RRated
                                                       Subst (3)
        Thriller ⊑ Thriller ⊔ (∀hasViolenceLevel.High)
                                                       Str (C ⊑ C ⊔ D)
                     Thriller ⊑ Thriller
                                          Ax (C ⊑ C)
                               ∅

                      Fig. 1. The derivation in Example 1.



Proposition 1. The proof system is sound with respect to the standard semantics
of ALC, i.e., if Γ ⊢ φ then Γ ⊨ φ.
   The proof is an easy induction on the length of derivations.
Proposition 2. The proof system is complete with respect to acyclic TBoxes,
i.e., if Γ is an acyclic TBox and Γ ⊨ φ then Γ ⊢ φ.
Proof (sketch). The first thing to observe is that the set Γ , the TBox, can be
eliminated, so that we only need to show that if ⊨ φ then ⊢ φ. The reason is
that, by the TBox axioms and the substitution rule, we may substitute all the
information of Γ into φ. The rest of the proof follows the usual Henkin-style
proof of completeness for the modal logic K. We will only give an outline of the
proof here.
    A set ∆ of concepts is said to be consistent if for no C1 , . . . , Ck ∈ ∆,

                             ⊢ C1 ⊓ . . . ⊓ Ck ⊑ ⊥.

Completeness follows from proving that any consistent set of concepts has a
model, meaning a model in which each concept’s interpretation is non-empty.
To construct a model of a consistent set ∆ of concepts, the maximal consistent
extensions of ∆ are used as objects. The interpretation of an atomic concept A
is the set of all maximal consistent sets extending ∆ and including A. The role
R is interpreted as holding between a and b if, for all concepts C, ∀R.C ∈ a
implies C ∈ b. It is routine to check that this is in fact a model in which every
concept in ∆ has a non-empty interpretation.

   Observe that in the proof we used the fact that Γ is an acyclic TBox. The
question of completeness in the general case is open.


4     Bounded proof system

In this section we define our proof system with bounded cognitive resources.
Taking the restriction on working memory capacity into account, we use a bound
on the length of formulas that may occur in the proof steps of our system. To
make the model of working memory load more realistic, we also want to take into
account that only certain parts of the formulas may be relevant to a proof step,
whereas other parts need not be stored in working memory. For this purpose
we use “black boxes” for concepts, called abstraction concepts, and denote them
by ⟦C⟧; the structure of the concept C is then “hidden”.
   Also, by starting the derivation witnessing an entailment Γ ⊢ C ⊑ D with
the goal ⟦C⟧ ⊑ ⟦D⟧ instead of C ⊑ D, the process of parsing formulas is modeled
by the rule of Inspection (see below).


4.1   Abstraction concepts

Let L be the base language (atomic concepts, roles and individuals). For every
concept C in L we introduce a new atomic concept ⟦C⟧. This new language is
denoted by Labs . The concept ⟦C⟧ is extensionally the same concept as C; we
will see that we can derive ⊢ ⟦C⟧ ≡ C. However, since we are interested in a
proof system where we only allow formulas of a certain length or complexity, it
may help to regard a complex concept as atomic, and this can be done with the
help of the abstraction concepts ⟦C⟧.
   It should be noted that different intuitions on “black boxes” lead to slightly
different implementations of the abstraction concepts. We have used a version
where two “black boxes” which are syntactically the same are identified.
   We need a rule for unfolding abstraction concepts in the following way:
 – The inspection rule:
                                        ∆, φ
                                                   Inspect
                                   ∆, φ[⟦C⟧∗ /⟦C⟧]
Here φ[⟦C⟧∗ /⟦C⟧] is the result of substituting one of the occurrences of ⟦C⟧ in
φ with ⟦C⟧∗ , where ⟦C⟧∗ is defined by
 – ⟦A⟧∗ is A if A is atomic.
 – ⟦¬C⟧∗ is ¬⟦C⟧.
 – ⟦C ⊓ D⟧∗ is ⟦C⟧ ⊓ ⟦D⟧.
 – ⟦C ⊔ D⟧∗ is ⟦C⟧ ⊔ ⟦D⟧.
 – ⟦∃R.C⟧∗ is ∃R.⟦C⟧.
 – ⟦∀R.C⟧∗ is ∀R.⟦C⟧.
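    The one-step unfolding ⟦C⟧∗ can be sketched in Haskell as follows. The Box
constructor models the abstraction concept ⟦C⟧; as an assumption of this sketch,
⊤ and ⊥ are treated like atomic concepts, and the names are illustrative rather
than taken from our implementation.

      -- The Concept type from Section 2, extended with abstraction concepts ⟦C⟧.
      data Concept = Atomic String | Top | Bottom | Not Concept
                   | Or Concept Concept | And Concept Concept
                   | Exists String Concept | Forall String Concept
                   | Box Concept                       -- the abstraction concept ⟦C⟧
        deriving (Eq, Show)

      -- One step of unfolding: expose the outermost constructor of the hidden
      -- concept while keeping its immediate subconcepts boxed.
      unfold :: Concept -> Concept
      unfold (Box (Atomic a))   = Atomic a             -- ⟦A⟧∗ is A for atomic A
      unfold (Box Top)          = Top                  -- ⊤, ⊥ treated as atomic (assumption)
      unfold (Box Bottom)       = Bottom
      unfold (Box b@(Box _))    = b                    -- ⟦C⟧ is itself atomic in Labs
      unfold (Box (Not c))      = Not (Box c)          -- ⟦¬C⟧∗ is ¬⟦C⟧
      unfold (Box (And c d))    = And (Box c) (Box d)  -- ⟦C ⊓ D⟧∗ is ⟦C⟧ ⊓ ⟦D⟧
      unfold (Box (Or c d))     = Or (Box c) (Box d)   -- ⟦C ⊔ D⟧∗ is ⟦C⟧ ⊔ ⟦D⟧
      unfold (Box (Exists r c)) = Exists r (Box c)     -- ⟦∃R.C⟧∗ is ∃R.⟦C⟧
      unfold (Box (Forall r c)) = Forall r (Box c)     -- ⟦∀R.C⟧∗ is ∀R.⟦C⟧
      unfold c                  = c                    -- not an abstraction concept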
   We also need to modify the axioms (A1) and (A2) slightly in this bounded
version of the proof system:
 – A1′. A ≡ ⟦C⟧ if A ≡ C is in Γ ; and A ⊑ ⟦C⟧ if A ⊑ C is in Γ .
 – A2′. A ≡ Ā ⊓ ⟦C⟧ if A ⊑ C is in Γ . Here Ā is a new atomic concept.
   Let us denote provability in this system by ⊢abs : Γ ⊢abs C ⊑ D if there is a
derivation in the system starting with ⟦C⟧ ⊑ ⟦D⟧ and ending with ∅.
   It is easy to show that:
Proposition 3. Γ ⊢ φ iff Γ ⊢abs φ.

4.2     Length of formulas
The length of a concept is defined by
 – |⟦C⟧| = |⊤| = |⊥| = |A| = 1
 – |C ⊓ D| = |C ⊔ D| = 1 + |C| + |D|
 – |¬C| = 1 + |C|
 – |∃R.C| = |∀R.C| = 2 + |C|.
      The length of a formula is the sum of the lengths of the constituting concepts:
                           |C ≡ D| = |C ⊑ D| = |C| + |D|.
The length of a set of formulas is the sum of the lengths of the formulas in the
set. In the bounded system, only sets limited to a certain length are allowed to
appear:
Definition 1. Γ ⊢kabs C ⊑ D if there is a derivation (in the system described
above) starting from ⟦C⟧ ⊑ ⟦D⟧ and ending in ∅ in which only sets of length less
than or equal to k appear.
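    The length measure translates directly into a small Haskell function, sketched
below with the same illustrative Concept type as before (the Box constructor
again stands for the abstraction concept ⟦C⟧).

      data Concept = Atomic String | Top | Bottom | Not Concept
                   | Or Concept Concept | And Concept Concept
                   | Exists String Concept | Forall String Concept
                   | Box Concept
        deriving (Eq, Show)

      data Formula = Equiv Concept Concept | Subsume Concept Concept
        deriving (Eq, Show)

      -- |⟦C⟧| = |⊤| = |⊥| = |A| = 1, |C ⊓ D| = |C ⊔ D| = 1 + |C| + |D|,
      -- |¬C| = 1 + |C|, |∃R.C| = |∀R.C| = 2 + |C|.
      conceptLength :: Concept -> Int
      conceptLength (Box _)      = 1
      conceptLength Top          = 1
      conceptLength Bottom       = 1
      conceptLength (Atomic _)   = 1
      conceptLength (Not c)      = 1 + conceptLength c
      conceptLength (And c d)    = 1 + conceptLength c + conceptLength d
      conceptLength (Or c d)     = 1 + conceptLength c + conceptLength d
      conceptLength (Exists _ c) = 2 + conceptLength c
      conceptLength (Forall _ c) = 2 + conceptLength c

      -- |C ≡ D| = |C ⊑ D| = |C| + |D|; a set of formulas: the sum over its members.
      formulaLength :: Formula -> Int
      formulaLength (Equiv c d)   = conceptLength c + conceptLength d
      formulaLength (Subsume c d) = conceptLength c + conceptLength d

      goalsLength :: [Formula] -> Int
      goalsLength = sum . map formulaLength

In the bounded system ⊢kabs, a rule application is thus allowed only if the
resulting set of formulas has total length at most k.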
Example 2. In Example 1, formulas of length six appear; however, the subformula
∀hasViolenceLevel.High is never “used”. In fact we have that Γ ⊢4abs
Thriller ⊑ Movie, as the derivation in Figure 2 shows.
                       ⟦Thriller⟧ ⊑ ⟦Movie⟧
                                             Inspect
                         ⟦Thriller⟧ ⊑ Movie
                                              Str (2)
                       ⟦Thriller⟧ ⊑ ⟦RRated⟧
                                              Inspect
                        ⟦Thriller⟧ ⊑ RRated
                                                            Subst (3)
        ⟦Thriller⟧ ⊑ ⟦Thriller ⊔ (∀hasViolenceLevel.High)⟧
                                                             Inspect
       ⟦Thriller⟧ ⊑ ⟦Thriller⟧ ⊔ ⟦(∀hasViolenceLevel.High)⟧
                                                             Str (C ⊑ C ⊔ D)
                      ⟦Thriller⟧ ⊑ ⟦Thriller⟧
                                               Ax (C ⊑ C)
                                 ∅

                       Fig. 2. The derivation of Example 2.


4.3   A measure of reasoning complexity
Knowing that Γ |= φ, we may use the proof formalism to define different
complexity measures. We have chosen to focus on the length of the shortest
derivation witnessing
                             Γ ⊢6abs ⟦C⟧ ⊑ ⟦D⟧.
    The reasons for focusing on the length bound six are both pragmatic (it gives
a measure that is feasibly computable) and theoretical (the number of chunks
that fit in the working memory of a human is typically in the range 5–9 [20]).


5     Explanation generator
A proof-searching computer program was implemented in Haskell; it uses the
standard breadth-first search algorithm with parallel computing threads. To keep
the proof search computationally feasible, we added some additional restrictions
to the system, including the following:
 – When a proof step consists of more than one formula, the program searches
   only for proofs that completely remove one of the formulas before modifying
   the other ones.
 – The program only searches for proofs with a certain subformula property: a
   rule application may not introduce a new concept. For example, the program
   never searches for proofs that apply the substitution rule to the axiom C ⊔
   ⊤ ≡ ⊤ and replace a ⊤ with D ⊔ ⊤, as doing so introduces a new concept D.
 – Proofs are acyclic: a proof step cannot appear more than once in a proof.
    These modifications are real restrictions, and some justifications become un-
provable. However, for the purposes of this paper, we have managed to find
proofs for all the justifications needed using this restricted system.
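    The search itself can be sketched generically: proof states are explored breadth
first, so the first complete derivation that is found is also a shortest one. The
sketch below is a simplified sequential version (our implementation additionally
uses parallel threads and the restrictions listed above); the names are illustrative.

      import qualified Data.Set as Set

      -- Breadth-first search for a shortest derivation. 'next' gives all states
      -- reachable by one rule application (respecting the length bound), and
      -- 'done' recognizes the empty goal set ∅. States that have already been
      -- visited are never revisited, which enforces the acyclicity restriction.
      -- The derivation is returned in reverse order (from ∅ back to the goal).
      shortestDerivation :: Ord state
                         => (state -> [state]) -> (state -> Bool) -> state
                         -> Maybe [state]
      shortestDerivation next done start = go (Set.singleton start) [[start]]
        where
          go _ [] = Nothing
          go visited (path@(s : _) : rest)
            | done s    = Just path
            | otherwise = go visited' (rest ++ newPaths)
            where
              fresh    = filter (`Set.notMember` visited) (next s)
              visited' = foldr Set.insert visited fresh
              newPaths = [s' : path | s' <- fresh]
          go visited ([] : rest) = go visited rest

Since states are explored in breadth-first order, the number of steps in the
returned derivation corresponds to the complexity measure of Section 4.3
whenever the successor function only generates states of total length at most six.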
    In [13], a simple complexity measure on justifications was presented together
with empirical data from an experiment conducted with 14 students. The students
were presented with six justifications (EM1–3 and HM1–3), all of which were
correct (though the participants were not aware of this fact), and were asked to
decide the correctness of each justification. We would like to compare our
complexity measure with the one presented in [13] using the empirical data from
that experiment.
    We decided to measure the complexity, i.e., the length of the shortest proof in
the system described above, for five of the justifications; the results can be found
in Table 1. One of the justifications, HM1, uses constructors not in ALC, and we
decided to exclude that justification altogether. Also, the justification EM1 uses
an RBox, which we have not implemented in our system; however, we decided to
slightly modify EM1 to get EM1’ and compute the complexity of EM1’ instead.
    The empirical data from the experiment can be found in Table 1, together
with the computed complexities.


      Justification   Success rate (%)   Mean time (s)   Horridge et al.   ⊢6abs
      EM1’                  57∗                103∗             654∗         11
      EM2                   57                 163              703          14
      EM3                   29                 134              675          18
      HM2                    7                 100             1395          13
      HM3                   57                 157             1406        ≤ 34

Table 1. The results of the experiments in [13] together with the two theoretical
measures of complexity: the one from [13] and the one proposed here. Observe that
there is a small mismatch for the justification EM1’: the experiment used the slightly
more complex justification EM1, and the Horridge et al. complexity was also calculated
for EM1.



   Two comments should be made regarding the results in the table. With rules
for RBoxes included in the system, the complexity of EM1 would be expected
to be 3–4 steps higher than that of EM1’, putting the complexity of EM1 on par
with that of EM2.
   Horridge et al. comment on the justification HM3 as follows.
    For item HM3 the explanation is a little different. [...] The high success
    rate was therefore due to an error in reasoning, that is, a failure in
    understanding rather than a failure in the model — the MSc students
    got the right answer, but for the wrong reasons.
Thus we think the correct success rate of HM3 should be considerably lower,
close to zero.
    As expected, our complexity measure assigns too low a complexity to the
hard problems, especially HM2. The explanation for this is probably that the
search complexity for finding the smallest proof is not part of the complexity
measure. This might indicate that, to define a better measure of complexity
that correlates better with data from empirical studies, we need to engineer a
more deterministic proof system in which the search itself is represented.
    Having said that, it should be noted that if the goal is to present a model for
explaining justifications (or any entailments), the smallest proof is well suited. In
particular, a shortened version of the proofs, in which only the steps involving
axioms from the TBox are presented, can, and should, be used as explanations.
    For example, let us return to the running example. The shortened version of
the proof of that justification is presented in Figure 3; only the steps involving
axioms from the TBox are shown. In an interactive framework, the proof could
be presented in such a way that the hidden parts of the deduction can be shown
on demand. We believe that such interactive presentations would constitute good
explanations for entailments in description logics.


                           C1 ⊑ ⊥
                              ⋮
                          ⊤ ⊓ C1 ⊑ ⊥
                                             C1 ⊑ ¬C2
                         ⊤ ⊓ (¬C2 ) ⊑ ⊥
                              ⋮
                       ∃R.⊤ ⊔ ¬∃R.⊤ ⊑ C2
                                              ∃R.⊤ ⊑ C2
                        C2 ⊔ ¬∃R.⊤ ⊑ C2
                               ⋮
                      C2 ⊔ ∀R.C5 ⊑ C2 ⊔ C2
                                               C3 ⊑ C2
                      C2 ⊔ ∀R.C5 ⊑ C3 ⊔ C2
                                                   C4 ⊔ ∀R.C5 ≡ C3
                 C2 ⊔ ∀R.C5 ⊑ (C4 ⊔ ∀R.C5 ) ⊔ C2
                                ⋮
                                ∅


                 Fig. 3. A shortened proof of the justification (1).




6   Conclusion

We have presented a method for automatically generating explanations in
description logic that target human understandability. The explanations are
based on proofs in proof systems with bounded cognitive resources. An example
of a specific proof system was given, and the derived complexity measure was
evaluated against a small set of empirical data. The results point in a positive
direction, but deeper empirical studies are needed to evaluate the
understandability of the generated explanations.
    We believe that the approach presented in this paper is promising for
generating understandable explanations intended for non-expert users, e.g.,
engineers, doctors, and users of the semantic web. The explanations can be
presented either in the language of description logic or, using straightforward
translations, in natural language.
References
 1. Adler, J.E., Rips, L.J.: Reasoning: Studies of Human Inference and its Foundations.
    Cambridge University Press (2008)
 2. Anderson, J.R., Lebiere, C.: The atomic components of thought. Lawrence Erl-
    baum, Mahwah, N.J. (1998)
 3. Borgida, A., Franconi, E., Horrocks, I., McGuinness, D.L., Patel-Schneider, P.F.:
    Explaining ALC subsumption. In: ECAI. pp. 209–213 (2000)
 4. Braine, M.D.S., O’Brien, D.P.: Mental logic. L. Erlbaum Associates (1998)
 5. Engström, F., Nizamani, A.R., Strannegård, C.: Generating comprehensible ex-
    planations in description logic, http://gup.ub.gu.se/publication/199115, full
    version with appendix
 6. Fitch, F.B.: Symbolic Logic: an Introduction. Ronald Press, New York (1952)
 7. Gentzen, G.: Investigations into logical deduction, 1935. In: Szabo, M.E. (ed.) The
    collected papers of Gerhard Gentzen. North-Holland, Amsterdam (1969)
 8. Geuvers, H., Nederpelt, R.: Rewriting for Fitch style natural deductions. In:
    Rewriting Techniques and Applications. pp. 134–154. Springer (2004)
 9. Gilhooly, K., Logie, R., Wetherick, N., Wynn, V.: Working memory and strategies
    in syllogistic-reasoning tasks. Memory & Cognition 21(1), 115–124 (1993)
10. Gobet, F., Lane, P.: The CHREST architecture of cognition: the role of perception
    in general intelligence. In: Artificial General Intelligence 2010, Lugano, Switzerland.
    Atlantis Press (2010)
11. Hitch, G., Baddeley, A.: Verbal reasoning and working memory. The Quarterly
    Journal of Experimental Psychology 28(4), 603–621 (1976)
12. Holyoak, K.J., Morrison, R.G.: The Cambridge handbook of thinking and reason-
    ing. Cambridge University Press (2005)
13. Horridge, M., Bail, S., Parsia, B., Sattler, U.: Toward cognitive support for OWL
    justifications. Knowledge-Based Systems 53, 66–79 (2013)
14. Horridge, M., Parsia, B., Sattler, U.: Justification oriented proofs in OWL. In: The
    Semantic Web–ISWC 2010, pp. 354–369. Springer (2010)
15. Jaskowski, S.: The theory of deduction based on the method of suppositions. Studia
    Logica 1, 5–32 (1934)
16. Johnson-Laird, P.N.: Mental models. Harvard University Press (1983)
17. Kosslyn, S., Smith, E.: Cognitive Psychology: Mind and Brain. Upper Saddle River,
    NJ: Prentice-Hall Inc (2006)
18. Laird, J., Newell, A., Rosenbloom, P.: Soar: An architecture for general intelligence.
    Artificial Intelligence 33(3), 1–64 (1987)
19. McGuinness, D.L., Borgida, A.: Explaining subsumption in description logics. In:
    IJCAI (1). pp. 816–821 (1995)
20. Miller, G.A.: The magical number seven, plus or minus two: Some limits on our
    capacity for processing information. Psychological Review 63, 81–97 (1956)
21. Negri, S., von Plato, J.: Structural Proof Theory. Cambridge University Press
    (2001)
22. Prawitz, D.: Natural Deduction. A Proof-Theoretical Study, Stockholm Studies in
    Philosophy, vol. 3. Almqvist & Wiksell, Stockholm (1965)
23. Rips, L.: The Psychology of Proof. Bradford (1996)
24. Robinson, A., Voronkov, A.: Handbook of Automated Reasoning. Elsevier Science
    (2001)
25. Sheeran, M., Stålmarck, G.: A tutorial on Stålmarck’s proof procedure for propo-
    sitional logic. Formal Methods in Systems Design 16(1), 23–58 (Jan 2000)
26. Smullyan, R.M.: First-Order Logic. Dover Publications, New York, second cor-
    rected edn. (1995), first published in 1968 by Springer Verlag, Berlin, Heidelberg,
    New York
27. Stenning, K., van Lambalgen, M.: Human reasoning and cognitive science. Brad-
    ford Books, MIT Press (2008)
28. Strannegård, C., Engström, F., Nizamani, A.R., Rips, L.: Reasoning about truth
    in first-order logic. Journal of Logic, Language and Information 22(1), 115–137
    (2013), doi: 10.1007/s10849-012-9168-y
29. Strannegård, C., Nizamani, A.R., Sjöberg, A., Engström, F.: Bounded Kolmogorov
    complexity based on cognitive models. In: Kühnberger, K.U., Rudolph, S., Wang,
    P. (eds.) Artificial General Intelligence, Lecture Notes in Computer Science, vol.
    7999, pp. 130–139. Springer Berlin Heidelberg (2013)
30. Strannegård, C., Ulfsbäcker, S., Hedqvist, D., Gärling, T.: Reasoning processes in
    propositional logic. Journal of Logic, Language and Information 19(3), 283–314
    (2010)
31. Toms, M., Morris, N., Ward, D.: Working memory and conditional reasoning. The
    Quarterly Journal of Experimental Psychology 46(4), 679–699 (1993)
32. Wang, P.: From NARS to a Thinking Machine. In: Proceedings of the 2007 Con-
    ference on Artificial General Intelligence. pp. 75–93. IOS Press, Amsterdam (2007)