<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Argumentation: Reconciling Human and Automated Reasoning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Antonis Kakas</string-name>
          <email>antonis@ucy.ac.cy</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Loizos Michael</string-name>
          <email>loizos@ouc.ac.cy</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca Toni</string-name>
          <email>f.toni@imperial.ac.uk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Cyprus</institution>
          ,
          <country country="CY">Cyprus</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computing, Imperial College London</institution>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Open University of Cyprus</institution>
          ,
          <country country="CY">Cyprus</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We study how using argumentation as an alternative foundation for logic gives a framework in which we can reconcile human and automated reasoning. We analyse this reconciliation between human and automated reasoning at three levels: (1) at the level of classical, strict reasoning on which, to this day, automated reasoning and computing are based; (2) at the level of natural or ordinary human-level reasoning, as studied in cognitive psychology, which artificial intelligence, albeit in its early stages, is endeavouring to automate; and (3) at the level of the recently emerged cognitive computing paradigm, where systems are required to be cognitively compatible with human reasoning based on common sense or expert knowledge, machine-learned from unstructured data in corpora over the web or other sources.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The ever-increasing demand for smart machines with ordinary human-level
intelligence, attuned to everyday common-sense reasoning and simple problem
solving, has reignited the debate on the logical nature of human reasoning and
how this can be appropriately formalized and automated (e.g. see [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]).
      </p>
      <p>
        For over half a century now it has been known that the classical view of logic
from mathematics is at odds with the nature of human reasoning, as identified
in many empirical behaviour studies in cognitive psychology (e.g. see [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] for a
relatively recent exposition). Given also that classical logic forms the "calculus
of computer science" and the engineering basis for building computing machines,
this schism between the two views of logic, from mathematics and from the psychology
of reasoning, lies at the heart of the problem of automating natural, as opposed
to scientific, human reasoning.
      </p>
      <p>
        In this paper, we first re-consider the foundations of logic from an
argumentation perspective, in the spirit of the approach of [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] (see Section 2), and
then try to understand how the schism can be reconciled using the
argumentation perspective (see Section 3), before finally returning to modern-day cognitive
computing (see Section 4) and concluding (see Section 5).
      </p>
      <p>
        Before analysing how argumentation can be used to reconcile human and
automated reasoning, we note that argumentation has been at the heart of logic
ever since its inception with Aristotle (see footnote 4): logical reasoning according to Aristotle
was to follow certain accepted patterns of inference, called complete deductions,
that were a-priori justified, to find other forms of valid patterns (syllogisms) for
drawing conclusions. Syllogisms, which in modern logic correspond to derivations
in proof theory, are in effect arguments for supporting the conclusions that they
draw. Moreover, complex arguments can be built from simpler, basic arguments
and indeed Aristotle attempted to show that all valid arguments can be
reduced to his basic forms of arguments. This reduction task is not easy [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ],
especially when the given complex argument is obtained through impossibility,
namely, in modern terms, when its conclusion is drawn via a proof by contradiction
(or Reductio ad Absurdum (RAA)). This observation is important when logic is
formalized through argumentation, as we discuss in Section 2 below.
      </p>
      <p>We also note that dialectical argumentation forms a wider context that
embraces the conception of logic in Aristotle (see footnote 5). A process of dialectic argumentation
is based on common beliefs or reputable views from which arguments are built to
support conclusions. The important difference with simply demonstrating that a
conclusion follows through an argument is the additional (extra-logical)
requirement that the process aims to convince or persuade. When logic is formalised
as argumentation, both views of logic (as validity and as dialectics aiming at
persuading) play an important role, as we discuss in Section 2 below.</p>
    </sec>
    <sec id="sec-2">
      <title>Argumentation and Logical Reasoning</title>
      <p>From the point of view of argumentation, reasoning can be seen as a double
process: constructing arguments linking premises and the conclusions that we want
to draw, as well as defending these arguments against (possible or proposed)
counter-arguments generated from an attack relation between arguments.</p>
      <p>Arguments are built from constituent parts (e.g. private or common beliefs or
reputable views) that are combined in some way to form a structure supporting
desired positions. In a logical setting the process of constructing an argument
for a desired position can be associated with a logical proof for the position. Hence,
given some language, L, for logical sentences together with a notion of deduction,
⊢, drawn from some set of inference rules, an argument can be constructed by
choosing some set, Δ, of sentences in L (the argument's premises) that under
the application of (some of the) inference rules deduce, via ⊢, a conclusion that
is identified with the desired position. Arguments may also draw intermediate
conclusions (combined, as dictated by the inference rules of ⊢, to give the main
conclusion) and thus combine sub-arguments within a single argument structure.</p>
      <p>
        A logical language would typically also contain some notion of contradiction
or inconsistency (denoted by ⊥), which can be used, in conjunction with the
inference rules, to provide the notion of counter-argument/the attack relation
between arguments. In the simplest case, two arguments would attack each other
when they, or some of their sub-arguments, support contradictory positions [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
(Footnote 4: A concise introduction to Aristotle and logic can be found in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Footnote 5: A wider exposition of logic and dialectical argumentation can be found, e.g., in [
        <xref ref-type="bibr" rid="ref46">46</xref>
        ].)
      </p>
      <p>
        Several methods have been proposed in the literature on argumentation in
artificial intelligence (e.g. see [
        <xref ref-type="bibr" rid="ref3 ref44">3, 44</xref>
        ] for overviews) to determine when arguments
may be deemed to defend against counter-arguments generated from attacks.
We will informally illustrate below how this notion of defence against
counter-arguments can be formalized using some standard semantical notions given for
abstract argumentation [
        <xref ref-type="bibr" rid="ref10 ref24">10, 24</xref>
        ]. Within the resulting formalization, logical
entailment of a sentence φ from a theory T, traditionally defined in Propositional
Logic (PL) in terms of truth of φ in all models of T, can be given, informally
put, by the statement "there is an acceptable argument (from T) that
supports φ, and any argument that supports ¬φ is not acceptable".
Furthermore, this argumentation-based entailment is also defined when the given
theory T may be classically inconsistent, i.e. without classical models. In the
case of classically inconsistent theories, PL trivializes, whereas the
argumentation semantics continues to differentiate between sentences that are entailed and
sentences that are not entailed, as we will see below.
      </p>
      <p>For the illustration, let us consider a simple example, where we take as the
underlying language L a standard language of classical PL allowing us to
represent the following rules (and all other knowledge/beliefs presented later in this
section):
- "normally, a seller who delivers on time is trustworthy";
- "normally, a seller who delivers the wrong merchandise is not trustworthy".</p>
      <p>These rules can be represented (see footnote 6) in PL by the following sentences (for some
seller in the (finite) domain of discourse):
timely delivery → trusted   (1)
wrong delivery → ¬trusted   (2)</p>
      <p>Given additional information about a delivery by the seller, we can build
arguments for and against trusting the seller. For example, if we observe that the
seller delivers on time,
timely delivery   (3)
then we can build argument A1 with premises the sentences (1) and (3) and
conclusion that the seller should be trusted. Moreover, if the seller has made a
wrong delivery,</p>
      <p>wrong delivery   (4)
then we can build argument A2 with premises the sentences (2) and (4) and
conclusion that the seller should not be trusted. These two arguments are
counter-arguments against (or attack) each other when indeed we have both pieces of
information (3) and (4). Furthermore, if we also have the additional rule
- "if the seller is trusted then one can place large orders",
represented as
trusted → large orders   (5)
then we can build argument A3 with premises (3), (1) and (5) and conclusion
that large orders can be placed with the seller in question. Note that argument
A2 is still a counter-argument against A3, despite the fact that the (final)
conclusions that they support are not contradictory, as A2 undercuts A3 on trusted,
on which A3 depends. In other words, A2 attacks A3 because it attacks a
sub-argument of A3, namely the sub-argument A1.
(Footnote 6: As we will discuss later, under the argumentation-based reformulation of PL the
implication connective used here does not need to be interpreted as classical material
implication. Rather, an implication A → B may be interpreted, informally, as "given
A we have an argument for B".)</p>
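      <p>The seller example can be made concrete with a small sketch in code (our own illustrative encoding, not an implementation from the paper): arguments carry premises and conclusions, sub-arguments are nested, and one argument attacks another when it supports the negation of a conclusion drawn by the other argument or by one of its sub-arguments, which is exactly why A2 attacks A3 through the sub-argument A1.</p>

```python
# Illustrative encoding (not from the paper): an argument has premises,
# conclusions, and optional sub-arguments; attack means supporting the
# negation of any conclusion of the other argument or its sub-arguments.

def neg(literal):
    """Negate a propositional literal written as a string."""
    return literal[1:] if literal.startswith("-") else "-" + literal

class Argument:
    def __init__(self, name, premises, conclusions, subarguments=()):
        self.name = name
        self.premises = set(premises)
        self.conclusions = set(conclusions)
        self.subarguments = list(subarguments)

    def all_conclusions(self):
        """Conclusions of the argument and of all its sub-arguments."""
        out = set(self.conclusions)
        for sub in self.subarguments:
            out |= sub.all_conclusions()
        return out

def attacks(a, b):
    """a attacks b if a concludes the negation of something b concludes,
    possibly in one of b's sub-arguments (an undercut)."""
    return any(neg(c) in a.all_conclusions() for c in b.all_conclusions())

# A1 uses (1) and (3); A2 uses (2) and (4); A3 extends A1 with (5).
A1 = Argument("A1", {"(1)", "(3)"}, {"trusted"})
A2 = Argument("A2", {"(2)", "(4)"}, {"-trusted"})
A3 = Argument("A3", {"(1)", "(3)", "(5)"}, {"large_orders"}, subarguments=[A1])

print(attacks(A1, A2), attacks(A2, A1))  # True True: mutual attack on trusted
print(attacks(A2, A3))  # True: A2 undercuts A3 via its sub-argument A1
```

      <p>The names and string encoding of literals here are our own; the point is only that the attack relation looks at sub-arguments too.</p>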
      <p>
        Given arguments and attacks between them, we can determine which
arguments are (dialectically) acceptable, e.g. in the spirit of admissibility in abstract
argumentation [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], and define notions of credulous entailment and sceptical
entailment from the theory from which arguments and attacks are built in terms
of this acceptability. In the earlier example, given a theory T amounting to
sentences (1)-(5) above:
- arguments A1 and A2, as defined above, are both (individually) acceptable;
for example, A1 is acceptable as it does not attack itself (as sentences (1)
and (3) are consistent) and A1 defends itself against the attack by A2;
- trusted and ¬trusted are both credulously entailed by T, since the arguments
A1 and A2 are both acceptable;
- neither trusted nor ¬trusted is sceptically entailed by T, since their
(respective) negation is credulously entailed by T.
      </p>
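      <p>These notions can be made concrete with a brute-force sketch of Dung-style admissibility over an abstract framework (our own illustrative code; the function names are not from the literature): a set of arguments is admissible if it is conflict-free and defends each of its members against every attacker.</p>

```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of a list of items."""
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def admissible_sets(arguments, attack_pairs):
    """Brute-force the admissible sets of an abstract framework:
    conflict-free sets that defend each member against every attacker."""
    att = set(attack_pairs)
    result = []
    for subset in map(set, powerset(arguments)):
        conflict_free = not any((a, b) in att for a in subset for b in subset)
        defends_all = all(
            any((d, attacker) in att for d in subset)
            for (attacker, target) in att if target in subset
        )
        if conflict_free and defends_all:
            result.append(subset)
    return result

# The seller example at the abstract level: A1 and A2 attack each other.
adm = admissible_sets(["A1", "A2"], [("A1", "A2"), ("A2", "A1")])
print(sorted(sorted(s) for s in adm))  # [[], ['A1'], ['A2']]
```

      <p>That both {A1} and {A2} are admissible mirrors trusted and ¬trusted each being credulously entailed, while neither is sceptically entailed.</p>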
      <p>Note that it may be useful in some cases to use hypotheses as premises of
arguments whose conclusions are credulously entailed if the hypotheses are
dialectically legitimate and the arguments using them as premises are acceptable. For
example, it may be desirable to hypothesize
¬large orders   (6)
to form an argument A4 with premises the sentences (2) and (4) from the given
theory and hypothesis (6), to support the conclusion ¬large orders while also
including, within the premises of the argument, its defences against attacks.
The argument A4 is acceptable because it can defend against all attacks. For
example, the attacks against A4 by argument A1 and by argument A3 are both
defended against by A4, since they are defended against by the sub-argument
A2 of A4. This is depicted in Figure 1 below, showing the dialectic nature of
the argumentation semantics of acceptability. Here, A3 disputes the hypothesis
¬large orders in A4, and the sub-argument A1 of A3 disputes the conclusion</p>
      <sec id="sec-2-1">
        <title>A4 (¬large orders; ¬trusted)</title>
        <p>A1 (trusted)</p>
        <sec id="sec-2-1-1">
          <title>A2 (¬trusted)</title>
          <p>A3 (large orders; trusted)</p>
        </sec>
        <sec id="sec-2-1-2">
          <title>A2 (¬trusted)</title>
          <p>
            ¬trusted of the sub-argument A2 of A4. Notice that the defence by A2 against A3
does not come simply from the symmetric incompatibility of ¬large orders with
the conclusion large orders of A3. This can be justified via an implicit priority of premises from the given theory
over hypotheses: arguments solely supported by the given theory are stronger
than those supported by hypotheses. Notice that priorities on premises can also
be given explicitly [
            <xref ref-type="bibr" rid="ref27">27</xref>
            ], e.g. we may consider the premise wrong delivery →
¬trusted to be stronger than timely delivery → trusted (see footnote 7). This would have the
effect that A2 would attack A1 but not vice versa, and hence only ¬trusted
would be credulously entailed by the above theory.
          </p>
          <p>
            It is easy to see that credulous entailment of a sentence corresponds to a form
of satisfiability, i.e. if a sentence is credulously entailed then there exists a
maximal consistent sub-theory of T with which the sentence is also consistent. More
significantly, for appropriate definitions of arguments, attack and defence [
            <xref ref-type="bibr" rid="ref26 ref28">26,
28</xref>
            ], sceptical entailment corresponds to classical logical entailment in PL, if the
theory is consistent.
          </p>
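          <p>The correspondence with maximal consistent sub-theories can be checked by brute force on the seller theory (1)-(5) together with facts (3) and (4). The encoding below (sentences as Boolean functions, exhaustive enumeration) is our own illustrative sketch, not the paper's machinery.</p>

```python
from itertools import product, combinations

ATOMS = ["timely_delivery", "wrong_delivery", "trusted", "large_orders"]

# Theory (1)-(5): each sentence is a function from an assignment
# (a dict mapping atom -> bool) to a truth value.
T = {
    "(1)": lambda v: (not v["timely_delivery"]) or v["trusted"],
    "(2)": lambda v: (not v["wrong_delivery"]) or (not v["trusted"]),
    "(3)": lambda v: v["timely_delivery"],
    "(4)": lambda v: v["wrong_delivery"],
    "(5)": lambda v: (not v["trusted"]) or v["large_orders"],
}

def satisfiable(sentences):
    """Check satisfiability by enumerating all assignments."""
    return any(
        all(s(dict(zip(ATOMS, bits))) for s in sentences)
        for bits in product([False, True], repeat=len(ATOMS))
    )

def consistent_with_some_mcs(sentence):
    """Is the sentence consistent with some maximal consistent
    sub-theory of T? (Brute force over all subsets.)"""
    names = list(T)
    subsets = [set(c) for r in range(len(names) + 1)
               for c in combinations(names, r)]
    consistent = [s for s in subsets if satisfiable([T[n] for n in s])]
    maximal = [s for s in consistent
               if not any(s < t for t in consistent)]
    return any(satisfiable([T[n] for n in m] + [sentence]) for m in maximal)

# T itself is inconsistent ((3),(1) give trusted; (4),(2) give -trusted),
# yet each literal is consistent with some maximal consistent sub-theory,
# mirroring the credulous entailment of both trusted and -trusted:
print(consistent_with_some_mcs(lambda v: v["trusted"]))      # True
print(consistent_with_some_mcs(lambda v: not v["trusted"]))  # True
```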
          <p>
            When working with classically inconsistent theories, PL can use the RAA
inference rule (or proof by contradiction) to derive, by means of an indirect
proof, any sentence based on a subset of contradictory premises in the theory.
For this reason, most works in logic-based argumentation, e.g. [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ], impose the
restriction that the premises of arguments be classically consistent. Instead, the
approach that we advocate [
            <xref ref-type="bibr" rid="ref28">28</xref>
            ] redefines logic in terms of argumentation (rather
than retaining classical logic and building arguments based on it) and ascribes
a different, argumentative nature to the RAA rule, rather than treating it as a
building block in the construction of arguments; arguments are restricted instead to direct
proofs, e.g. using a Natural Deduction system [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ] but without any use of the
RAA rule.
          </p>
          <p>
            Separating out the RAA rule, and excluding it from being one of the
primary means to construct arguments, gives rise to (a form of) Argumentation
Logic (AL) [
            <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
            ] and allows us to overcome the technical difficulties of working
with inconsistent premises, which Aristotle had to face too. AL offers a
semantically equivalent reformulation of classical PL in the realm of classically consistent
theories of premises that smoothly extends into the inconsistent realm without
trivializing as PL does. We will informally describe AL here and illustrate it
by means of examples (the technical details are not essential here and can be
found in [
            <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
            ]). AL defines a form of sceptical entailment, as indicated earlier,
but in terms of recursive notions of acceptability and non-acceptability of
arguments [
            <xref ref-type="bibr" rid="ref24">24</xref>
            ] rather than admissibility as in [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ]. Arguments are identified with
their premises, which may be drawn from a given theory as well as
hypothesized, as discussed earlier, and constructed with a notion of direct derivation
based on a subset, ⊢DD, of standard inference rules in Natural Deduction that
does not contain the RAA inference rule. The important technical aspect of AL
is that the inferences from the RAA rule are recovered semantically through the
notion of non-acceptability of arguments. Informally, showing that a
hypothesis, φ, is inconsistent is replaced by showing that arguments which contain φ
are non-acceptable and hence such arguments cannot lead to the entailment of
any sentences. This is different from the classical interpretation of the RAA
rule, which in addition leads to the (sceptical) entailment of ¬φ. This additional
step is not present in AL and this absence gives AL flexibility to reason with
contradictory information.
(Footnote 7: This is analogous to assigning higher strength to associations that refer to a more
specific subclass in inheritance reasoning, such as the famous AI example of "penguins
do not fly" considered as a stronger property association than "birds fly".)
          </p>
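          <p>As an illustration of direct derivations, here is a small sketch (our own encoding, assuming, as the example later does, that ⊢DD contains only Modus Ponens) that forward-chains over the rules and reports whether a contradiction is directly derivable from a set of hypotheses.</p>

```python
# Sketch of a direct-derivation check with Modus Ponens only (our own
# encoding): rules are (condition, conclusion) pairs over string
# literals; we forward-chain and report whether both a literal and its
# negation become directly derivable.

RULES = [
    ("timely_delivery", "trusted"),         # (1)
    ("wrong_delivery", "-trusted"),         # (2)
    ("-timely_delivery", "-trusted"),       # (7)
    ("timely_delivery", "wrong_delivery"),  # (8)
]

def closure(facts):
    """All literals directly derivable from facts by Modus Ponens."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, concl in RULES:
            if cond in derived and concl not in derived:
                derived.add(concl)
                changed = True
    return derived

def derives_bottom(facts):
    """True if the closure contains a literal and its negation."""
    d = closure(facts)
    return any(("-" + lit) in d for lit in d if not lit.startswith("-"))

print(derives_bottom({"trusted", "-timely_delivery"}))  # True, via (7)
print(derives_bottom({"timely_delivery", "trusted"}))   # True, via (8),(2)
```

          <p>These two contradictions are exactly the attacks that drive the dialectic process in the example that follows.</p>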
          <p>In AL, the notions of acceptability and non-acceptability of arguments are
defined dialectically in terms of notions of attack and defence amongst arguments,
where defence is defined as a restricted type of attack, ensuring asymmetry (as
discussed earlier).</p>
          <p>Acceptability is defined as a relative notion between arguments, i.e. that
an argument a is acceptable with respect to another argument a′, informally
meaning that if we a-priori accept a′ then a is an acceptable argument. The
argument a′ helps to render a acceptable: for example, when we want to adopt
an argument, a′ could be used in its defence against counter-arguments; analogously,
an argument could render itself non-acceptable by rendering one of
its counter-arguments acceptable. The informal definition of this acceptability
notion of "a is acceptable with respect to a′" is that for every argument, b,
attacking a there is an argument, d, that defends against b and this defending
argument "d is acceptable with respect to a ∪ a′". The defending argument d in
effect renders the attacking argument non-acceptable. Then, non-acceptability
is defined as the classical negative dual of acceptability. Finally, a sentence (seen
as an argument) is (sceptically) entailed from a theory if it is acceptable with
respect to the empty argument and its negation (seen again as an argument)
is not acceptable with respect to the empty argument. To illustrate this let us
consider the additional statement:</p>
          <p>- "normally, a seller that delivers late is not trustworthy",
represented as
¬timely delivery → ¬trusted   (7)
and, further, assume that we have also learned (by analyzing our sale records)
that, for our particular seller,</p>
          <p>- "normally, when the delivery is on time the wrong item is delivered",
timely delivery → wrong delivery   (8)
Consider now the theory T consisting of sentences (1), (2), (7) and (8). This
is classically consistent and it classically entails ¬trusted. Let us see how in
AL we would derive that trusted is non-acceptable, which is needed for AL to
(sceptically) entail ¬trusted. The non-acceptability of trusted can be determined
by considering the argument B1 with premises T and hypotheses {trusted}, and
the dialectical process of argumentation depicted in Figure 2 (see footnote 8).</p>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>B1 ({trusted})</title>
        <sec id="sec-2-2-1">
          <title>B2 ({¬timely delivery})</title>
        </sec>
        <sec id="sec-2-2-2">
          <title>B3 ({timely delivery})</title>
        </sec>
        <sec id="sec-2-2-3">
          <title>B1 ({trusted})</title>
          <p>attack: T ∪ {trusted} ∪ {¬timely delivery} ⊢DD ⊥
defence by opposing the hypothesis of B2
attack: T ∪ {timely delivery} ∪ {trusted} ⊢DD ⊥
(Figure 2: the dialectic process for this argument, given T = {(1), (2), (7), (8)}, in order to determine classical entailment of
trusted from T. All arguments have T as their premises and hypotheses as indicated
in brackets.)</p>
          <p>The figure shows that there is an attacking argument (B2) for which there
is no acceptable defence, as the only possible defence (B3) against this attack is
attacked by the proposed argument (B1): the defence is rendered non-acceptable
by the proposed argument and hence the proposed argument is not acceptable.</p>
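          <p>This dialectic check can be sketched in code. The encoding below is a much-simplified illustration of our own: the attack and defence relations of the example are hard-coded (rather than derived from T via direct derivations), and the recursion follows the informal relative-acceptability definition given earlier.</p>

```python
# Arguments identified with their hypothesis sets, as in the example.
B1 = frozenset({"trusted"})
B2 = frozenset({"-timely_delivery"})
B3 = frozenset({"timely_delivery"})

# B2 attacks B1; B1 attacks B3; B3 and B2 attack each other.
ATTACKERS = {B1: [B2], B3: [B1], B2: [B3]}
# The only defence against B2 is B3, and vice versa; nothing in this
# toy table defends against B1 itself.
DEFENDERS = {B2: [B3], B3: [B2], B1: []}

def acceptable(argument, base):
    """argument is acceptable w.r.t. base if every attacker is countered
    by a defender that is itself acceptable w.r.t. base | argument.
    An argument already contained in base counts as accepted a-priori,
    and an attacker contained in what we already stand on cannot be
    defended against (this is how self-defeat shows up)."""
    if argument <= base:
        return True
    accepted = argument | base
    for attacker in ATTACKERS.get(argument, []):
        if attacker <= accepted:
            return False
        if not any(acceptable(d, accepted) for d in DEFENDERS.get(attacker, [])):
            return False
    return True

print(acceptable(B1, frozenset()))  # False: trusted is non-acceptable
```

          <p>The recursion fails exactly as in the figure: the only defence B3 against B2 is attacked by the hypothesis trusted that B1 itself stands on.</p>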
          <p>
            Note that the example theory T is classically consistent and the above
dialectic proof of the non-acceptability of trusted can be shown [
            <xref ref-type="bibr" rid="ref28">28</xref>
            ] to be connected
to the proof in Natural Deduction that this hypothesis is inconsistent with the
theory via a nested use of the RAA inference rule. In general, AL captures
classical propositional reasoning in dialectic argumentation terms when the premises
are classically consistent, and that same dialectical process extends into the
case of classically inconsistent theories as a form of paraconsistent reasoning.
This is achieved by linking RAA in PL with the recognition of a particular class
of non-acceptable arguments, namely arguments that are self-defeating.
          </p>
          <p>
            The following results of AL [
            <xref ref-type="bibr" rid="ref28">28</xref>
            ] are important for our discussion in this paper:
- AL can be chosen to be exactly equivalent to classical PL when the theory
is classically consistent.
- AL is a weakening of classical PL that does not trivialize when the theory is
classically inconsistent.
- AL avoids logical paradoxes such as the Barber of Seville paradox.
- The classical interpretation of the implication connective is not forced onto
AL; when the antecedent is false both the implication and its negation are
acceptable.
- AL corresponds to a special form of Natural Deduction where the application
of the RAA rule is restricted.
(Footnote 8: For simplicity we assume here that ⊢DD contains only the Modus Ponens inference
rule.)
          </p>
          <p>
            But perhaps the most important property of AL, shared by most
argumentation frameworks proposed in AI over the last 25 years, is its dialectic nature
of reasoning. This gives a natural form of bounded reasoning, by considering
incrementally the different attacking arguments as they arise in the case of reasoning
at hand, through new (evidential) information that is brought into the theory or
to the attention of the reasoner: a form of on-demand reasoning which, as we
will see in the next section, is supported by psychological evidence [
            <xref ref-type="bibr" rid="ref35">35</xref>
            ].
          </p>
          <p>Reconciliation - level 1: Given the above results on AL we have reached
our first level of reconciliation of human and automated reasoning.
Assuming that argumentation and argumentative dialectic reasoning is closer
to human reasoning than strict classical logic, and thus that human reasoning can
be automated through argumentation, it is important to know that an
argumentation perspective is not a radical deviation from the status quo in automated
reasoning. Indeed, the whole notion of computation and its formalization and
automation rests on the foundation of PL, Boolean algebra and von Neumann
computer architectures. Adopting the new foundation of argumentation for logic
and reasoning does not abandon the existing frameworks of automated reasoning
but co-exists with the foundation of computation through classical logic as the
"calculus of computer science".</p>
          <p>From now on we will use the term AL to refer, generically, to the re-formulation
of logical reasoning via argumentation.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Argumentation and Human Reasoning</title>
      <p>The aim of this section is to present some of the main features of human reasoning
emerging from empirical studies in the Psychology of Logic (Reasoning) and
to examine how formal argumentation in AI, and in particular AL, conforms
to these features. We will also overview some recent work from Psychology
that gives direct evidence for argumentation in human reasoning. Putting these
results together with the previous section, we will argue that human reasoning can
be formalized well through argumentation, in a way that facilitates its automation
for building artificial intelligent systems.</p>
      <p>
        Over the last century, a large amount of research has been carried out in
the area of the Psychology of Reasoning, with results suggesting that human
reasoning fails, in comparison with strict mathematical or classical logic, at
simple logical tasks, commits mistakes in probabilistic reasoning and is
subject to irrational biases in decision making [
        <xref ref-type="bibr" rid="ref12 ref22">12, 22</xref>
        ]. Earlier on, Stoerring [
        <xref ref-type="bibr" rid="ref49">49</xref>
        ]
showed empirically that humans perform with significant variations in
successfully drawing conclusions under different classical logic syllogisms. One way to
interpret this difference, as discussed in [
        <xref ref-type="bibr" rid="ref48">48</xref>
        ], is to recognize that humans do not
reason according to the classical entailment of truth in all possible models of the
premises but rather reason in an intended interpretation of the premises.
The authors of [
        <xref ref-type="bibr" rid="ref48">48</xref>
        ] go further to propose that syllogistic classical reasoning is
addressed by human subjects by constructing a suitable situation model, much
like when humans are processing or comprehending a narrative.
      </p>
      <p>
        Humans do well when using Modus Ponens with implications. But they do
not fare well in using Modus Tollens, which also indicates that they have
difficulty in reasoning by contradiction (i.e. in applying the classical RAA rule). On
the other hand, humans recognize that the falsification of an implication comes
through a case where the condition (antecedent) of an implication holds yet the
conclusion of the implication does not hold [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This indicates that, although
humans also recognize that when the antecedent of an implication is false
the status of the conclusion is irrelevant, they do not do this by recognizing that
the implication is trivially satisfied, as is the standard view in classical logic, but
rather by recognizing that the implication cannot be argued against, i.e. it is not
possible to falsify the implication in a situation where the antecedent does not
hold.
      </p>
      <p>
        Most work in the Psychology of Reasoning thus points to the observation
that human reasoning differs from reasoning in classical logic, and different
interpretations and theories on the nature of human reasoning have been proposed.
On the one hand, there are proposals that stay very close to the mathematical
and strict form of logical reasoning, such as the proposal of "The Psychology of
Proof" theory [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ], which proposes a psychological version of a proof system for
human reasoning in the style of Natural Deduction. Despite many criticisms (see
e.g. [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] for a thorough and critical review of this theory), it shows a necessary
departure from the proof systems of classical logic; more importantly, it
implicitly indicates that human reasoning is linked to argumentation, since proof
systems such as that of Natural Deduction have a natural argumentative
interpretation, as we have seen in Section 2. Other proposals, e.g. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], abandon completely
any logical form for human reasoning, treating it as the application of
specialized procedures, invoked naturally depending on the situation that people find
themselves in.
      </p>
      <p>
        Importantly, the study of the Psychology of Syllogisms [
        <xref ref-type="bibr" rid="ref18 ref20 ref21">18, 21, 20</xref>
        ] proposes
that humans use mental models to guide them in drawing inferences. Humans,
in general, construct an intended mental model which captures the logic of the
situation at hand and do not consider alternative models, as in classical
reasoning, so as to ensure the "absolute and universal" validity of the inference. As
mentioned above, this has also been pointed out in [
        <xref ref-type="bibr" rid="ref47 ref48">47, 48</xref>
        ]. In a modern
manifestation of this position, based on the nature of Computational Logic in Artificial
Intelligence, such as how Logic Programming can be applied to problems of
human intelligence, the recent book [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] argues that building structures like mental
models is a useful way to capture various features of human reasoning, not least
its defeasible nature, which as we will see below is central in our discussion.
      </p>
      <p>
        But building mental models can be seen as building arguments from the
available evidence currently at hand and general premises of common sense
knowledge that people have acquired. Then, as argued also in [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ], the mental model
approach to deduction can be reconciled with the view of reasoning through
inference rules. Here we go one step further and argue that the process of building
the mental models is the dialectic process of argumentation, based on acceptable
arguments: we will discuss an example below in Section 3.2.
      </p>
      <p>Summarizing, the work in the Psychology of Reasoning has exposed the
following salient features of human reasoning, given here from a modern view of
computational logic in Artificial Intelligence:
- Human reasoning is able to handle contradictory information without
trivializing. There is no inconsistent state of the human mind that would make
it incapable of drawing any inference.
- Human reasoning is defeasible. Conclusions drawn can be retracted in the
face of new information. Knowledge on which human reasoning is based is
not absolute.
- An implication (or a rule) expresses only an association between its
conditions and conclusion, not a necessity. Other properties of an implication
that are implicit in the classical logical interpretation are not prominent in
human reasoning.</p>
      <p>
        Finally, we single out some recent work from the Psychology of
Reasoning that provides explicit evidence for the argumentative nature of human
reasoning. In [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ] the authors have proposed, based on a variety of empirical
psychological experiments, that human reasoning is a process where humans
provide reasons to accept (or reject) a conclusion that was "raised" by some
incoming inference of the human brain. The main function of reasoning is to lay
out these inferences in detail and form possible arguments that will produce the
final conclusion; therefore, through the process of reasoning, people are
able to exchange arguments for assessing new claims: the process of reasoning is
a process of argumentation.
      </p>
      <p>What characterizes the process of reasoning proper is the awareness, not just
of a conclusion, but of an argument that justifies accepting that conclusion. To
validate their argumentative theory of reasoning, the psychologists have carried
out experiments to test how humans form, evaluate and use arguments. One
important conclusion of their study is that humans come up with
"solid" arguments when they are in an environment where they are motivated
to do so, i.e. an environment where their position is challenged. Otherwise, if
not challenged, the arguments produced can be rather naive. But once
counter-arguments or opposing positions are put forward, people produce better and
well-justified arguments for their position by finding counter-arguments (i.e.
defences) to the challengers. For example, in experiments where mock jurors
were asked to reach a verdict and were then presented with an alternative one,
it was observed that almost all of them were able to find counter-arguments
against the alternative verdict (very quickly), strengthening the arguments for
their original verdict.</p>
      <p>This indicates that automating human reasoning through argumentation can
follow a model of computation that has an "on-demand", incremental nature.
This is well suited to resource-bounded problem environments and, more
generally, to the cognitive systems that we will consider in the next section.
</p>
      <p>
        3.1 Human Reasoning and Argumentation in AI
The Psychology of Reasoning has influenced AI in its effort to automate human
reasoning. From the early stages of AI, an approach based on production rules was
developed, influenced by the psychological findings on the nature of implications
in human reasoning. Cognitive architectures [
        <xref ref-type="bibr" rid="ref1 ref30">1, 30</xref>
        ] for systems were proposed,
whose baseline computation is given by the application of production rules,
following on from their conclusions once these are drawn. Despite their relative
success (and their re-emergence today in the new era of Cognitive Computing
that we will examine in Section 4), these systems were considered to lack a
proper formal foundation. For example, how is the firing of a production rule
to be interpreted? On the one hand, when its conditions hold it must fire to give
its conclusion, and yet this conclusion could be at odds with the conclusions of other
production rules that have also fired. Attempts to provide formal foundations
for these rules have been made [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        To address these shortcomings, but also independently motivated by the
desire for a clear formal semantics of intelligent computation in AI, new formal
logical frameworks, beyond classical logic, were proposed (starting with the
seminal works [
        <xref ref-type="bibr" rid="ref33 ref34">33, 34</xref>
        ], with several other later approaches) and two areas of AI were
established for this: (1) Non-monotonic Reasoning and (2) Belief Revision. The
emphasis was on formal logics within which conclusions could be withdrawn or
revised when additional information rendered the theory (classically)
inconsistent. Furthermore, the need for non-monotonic logics was also advocated directly
by psychologists from an early stage (see e.g. [
        <xref ref-type="bibr" rid="ref47">47</xref>
        ]) by recognizing these logics
as reasoning in intended models. But despite the wealth of theoretical results,
the variety of, and differences amongst, the various approaches has prevented a
clear consensus on the question of what is the formal nature (if any) of human
reasoning.
      </p>
      <p>
        Nevertheless, with the introduction of formal argumentation in the early
1990s, to capture in particular the non-monotonic reasoning of negation as
failure in Logic Programming [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], it was shown (e.g. in [
        <xref ref-type="bibr" rid="ref10 ref6">6, 10</xref>
        ] that argumentation
can capture different approaches to non-monotonic reasoning and thus provide a
uniform framework for it. Argumentation is now used as the underlying
framework to study and develop solutions for different types of problems in AI [
        <xref ref-type="bibr" rid="ref3 ref44">3, 44</xref>
        ].
In particular, it forms the foundation for a variety of problems in multi-agent
systems (see the workshop series at http://www.mit.edu/~irahwan/argmas),
where agents need to exhibit human-like intelligence, but with emphasis on
autonomy and adaptability rather than human-like reasoning and problem
solving. Recently, argument mining (see [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] for an overview) aims to provide an
automatic way of analysing human debates in social media as argumentation
frameworks of some sort, even by identifying relations not explicit in the text [
        <xref ref-type="bibr" rid="ref50">50</xref>
        ].
      </p>
      <p>All these studies of argumentation in AI show that argumentation has the
capacity to address the salient features of human reasoning that the empirical
studies of psychology have pointed out. Argumentative reasoning is a natural
form of reasoning with contradictory information by supporting arguments for
conflicting conclusions, such that conclusions are withdrawn when new, stronger
arguments come into play. Argumentation gives a form of reasoning in an
intended mental model, the model corresponding to the conclusions that are
supported by the stronger arguments available, which is naturally defeasible as this
model can change in the face of new information.
</p>
      <p>3.2 Argumentation for Story Comprehension
The argumentative nature of human reasoning is quite pronounced in the
central task of comprehending a given situation or narrative. Humans try to
comprehend the situations they are faced with through a mental model, constructed
by inferring information that is not explicitly present in the situation yet
follows from it: explaining why things happened as they did, linking seemingly
unconnected events, and predicting how things will further evolve, by developing
arguments in each case. To construct such an intended comprehension model it
is necessary to be able to draw inferences about how information changes
over time (e.g. along the story line), whether these concern the state of the physical
or the mental (e.g. intentions of protagonists in a story) world in the narrative.</p>
      <p>
        Several works [
        <xref ref-type="bibr" rid="ref13 ref16 ref25 ref39">13, 16, 25, 39</xref>
        ] have shown that argumentation can be used
to capture such reasoning about actions and change, introduced in AI by the
seminal works of the Situation and Event Calculi. Argumentation can address
the three central problems associated with this, namely the frame, ramification
and qualification problems, in a natural way by capturing this aspect of human
reasoning in terms of persistence, causal and default world property arguments
and a natural attacking relation between these different types of arguments.
Grounding these types of arguments on information that is explicitly given in the
narrative, we can build arguments for and against drawing a certain conclusion
at a certain time point or situation in the world.
      </p>
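As a rough procedural caricature of how a causal argument defeats a persistence argument (not the argumentation formalism of the cited works; the scenario, names and data layout are our own illustration), consider:

```python
# Sketch (our own toy encoding) of persistence vs. causal arguments in
# reasoning about change: a property keeps its value from one time point
# to the next (the persistence argument) unless the effect of an observed
# action (a causal argument) defeats that persistence.

def holds(prop, t, init, actions, effects):
    value = init.get(prop, False)
    for time in range(t):
        for act in actions.get(time, []):
            if act in effects and effects[act][0] == prop:
                value = effects[act][1]   # causal argument wins at this step
    return value

init = {"light_on": False}
actions = {1: ["flip_switch"]}                   # the switch is flipped at time 1
effects = {"flip_switch": ("light_on", True)}

print(holds("light_on", 1, init, actions, effects))  # False (persists from init)
print(holds("light_on", 2, init, actions, effects))  # True (caused, then persists)
```

In the full argumentation treatment the persistence, causal and default property arguments attack one another explicitly; this procedural sketch only shows the resulting behaviour at the level of conclusions.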
      <p>
        Recently, by combining this argumentation approach to reasoning about actions
and change with empirical know-how and theoretical models from Cognitive
Psychology, it has been possible to show that argumentation can successfully capture
many of the salient features of narrative text comprehension by human readers,
as exposed through many years of empirical and subsequent theoretical studies
in Cognitive Psychology. In particular, the mental comprehension model that
is built by human readers of stories corresponds to the grounded extension of a
corresponding argumentation framework whose arguments are grounded on the
explicit information in the story [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. An associated automated comprehension
system [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], shows how the different basic forms of reasoning involved, such
as the persistence of information across time, the change brought about by
actions, and the blocking of change when it violates default properties, can be
automated through argumentation.
      </p>
      <p>Consider, for example, a scenario where we hear Alice saying to her colleague:
"Bob delivered the first car we ordered on time, but he brought us the wrong
model. This is a big problem. We should cancel the other order with Bob.".
Upon hearing Alice, we seek to bridge her utterances. We may reason that
Alice might be invoking the premises timely delivery → trusted and
wrong delivery → ¬trusted to build arguments for and against trusting Bob.
From Alice's second sentence, that there is a big problem, we might infer that the
second premise, and the argument built from it, is stronger for Alice, and that despite
the conflicting inferences of the two arguments, Alice is led to infer that Bob is
not trusted. We may further relate Bob's trustworthiness to the
cancelling of the second order through the premise ¬trusted → ¬large order,
and thus we make sense of the situation. We would as easily make sense of the
situation had Alice said "This is a minor problem. We should keep the other
order with Bob.", realizing that the preferences on Alice's premises, and on the
arguments built from them, are reversed. For more information the reader is referred to the
web site of the STAR system for Story Comprehension through Argumentation
at http://cognition.ouc.ac.cy/star.</p>
      <p>Although further work is needed to fully address the important
problems of coherence and abstraction in narrative text comprehension,
argumentation can provide a basis on which to further build these
important features of the comprehension model.</p>
      <p>Reconciliation - level 2: Argumentation logic and argumentative
dialectic reasoning are closer to human reasoning than strict classical logic, capturing
well the features that crucially characterize and distinguish human-level natural
intelligence from mathematical and scientific reasoning. Argumentation treats
all human knowledge as inherently defeasible, building from it arguments to
support conclusions. Argumentation is able to reconcile, under one umbrella, all
approaches to date in AI for automating human reasoning, in a way that conforms to
the empirical observations on the nature of human reasoning in Psychology. As
such, Argumentation Logic, and argumentation in general, provides a solid
foundation for automating human reasoning in AI, able to accommodate reasoning
with inconsistent premises and revision of conclusions.</p>
    </sec>
    <sec id="sec-4">
      <title>Argumentation for Cognitive Computing</title>
      <p>We now present how argumentation can form a foundational basis for the new
paradigm of Cognitive Computing and for the development of cognitive systems,
which, as we have argued in the Introduction, forms the renewed impetus for
the need to automate human reasoning.</p>
      <p>
        Cognitive systems, as studied for example in [
        <xref ref-type="bibr" rid="ref31 ref42">31, 42</xref>
        ] and the IBM Watson
machine, however they are defined, have two important characteristics. First,
they are cognitively compatible with humans, in the sense that there exists a level of
communication between a cognitive system and a human user analogous to the
communication between humans: cognitive systems need to be able to articulate
their results or decisions to human users, while at the same time they need
to comprehend instructions, e.g. personal preferences of human users, not as
programming commands but as overall requirements on the desired solutions of
a problem. In the end, the aim of computation in these systems is to persuade
a human user that they have a good or acceptable solution to a problem, rather
than to compute an objectively correct solution to the problem.
      </p>
      <p>In effect, what we are asking is for these systems to be able to reason like
humans and to interact with humans in the form of a dialectic argumentative
dialogue. Given the discussion of the previous sections where we have argued that
human reasoning can be formalized in terms of argumentation and that formal
argumentation from AI captures well the human dialectic process of
argumentation, we believe that argumentation can form the basis for the Reasoning API
of cognitive systems.</p>
      <p>But argumentation can have a much deeper role in the development of the
field of Cognitive Computing. This new paradigm needs a new foundational
notion of computation that classical logic cannot provide but which formal
argumentation is well suited for. This is linked to the second main characteristic
of cognitive systems: the fact that these systems depend and operate on
knowledge acquired from unstructured data, normally through some form of machine
learning, again analogously to the situation in humans, who learn
over time the knowledge on which they base their reasoning. For example,
cognitive home or work assistants, analogous to personal assistants, need to have
common sense knowledge so that they can comprehend, as a human would, their
environment (which would normally also include human users) from the explicit but
sparse information that it gives to them, information which is sufficient for a human to
comprehend the environment. This is much like story comprehension, which
needs the integration of the explicit information in the narrative with common
sense background world knowledge.</p>
      <p>
        How, then, is common sense knowledge to be acquired? The field of cognitive
computing implicitly assumes that this, or any other form of expert knowledge,
whether "data science" knowledge, knowledge from "data analytics",
or even "mined arguments", on which a cognitive system will be built, would
be obtained incrementally through a process of (largely) autonomous learning
from unstructured data. Learning frameworks that integrate symbolic learning
and reasoning have shown that this can be done in a manner where the learned
knowledge is guaranteed to draw inferences that are probably approximately
correct [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ], while exploiting raw text (e.g. from the Web) as their input [
        <xref ref-type="bibr" rid="ref36 ref38">36, 38</xref>
        ],
and accommodating the interaction of different pieces of learned knowledge in
supporting a drawn inference [
        <xref ref-type="bibr" rid="ref40 ref43 ref51">40, 43, 51</xref>
        ].
      </p>
      <p>
        Such learned knowledge cannot form strict, absolute knowledge, as it could in a
didactic, supervised form of learning. It would be defeasible knowledge that holds
for the most part. It expresses typical, not absolute, relationships between
concepts, where these links depend on the various contexts and sub-contexts
of the problem domain. Hence the form of such knowledge can be naturally
associated with argumentation: learned knowledge from unstructured data forms the
basis, the premises, for building arguments. In philosophical terms, the inductive
syllogism, as Aristotle calls this process of acquiring first principles from
empirical experience, cannot produce absolute knowledge: an inductively produced
implication A → B does not formally express the "necessity" of B when A is
known to hold, but an argument for B, thus making B "probable" in the current
case, as the philosopher David Hume [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] suggests. Recent work seeks to extend
the aforementioned learning results to the case of learning such implications [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ].
      </p>
      <p>We note that this formalization of learned knowledge in terms of
argumentation applies not only to common sense knowledge but also to expert
knowledge acquired from unstructured data. Hence, when IBM Watson is applied
to oncology in health applications, the scientific knowledge learned provides the
basis for acceptable arguments that the machine can build in its task of giving
medical recommendations. These arguments provide a way for the Watson
machine to explain the structure and quality of its recommendations to the human
doctor, as one doctor would do to another.</p>
      <p>Reconciliation - level 3: The new foundation for logic and reasoning provided by
argumentation can give the basis for a more flexible paradigm of computing on
which to build systems that are cognitively compatible with humans. Such cognitive
systems are built by exploiting common sense knowledge or expert knowledge
learned from unstructured data, whose formal understanding can naturally be
captured by argumentation logic.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>Drawing from past and recent work in Psychology and AI, we have argued that
an alternative view of logic, as a framework for argumentation, is closer to
natural human reasoning than classical logic. As such, an argumentation-based
formulation of logical reasoning offers a way to reconcile and bridge human and
automated reasoning.</p>
      <p>Within argumentation, the notion of classical logical entailment can be
formulated in a way that conforms with an open-ended dialectic process of
argumentation, offering new possibilities and features that are attuned to human reasoning.
Important such features include the handling of conflicting information and a form of
bounded rationality of "on-demand" reasoning, where arguments are defended
against counter-arguments that are grounded on the evidential information of
the case at hand rather than on hypothetical situations that might possibly arise.</p>
      <p>Argumentation Logic allows us to give a different interpretation to symbolic
knowledge, one that on the one hand has the flexibility of human reasoning while at
the same time offering a suitable target language for learning from unstructured
data. This makes argumentation a suitable logical foundation for the emerging
paradigm of Cognitive Computing. In this new paradigm, the abstract notion
of computation can be formalized as the construction of acceptable arguments
supporting solutions to problems. These solutions are not necessarily globally
and objectively optimal, but rather solutions that are locally convincing and
persuasive, according to the expectations of the human user, whether the
problem is a common sense task or a task in an expert domain.</p>
      <p>To some extent our proposal constitutes a return to early AI, when
cognitive psychology had a strong influence on the field. The crucial difference is
that now we are advocating argumentation as a new logical foundation for this
human-level AI, away from the, albeit often implicit, assumption that
classical logic can form the basis for automating human reasoning, as it does
for all other conventional computing problems within the realm of scientific and
engineering problems. In fact, AI, by insisting on a classical logic foundation
for intelligence, took a turn towards problems that fell within this engineering realm,
with an emphasis on "super intelligence" beyond the level of common sense
intelligence one would ordinarily find in humans. But for cognitive systems, where
knowledge is typically incomplete and inconsistent from a classical logic
perspective, a radical change in the formal foundations of intelligent computation is
needed. We have argued that this can be given through a reformulation of logic
with argumentation as the primary notion of logical reasoning.</p>
      <p>Given the need for a new logical foundation for cognitive computation one
might ask whether this would also need its own architecture of computers on
which to be realized, in the same way that the Von Neumann architecture is
linked to classical (Boolean) logic. What would such an architecture be? Could
it be a connectionist architecture where the threshold activation of signal
propagation is linked to argument construction under inputs for and against the
argument, thus giving argumentation a final reconciliation role between symbolic
and connectionist approaches to automating human reasoning?</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Anderson</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Lebiere</surname>
          </string-name>
          .
          <article-title>The Atomic Components of Thought</article-title>
          . Lawrence Erlbaum Associates,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>J.</given-names>
            <surname>Barnes</surname>
          </string-name>
          . The Cambridge Companion to Aristotle. Cambridge University Press,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>T. J. M.</given-names>
            <surname>Bench-Capon</surname>
          </string-name>
          and
          <string-name>
            <given-names>P. E.</given-names>
            <surname>Dunne</surname>
          </string-name>
          .
          <article-title>Argumentation in Artificial Intelligence</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>171</volume>
          (
          <issue>10-15</issue>
          ):
          <fpage>619</fpage>
          -
          <lpage>641</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>P.</given-names>
            <surname>Besnard</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Hunter</surname>
          </string-name>
          . Elements of Argumentation. The MIT Press,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>E. W.</given-names>
            <surname>Beth</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Piaget</surname>
          </string-name>
          .
          <source>Mathematical Epistemology and Psychology</source>
          . Dordrecht: Reidel,
          <year>1966</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>A.</given-names>
            <surname>Bondarenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          .
          <article-title>An Assumption-based Framework for Non-monotonic Reasoning</article-title>
          .
          <source>In Proceedings of the 2nd International Workshop on Logic Programming and Non-monotonic Reasoning (LPNMR)</source>
          , pages
          <fpage>171</fpage>
          -
          <lpage>189</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>P.</given-names>
            <surname>Cheng</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Holyoak</surname>
          </string-name>
          .
          <article-title>Pragmatic Reasoning Schemas</article-title>
          .
          <source>Cognitive Psychology</source>
          ,
          <volume>17</volume>
          :
          <fpage>391</fpage>
          -
          <lpage>416</lpage>
          ,
          <year>1985</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>I.-A.</given-names>
            <surname>Diakidoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <article-title>Story Comprehension through Argumentation</article-title>
          .
          <source>In Proceedings of the 5th International Conference on Computational Models of Argument (COMMA)</source>
          , pages
          <fpage>31</fpage>
          -
          <lpage>42</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>I.-A.</given-names>
            <surname>Diakidoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <article-title>STAR: A System of Argumentation for Story Comprehension and Beyond</article-title>
          .
          <source>In Proceedings of the 12th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense)</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Dung</surname>
          </string-name>
          .
          <article-title>On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-person Games</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>77</volume>
          :
          <fpage>321</fpage>
          -
          <lpage>357</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Dung</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Mancarella</surname>
          </string-name>
          .
          <article-title>Production Systems with Negation as Failure</article-title>
          .
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          ,
          <volume>14</volume>
          (
          <issue>2</issue>
          ):
          <fpage>336</fpage>
          -
          <lpage>352</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Evans</surname>
          </string-name>
          .
          <article-title>Logic and Human Reasoning: An Assessment of the Deduction Paradigm</article-title>
          .
          <source>Psychological Bulletin</source>
          ,
          <volume>128</volume>
          (
          <issue>6</issue>
          ):
          <fpage>978</fpage>
          -
          <lpage>996</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>N. Y.</given-names>
            <surname>Foo</surname>
          </string-name>
          and
          <string-name>
            <given-names>Q. B.</given-names>
            <surname>Vo</surname>
          </string-name>
          .
          <article-title>Reasoning about Action: An Argumentation-theoretic Approach</article-title>
          .
          <source>Artificial Intelligence Research</source>
          ,
          <volume>24</volume>
          :
          <fpage>465</fpage>
          -
          <lpage>518</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. U. Furbach and C. Schon, editors.
          <source>Proceedings of the Workshop on Bridging the Gap between Human and Automated Reasoning - A Workshop of the 25th International Conference on Automated Deduction (CADE-25), Berlin, Germany, August 1, 2015</source>
          , volume
          <volume>1412</volume>
          of
          <source>CEUR Workshop Proceedings</source>
          . CEUR-WS.org,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>G.</given-names>
            <surname>Gentzen</surname>
          </string-name>
          .
          <article-title>Untersuchungen über das logische Schließen</article-title>
          .
          <source>Mathematische Zeitschrift</source>
          ,
          <volume>39</volume>
          :
          <fpage>176</fpage>
          -
          <lpage>210</lpage>
          ,
          <year>1935</year>
          . English translation in M. Szabo (ed.),
          <source>The Collected Papers of Gerhard Gentzen</source>
          , North Holland, Amsterdam,
          <year>1969</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>E.</given-names>
            <surname>Hadjisoteriou</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          .
          <article-title>Reasoning about Actions and Change in Argumentation</article-title>
          .
          <source>Argument and Computation</source>
          ,
          <year>2016</year>
          . doi: 10.1080/19462166.2015.1123774
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>D.</given-names>
            <surname>Hume</surname>
          </string-name>
          .
          <source>A Treatise of Human Nature</source>
          . Oxford: Clarendon Press,
          <year>1888</year>
          . Originally published 1739-40. Edited by L. A. Selby-Bigge.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>P.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <source>Mental Models</source>
          . Cambridge University Press,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>P.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          .
          <article-title>Rules and Illusions: A Critical Study of Rips's The Psychology of Proof</article-title>
          .
          <source>Minds and Machines</source>
          ,
          <volume>7</volume>
          (
          <issue>3</issue>
          ):
          <fpage>387</fpage>
          –
          <lpage>407</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>P.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. M. J.</given-names>
            <surname>Byrne</surname>
          </string-name>
          .
          <source>Deduction</source>
          . Hillsdale, NJ: Lawrence Erlbaum Associates,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>P.</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Steedman</surname>
          </string-name>
          .
          <article-title>The Psychology of Syllogisms</article-title>
          .
          <source>Cognitive Psychology</source>
          ,
          <volume>10</volume>
          :
          <fpage>64</fpage>
          –
          <lpage>99</lpage>
          ,
          <year>1978</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <given-names>D.</given-names>
            <surname>Kahneman</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Tversky</surname>
          </string-name>
          .
          <article-title>Subjective Probability: A Judgment of Representativeness</article-title>
          .
          <source>Cognitive Psychology</source>
          ,
          <volume>3</volume>
          (
          <issue>3</issue>
          ):
          <fpage>430</fpage>
          –
          <lpage>454</lpage>
          ,
          <year>1972</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          .
          <article-title>Abductive Logic Programming</article-title>
          .
          <source>Journal of Logic and Computation</source>
          ,
          <volume>2</volume>
          (
          <issue>6</issue>
          ):
          <fpage>719</fpage>
          –
          <lpage>770</lpage>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Mancarella</surname>
          </string-name>
          .
          <article-title>On the Semantics of Abstract Argumentation</article-title>
          .
          <source>Journal of Logic and Computation</source>
          ,
          <volume>23</volume>
          (
          <issue>5</issue>
          ):
          <fpage>991</fpage>
          –
          <lpage>1015</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Miller</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          .
          <article-title>An Argumentation Framework of Reasoning about Actions and Change</article-title>
          .
          <source>In Proceedings of the 5th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR)</source>
          , pages
          <fpage>78</fpage>
          –
          <lpage>91</lpage>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Mancarella</surname>
          </string-name>
          .
          <article-title>Argumentation and Propositional Logic</article-title>
          .
          <source>In Proceedings of the 9th Panhellenic Logic Symposium (PLS)</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Mancarella</surname>
          </string-name>
          .
          <article-title>Argumentation for Propositional Logic and Nonmonotonic Reasoning</article-title>
          .
          <source>In Proceedings of the 11th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense)</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Mancarella</surname>
          </string-name>
          .
          <article-title>Argumentation Logic</article-title>
          .
          <source>In Proceedings of the 5th International Conference on Computational Models of Argument (COMMA)</source>
          , pages
          <fpage>12</fpage>
          –
          <lpage>27</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <given-names>R.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          .
          <source>Computational Logic and Human Thinking: How to Be Artificially Intelligent</source>
          . Cambridge University Press, New York, NY, USA,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Laird</surname>
          </string-name>
          .
          <source>The Soar Cognitive Architecture</source>
          . MIT Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <given-names>P.</given-names>
            <surname>Langley</surname>
          </string-name>
          .
          <article-title>The Cognitive Systems Paradigm</article-title>
          .
          <source>Advances in Cognitive Systems</source>
          ,
          <volume>1</volume>
          :
          <fpage>3</fpage>
          –
          <lpage>13</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <given-names>M.</given-names>
            <surname>Lippi</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Torroni</surname>
          </string-name>
          .
          <article-title>Argumentation Mining: State of the Art and Emerging Trends</article-title>
          .
          <source>ACM Transactions on Internet Technology</source>
          ,
          <volume>16</volume>
          (
          <issue>2</issue>
          ):
          <fpage>10</fpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <given-names>J.</given-names>
            <surname>McCarthy</surname>
          </string-name>
          .
          <article-title>Programs with Common Sense</article-title>
          .
          <source>In Semantic Information Processing</source>
          , pages
          <fpage>403</fpage>
          –
          <lpage>418</lpage>
          . MIT Press,
          <year>1968</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <given-names>J.</given-names>
            <surname>McCarthy</surname>
          </string-name>
          .
          <article-title>Circumscription – A Form of Non-monotonic Reasoning</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>13</volume>
          (
          <issue>1</issue>
          ):
          <fpage>27</fpage>
          –
          <lpage>39</lpage>
          ,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <given-names>H.</given-names>
            <surname>Mercier</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Sperber</surname>
          </string-name>
          .
          <article-title>Why do Humans Reason? Arguments for an Argumentative Theory</article-title>
          .
          <source>Behavioral and Brain Sciences</source>
          ,
          <volume>34</volume>
          :
          <fpage>57</fpage>
          –
          <lpage>74</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          .
          <article-title>Reading Between the Lines</article-title>
          .
          <source>In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI)</source>
          , pages
          <fpage>1525</fpage>
          –
          <lpage>1530</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          .
          <article-title>Partial Observability and Learnability</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>174</volume>
          (
          <issue>11</issue>
          ):
          <fpage>639</fpage>
          –
          <lpage>669</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          .
          <article-title>Machines with WebSense</article-title>
          .
          <source>In Proceedings of the 11th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense)</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          .
          <article-title>Story Understanding... Calculemus!</article-title>
          .
          <source>In Proceedings of the 11th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense)</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          .
          <article-title>Simultaneous Learning and Prediction</article-title>
          .
          <source>In Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR)</source>
          , pages
          <fpage>348</fpage>
          –
          <lpage>357</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          .
          <article-title>Cognitive Reasoning and Learning Mechanisms</article-title>
          .
          <source>In Proceedings of the 4th International Workshop on Artificial Intelligence and Cognition (AIC)</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Miller</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Turan</surname>
          </string-name>
          .
          <article-title>Cognitive Programming</article-title>
          .
          <source>In Proceedings of the 3rd International Workshop on Artificial Intelligence and Cognition (AIC)</source>
          , pages
          <fpage>3</fpage>
          –
          <lpage>18</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. G.</given-names>
            <surname>Valiant</surname>
          </string-name>
          .
          <article-title>A First Experimental Demonstration of Massive Knowledge Infusion</article-title>
          .
          <source>In Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning (KR)</source>
          , pages
          <fpage>378</fpage>
          –
          <lpage>389</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          44.
          <string-name>
            <given-names>I.</given-names>
            <surname>Rahwan</surname>
          </string-name>
          and
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Simari</surname>
          </string-name>
          .
          <source>Argumentation in Artificial Intelligence</source>
          . Springer Publishing Company, 1st edition,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          45.
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Rips</surname>
          </string-name>
          .
          <source>The Psychology of Proof: Deductive Reasoning in Human Thinking</source>
          . Cambridge MA: MIT Press,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          46.
          <string-name>
            <given-names>M.</given-names>
            <surname>Shenefelt</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>White</surname>
          </string-name>
          .
          <source>If A, Then B: How Logic Shaped the World</source>
          . Columbia University Press,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          47.
          <string-name>
            <given-names>K.</given-names>
            <surname>Stenning</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>van Lambalgen</surname>
          </string-name>
          .
          <source>Human Reasoning and Cognitive Science</source>
          . MIT Press,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          48.
          <string-name>
            <given-names>K.</given-names>
            <surname>Stenning</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>van Lambalgen</surname>
          </string-name>
          .
          <article-title>Reasoning, Logic, and Psychology</article-title>
          .
          <source>WIREs Cognitive Science</source>
          ,
          <volume>2</volume>
          (
          <issue>5</issue>
          ):
          <fpage>555</fpage>
          –
          <lpage>567</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          49.
          <string-name>
            <given-names>G.</given-names>
            <surname>Stoerring</surname>
          </string-name>
          .
          <article-title>Experimentelle Untersuchungen ueber einfache Schlussprozesse</article-title>
          .
          <source>Archiv fuer die gesammte Psychologie</source>
          ,
          <volume>11</volume>
          :
          <fpage>1</fpage>
          –
          <lpage>127</lpage>
          ,
          <year>1908</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          50.
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Torroni</surname>
          </string-name>
          .
          <article-title>Bottom-Up Argumentation</article-title>
          .
          <source>In Proceedings of the 1st International Workshop on Theories and Applications of Formal Argumentation (TAFA)</source>
          , pages
          <fpage>249</fpage>
          –
          <lpage>262</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          51.
          <string-name>
            <given-names>L. G.</given-names>
            <surname>Valiant</surname>
          </string-name>
          .
          <article-title>Robust Logics</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>117</volume>
          (
          <issue>2</issue>
          ):
          <fpage>231</fpage>
          –
          <lpage>253</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>