     Dialogical Scaffolding for Human and Artificial Agent
                           Reasoning

                           Sanjay Modgil (sanjay.modgil@kcl.ac.uk)

                         Department of Informatics, King’s College London



          Abstract. This paper proposes the use of computational models of argumentation
          based dialogue for enhancing the quality and scope (‘scaffolding’) of both hu-
          man and artificial agent reasoning. In support of this proposal I draw on work
          in cognitive psychology that justifies such a role for human reasoning. I also re-
          fer to recent concerns about the potential dangers of artificial intelligence (AI),
          and the consequent need to ensure that AI actions are aligned with human val-
          ues. I will advocate that argumentation based models of dialogue can contribute
          to value alignment by enabling joint human and AI reasoning that may indeed
          be better purposed to resolve challenging ethical issues. This paper also reviews
           research in formal models of argumentation based reasoning and dialogue that
          will underpin applications for scaffolding human and artificial agent reasoning.


1      Introduction
This position paper argues that computational models of argumentation based dialogue
can play a key role in enhancing the quality and scope (henceforth referred to as ‘scaf-
folding’1 ) of both human and artificial agent reasoning. In developing the argument I
will draw on ground-breaking work in cognitive psychology – Sperber and Mercier’s
‘argumentative theory of reasoning’ [25] – to support the scaffolding role of argumen-
tation based dialogue for human reasoning. I will also refer to work by N. Bostrom [7]
(and others), who argue the need for aligning the values of artificial intelligence (AI)
and humans, so as to avert the potential threats that AI poses to humans. I will propose
that argumentation based models of dialogue can contribute to solving the so-called
‘value alignment problem’, through enabling joint human and AI reasoning that may
indeed be better purposed to resolve challenging moral and ethical issues, as compared
with such deliberations being exclusively within the purview of humans or AI.
    The remainder of this paper is structured as follows. In Section 2 I review work
on provision of argumentative characterisations of non-monotonic inference over given
static belief bases. I then describe how these characterisations can be generalised to dia-
logical models in which interlocutors effectively reason non-monotonically over sets of
beliefs that are incrementally defined by the contents of locutions communicated during
the course of such dialogues. Section 3 then reviews Sperber and Mercier’s theory of
the evolutionary impetus for human acquisition of explicit (system 2) reasoning capaci-
ties, and the theory’s empirically supported implication that reasoning delivers superior
 1
     I use the term ‘scaffolding’ here in view of its pedagogical use to describe instructional
     techniques for inculcating interpretative and reasoning skills.
outcomes when human reasoners engage in dialogue. This implication in turn suggests
benefits accruing from deployment of computational models of argumentation based
dialogue for scaffolding human reasoning. I then propose deployment of such models
in education, deliberative democracy, and, more speculatively, the puncturing of belief
bubbles erected by the filtering algorithms of social media. Section 4 then reviews argu-
ments to the effect that future AI systems may pose serious threats to humankind, due
to their single-minded pursuit of operators’ goals. This has led researchers to focus on
the problem of how to ensure that the reasoning of AI systems accounts for human values.
I argue that dialogical models will contribute to solving this problem, by enabling
joint human-AI reasoning, so that human values may inform AI reasoning tasks that
have an ethical dimension. In Section 5 I review current work that can contribute to the
development of dialogical models for the applications envisaged in Sections 3 and 4,
and point to future research challenges. Finally, Section 6 concludes the paper.


2   From Non-monotonic Inference to Distributing Non-monotonic
    Reasoning through Dialogue

AI research in the 80s and early 90s saw a proliferation of non-monotonic logics tackling
classical logic’s failure to formalise our common-sense ability to reason in the presence
of incomplete and uncertain information. In the classical paradigm, the inferences from a
set of formulae grow monotonically as the set of formulae grows. In practice, however,
previously obtained conclusions may be withdrawn because new information conflicts with
what we concluded previously, or with the assumptions made in drawing those conclusions.
Essentially then, a key concern of non-monotonic reasoning is how to arbitrate amongst
conflicting information; a concern that is central
to the argumentative enterprise. It is this insight that is substantiated by argumentative
characterisations of non-monotonic inference. Most notably, in Dung’s seminal theory
of argumentation [15] and subsequent developments of the theory, one constructs the
arguments A from a given set of formulae ∆ (essentially each argument being a self-
contained proof of a conclusion derived from the supporting formulae). Arguments are
then related to each other in an argument framework (AF) ⟨A, →⟩, where the binary
attack relation → ⊆ A × A denotes that one argument is a counter-argument to (attacks)
another; for example, when the conclusion, or claim, of one argument negates a formula
in the support of the attacked argument. In this way the formulae ∆ are said to
‘instantiate’ the AF, as henceforth indicated by AF∆ . Of particular relevance here are
developments of Dung’s theory that account for preferences over arguments [1, 5,
32]. For example, preferences may be based on the relative reliability of the sources of
the arguments, the epistemic certainty attached to the arguments’ constituent formulae,
principles of precedence (such as when rules in legal arguments encoding more recent
legislation are given higher priority), or orderings of values associated with the decision
options supported by arguments in practical reasoning. Preferences can then be used to
distinguish those attacks that can be deployed dialectically; that is, even though X’s
claim negates a formula in the support of Y, we have that (X, Y) ∈ → only if X ⊀ Y
(Y is not strictly preferred to X).
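    By way of illustration, the following minimal Python sketch (the argument names, the
encoding of attacks as ordered pairs, and the function are my own, not drawn from a
particular system in the literature) shows how a strict preference relation filters
candidate attacks down to those that are dialectically deployable:

    # A minimal illustrative sketch (names and encoding are my own, not from
    # a specific system in the literature) of preference-filtered attacks in
    # an instantiated framework.

    arguments = {"A", "B", "C"}

    # (X, Y) means X's claim negates a formula in the support of Y.
    candidate_attacks = {("B", "A"), ("C", "B")}

    # (X, Y) means X is strictly preferred to Y.
    strictly_preferred = {("A", "B")}

    def deployable_attacks(attacks, prefs):
        """Retain (X, Y) only if Y is not strictly preferred to X."""
        return {(x, y) for (x, y) in attacks if (y, x) not in prefs}

    attack_relation = deployable_attacks(candidate_attacks, strictly_preferred)
    # B's attack on A is dropped because A is strictly preferred to B;
    # C's attack on B survives, so attack_relation == {("C", "B")}.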
    Conflict-free sets (i.e., sets in which no argument attacks another) of acceptable ar-
guments (extensions) of an AF ⟨A, →⟩ are then identified under different ‘semantics’.
The fundamental principle of ‘defense’ licenses membership of an argument X in any
such extension E ⊆ A: X ∈ E only if, whenever (Y, X) ∈ →, there exists Z ∈ E with
(Z, Y) ∈ → (E is said to defend X). An admissible extension E is one that defends all its contained argu-
ments. E is a complete extension if all arguments defended by E are in E. Then E is a
preferred, respectively the grounded, extension, if E is a maximal (under set inclusion),
respectively the minimal (under set inclusion) complete extension. E is stable if all ar-
guments outside of E are attacked by some argument in E. The claims of sceptically or
credulously justified arguments (those arguments that are in all extensions, or, respec-
tively, at least one extension) identify a semantics-parameterised family of inference
relations over ∆:

      ∆ |∼(a,s) α iff α is the claim of an a ∈ {sceptically, credulously} justified
      argument under semantics s ∈ {grounded, preferred, stable} in AF∆                    (1)
    Argumentation thus provides for the definition of novel non-monotonic inference
relations. Moreover, Dung and others [2, 15, 32, 50] have shown, for various established
non-monotonic logics L² and their associated inference relations |∼L , that:
                         for some a, s : ∆ |∼L α iff ∆ |∼(a,s) α                              (2)
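    To make these definitions concrete, the following brute-force Python sketch (my own,
intended only for very small, hypothetical frameworks rather than as a serious
implementation) computes the grounded extension and enumerates the complete extensions,
from which the preferred and stable extensions, and hence the sceptically and credulously
justified arguments of (1), can be read off:

    from itertools import combinations

    # A brute-force sketch (my own, for illustration only) of the semantics
    # described above, over a small hypothetical framework C -> B -> A.
    A_args = {"A", "B", "C"}
    attacks = {("C", "B"), ("B", "A")}

    def attackers(x):
        return {y for (y, z) in attacks if z == x}

    def defends(E, x):
        # E defends x iff every attacker of x is attacked by some member of E.
        return all(any((z, y) in attacks for z in E) for y in attackers(x))

    def conflict_free(E):
        return not any((x, y) in attacks for x in E for y in E)

    def grounded():
        # Least fixed point of the 'defends' operator, starting from the empty set.
        E = set()
        while True:
            nxt = {x for x in A_args if defends(E, x)}
            if nxt == E:
                return E
            E = nxt

    def complete_extensions():
        subsets = [set(c) for r in range(len(A_args) + 1)
                   for c in combinations(A_args, r)]
        return [E for E in subsets
                if conflict_free(E) and E == {x for x in A_args if defends(E, x)}]

    print(grounded())              # {'A', 'C'}: C is unattacked and defends A
    print(complete_extensions())   # here the grounded extension is also the only
                                   # complete (hence preferred and stable) extension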
    Given an AF ⟨A, →⟩, argument game proof theories (e.g., [10, 30, 47]) establish
whether a given argument X ∈ A is justified. The essential idea is that a proponent wins
a game iff she successfully counter-attacks (defends) against all attacking arguments
moved by an opponent, where all attacks moved are licensed by reference to those in
the given AF . Players can backtrack to attack previous moves of their interlocutors,
so defining a tree of moves with X as the root node, and Y a child node of Z iff
(Y, Z) ∈ →. A game for X is won by the proponent iff X is justified in the sense that it
belongs to an extension of the framework under some semantics, with rules on the
allowable moves in the game varying according to the semantics³.
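    The underlying game intuition can be conveyed by a naive recursive sketch (again my
own, sound only for finite frameworks without attack cycles); actual argument game proof
theories impose further rules on legal moves, e.g. non-repetition of arguments, in order
to handle cycles and to capture the different semantics:

    # A naive sketch (my own) of the game intuition: the proponent wins the game
    # for X iff every opponent attack on X can be met by a counter-attack that the
    # proponent in turn wins. Reliable only for finite frameworks without attack cycles.
    attacks = {("C", "B"), ("B", "A")}

    def attackers(x):
        return {y for (y, z) in attacks if z == x}

    def proponent_wins(x):
        return all(any(proponent_wins(z) for z in attackers(y))
                   for y in attackers(x))

    print(proponent_wins("A"))   # True: the opponent's move B is counter-attacked by C
    print(proponent_wins("B"))   # False: C attacks B and is itself unattacked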
    Argumentation based dialogues in which agents communicate to persuade one an-
other of the truth of a proposition, or decide amongst alternative action options (e.g.,
[17, 28, 36, 45]), can be seen as generalising the above argument games in two impor-
tant respects. Firstly, consider proponent and opponent agents attacking each others’
arguments, as in the above described games, where these attacks are not licensed by
reference to a given AF ; rather, the arguments are constructed from the agents’ private
belief bases, and the contents of these arguments incrementally define a public com-
mitment store Bp. At any point in the dialogue, an agent can then construct and move
arguments from their own belief base and the contents of Bp thus far defined. An agent
can, at any point in the dialogue, be said to have successfully established the ‘topic’ α
(a belief or decision option) iff α is the claim of a justified argument (under some
semantics implemented by the rules licensing allowable moves) in AFBp [17, 36,
 2
   Including Logic Programming, Reiter’s Default Logic, Pollock’s Inductive Defeasible Logic,
   Brewka’s Preferred Subtheories and Brewka’s Prioritised Default Logic.
 3
   E.g., in [30], variations in rules licensing allowable (legal) moves yield games for membership
   of extensions under grounded, preferred and stable semantics.
28]. Dialogues also generalise games by allowing agents to submit not only arguments,
but also locutions of other types (in the tradition of agent communication languages
that build on speech act theory [41]). For example, an agent
may simply make an individual claim rather than move an argument, or question why
a claim or premise of a moved argument is the case, or retract the contents of previ-
ous locutions, or concede that an interlocutor’s assertion is the case. Thus locutions
more typical of real world dialogues are defined, and dialogue protocols specify when
locutions are legal replies to other locutions. In such dialogues, only the contents of
assertional locutions (i.e., claims and arguments) define the contents of Bp .
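    The following sketch (my own, not a specific protocol from the literature) illustrates
this distinction: locutions of several types are recorded in the dialogue, but only the
contents of assertional locutions update the public commitment store Bp, from which AFBp
can then be instantiated; the treatment of retraction shown is one possible choice:

    from dataclasses import dataclass, field

    ASSERTIONAL = {"claim", "argue"}   # other locution types: "why", "retract", "concede"

    @dataclass
    class Dialogue:
        moves: list = field(default_factory=list)   # the sequence m1, ..., mn
        Bp: set = field(default_factory=set)        # public commitment store

        def move(self, agent, locution, content=frozenset()):
            self.moves.append((agent, locution, content))
            if locution in ASSERTIONAL:
                # only claims and the formulae making up moved arguments enter Bp
                self.Bp.update(content)
            elif locution == "retract":
                # one possible treatment of retraction: withdraw the formulae from Bp
                self.Bp.difference_update(content)

    d = Dialogue()
    d.move("P", "claim", {"alpha"})                           # P claims the topic alpha
    d.move("O", "why", {"alpha"})                             # O challenges the claim
    d.move("P", "argue", {"beta", "beta -> alpha", "alpha"})  # P moves a supporting argument
    print(d.Bp)   # {'alpha', 'beta', 'beta -> alpha'}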
    Now, let a dialogue D be defined by a sequence of moves (locutions) m1 , . . . , mn ,
where each mj (j ≠ 1) replies to a single move mi