=Paper=
{{Paper
|id=Vol-3768/paper8
|storemode=property
|title=Towards Deontic Explanations Through Dialogue
|pdfUrl=https://ceur-ws.org/Vol-3768/paper8.pdf
|volume=Vol-3768
|authors=Kees van Berkel,Christian Strasser
|dblpUrl=https://dblp.org/rec/conf/comma/0002S24
}}
==Towards Deontic Explanations Through Dialogue==
Kees van Berkel¹, Christian Straßer²
¹ Institute for Logic and Computation, TU Wien, Austria
² Institute of Philosophy II, Ruhr University Bochum, Germany
Abstract
Deontic explanations answer why-questions concerning agents' obligations and permissions. Normative
systems are notoriously conflict-sensitive, making contrastive explanations pressing: "Why am I obliged
to do φ, despite my (seemingly) conflicting obligation to do ψ?" In this paper, we develop a model of
contrastive explanatory dialogues for the well-established defeasible reasoning formalism Input/Output
logic. Our model distinguishes between successful, semi-successful, and unsuccessful deontic dialogues.
We prove that the credulous and skeptical (under shared reasons) entailment relations of Input/Output
logic can be characterized in formal argumentation using preferred and grounded semantics. This result
allows us to leverage known results for dialogue models of the latter two semantics. Since this work is
the first of its kind, we discuss five key challenges for deontic explanations through dialogue.
Keywords
Defeasible normative reasoning, Contrastive deontic explanations, Logical argumentation, Dialogues
1. Introduction
Norms are indispensable in many aspects of society, ranging from law and ethics to business
protocols and AI. They motivate, guide, and regulate agents, whether they are human or
artificial. Agents affected by norms often not only need to know that they are bound by
obligations or that they may appeal to rights: they need to understand why. Such understanding
may enhance compliance and collaboration and is especially pressing when conflicts between
norms arise. For instance, I may want to know why I may overtake on the left, despite being
obliged to drive on the right. Here, a good explanation not only explains that I am permitted, but
also why the obligation to the contrary does not currently apply: the permission is an exception
to the obligation. Answers to this type of why-question are called deontic explanations.
Deontic logic is the well-established field exploring formal methods to model normative
reasoning. However, the focus has been nearly exclusively on formal systems that determine
which obligations and permissions can be inferred from a normative system, rather than on
explaining why. This gap is remarkable, especially given the increasingly vital role that normative
systems play in alignment and compliance requirements for AI. This paper investigates how
knowledge representation methods can be used to generate explanatory deontic dialogues.
The demand for explanatory models in AI is increasing [1] and formal argumentation provides
a promising method in this respect. First of all, formal argumentation has proven to be a unifying
framework for nonmonotonic reasoning [2]. In particular, two central paradigms of defeasible
ArgXAI-24: 2nd International Workshop on Argumentation for eXplainable AI
kees.van.berkel@tuwien.ac.at (K. van Berkel); christian.strasser@rub.de (C. Straßer)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073
Kees van Berkel et al. CEUR Workshop Proceedings 29–40
reasoning, constrained Input/Output (I/O) logic [3] and default logic [4], can be argumentatively
characterized [5, 6]. Second, a wide variety of methods has been proposed in Argumentation for
Explainable AI (ArgXAI, [7]). Finally, dialogue models and argumentation games [8, 9, 10] offer
dynamic characterizations of formal argumentation that have the potential to yield interactive
(or even tailor-made) explanatory episodes through dialogues.
Once a given nonmonotonic logic is represented in logical argumentation, such as I/O- or
default logic, dialogical methods can be leveraged for explanatory purposes. However, a first
obstacle, in this respect, is that most characterization results are shown with respect to stable
semantics (including [5, 6]; also see [2]), whereas other semantics such as preferred, admissible,
and grounded are more suitable for dialogical generalization. In brief, the problem with stable
extensions is that they reference the entire set of arguments (each argument is either "in" or "out"),
while we expect explanatory dialogues to focus on reasons relevant to the explanatory purpose.
Furthermore, defining dialogue models and argumentation games for skeptical reasoning is
challenging (in the context of multi-extension semantics such as preferred; cf. [10]).
Contributions. We provide dialogue models for one of the central defeasible normative
reasoning formalisms in the literature: Input/Output logic [3, 11]. Unfortunately, the original
formalism does not naturally lend itself to explanatory reasoning. Recently, a highly modular
rule-based proof system, the Deontic Argumentation Calculus (DAC), was developed with
the aim of making I/O suitable for explanatory purposes, and it was shown that DAC-induced
argumentation frameworks are sound and complete for a large class of constrained I/O logics [5]
and default logic [6]. Despite these promising results, the correspondences were only obtained
for stable semantics, making them seemingly unsuitable for dialogical deontic explanations.
In this article, we extend these results by a model of dialogue episodes for deontic explanations:
(1) As a preparatory step, we first prove that for DAC-induced argumentation frameworks
the stable and the preferred semantics coincide. This allows us to use well-developed
preferred dialogue models in the context of I/O reasoning.
(2) Furthermore, we lift recent results [12] which show that the "free consequences" of skeptical
entailment under stable semantics are identical to entailment under the grounded semantics.
In other words, we may also use grounded dialogue models for I/O reasoning.
(3) Using (1) and (2), we enhance dialogue models and define contrastive deontic explanations
that explain certain obligations in contrast to seeming obligations to the contrary.
Outline. Section 2 introduces the DAC formalism. In Section 3, we define DAC-induced
argumentation frameworks and prove that stable equals preferred and the free consequences
correspond to grounded entailment. We harness these results to specify contrastive dialogue
models in Section 4. This paper lays the foundations for a more extensive study of dialogue
models of deontic explanation and, for this reason, we discuss five key challenges in Section 5.
2. Preliminaries: A Deontic Argumentation Calculus (DAC)
We recall the basics of the Deontic Argumentation Calculus (DAC). Although the results in this
paper hold for a range of languages, base logics, and DAC systems, for readability we assume a
propositional language 𝔏 and classical logic L, and illustrate our approach for one DAC system
from [5]. To enhance explainability, 𝔏 is labeled and augmented with a language of norms:
Labeled propositional languages: 𝔏^x = {φ^x | φ ∈ 𝔏}, where x ∈ {f, o, c}.
Norm languages: 𝔏^n = {(φ, ψ) | φ, ψ ∈ 𝔏} and 𝔏^d = {¬Λ | ∅ ≠ Λ ⊆ 𝔏^n, Λ is finite}.
Normative systems: 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ is a normative system, where ℱ ⊆ 𝔏^f is a factual context,
𝒩 ⊆ 𝔏^n a normative code, and 𝒞 ⊆ 𝔏^c a set of constraints (and ℱ and 𝒞 are L-consistent).
Labels explicate the roles that propositional formulas adopt in the reasoning process: φ^f denotes
that φ is a fact, φ^o that φ is obligatory, and φ^c that obligations must be consistent with φ. We
take (φ, ψ) ∈ 𝔏^n to express the norm "given φ, it is obligatory that ψ" and ¬Λ ∈ 𝔏^d is
read as "the norms in Λ are jointly inapplicable." For ¬{(φ, ψ)}, we simply write ¬(φ, ψ). The
latter type of expression plays an essential role in defeasible reasoning with norms. The entire
enhanced I/O language is defined as the union 𝔏_IO = 𝔏^f ∪ 𝔏^o ∪ 𝔏^c ∪ 𝔏^n ∪ 𝔏^d. We write
Γ^x, Δ^x, . . . for finite sets of x-labeled formulas, where x ∈ {f, o, c}. We write Γ, Δ, . . . for any
finite subset of 𝔏_IO and Γ^− for a set Γ ⊆ 𝔏^x stripped from its label x ∈ {f, o, c}.
Defeasible normative reasoning occurs with respect to a normative system 𝒦. The basic idea
of I/O reasoning [3] and DAC is that facts (input) trigger norms from which obligations (output)
are detached, where the constraints filter the output to ensure consistency. Our aim is to construct
arguments from 𝒦. Our approach belongs to logical argumentation (a subfield of structured
argumentation [2]). We write arguments as sequents: a = Γ ⇒ φ, where prem(a) = Γ is
a (possibly empty) set of premises, and conc(a) = φ is the conclusion of the argument. An
explanatory argument is an argument stating reasons for a conclusion. We take facts, constraints,
and norms as reasons and differentiate two types of argument:

    φ^f, (φ, ψ) ⇒ ψ^o        and        φ^f, ¬ψ^c ⇒ ¬(φ, ψ)

The first type (left) contains arguments providing reasons for obligations, where the fact φ^f and
norm (φ, ψ) provide reasons for the obligation ψ^o. The second type (right) contains arguments
that attack reasons, expressing which norms are inapplicable in the given context: given
φ^f, the norm (φ, ψ) is inapplicable since its detachable obligation is inconsistent with the
constraint ¬ψ^c. The latter type attacks all arguments using (φ, ψ) as a reason.
A DAC is a sequent-style, that is, rule-based proof system for deriving these two types of
argument [5]. We assume that LC is the sound and complete sequent calculus for L.
Deontic Argumentation Calculus (DAC): Let DAC be a system consisting of the rules Ax,
FDet, DDet, Con, Ina, InaC, Taut, and Cut (Figure 1). A DAC-derivation of Γ ⇒ Δ is a
tree-like structure whose leaves are initial sequents, whose root is Γ ⇒ Δ, and whose rule
applications are instances of the rules of DAC. We say Γ ⇒ Δ is DAC-derivable (written
⊢_DAC Γ ⇒ Δ) whenever there exists a DAC-derivation for it, Γ ⊆ 𝔏_IO, and Δ ⊆ 𝔏_IO
contains at most one formula. We say Γ ⇒ Δ is 𝒦-based whenever Γ ⊆ ℱ ∪ 𝒩 ∪ 𝒞.
There are three initial sequent rules: Ax introduces labeled versions of any classically
derivable Γ ⇒ Δ to a DAC-derivation (and so LC rules are not part of DAC). Taut guarantees
that all propositional tautologies are among the output. FDet expresses factual detachment and
gives an initial explanatory argument stating that the fact φ^f and the norm (φ, ψ) are reasons
for concluding the obligation ψ^o. DDet corresponds to deontic detachment and makes it possible
  Ax:    from ⊢_LC Γ ⇒ Δ, infer Γ^x ⇒ Δ^x    (x ∈ {f, o, c} and Γ, Δ ⊆ 𝔏)
  Taut:  ⇒ (⊤, ⊤)
  FDet:  φ^f, (φ, ψ) ⇒ ψ^o
  DDet†: from φ^f, Γ ⇒ Δ, infer φ^o, Γ ⇒ Δ
  Con:   from Γ ⇒ φ^o, infer Γ, (¬φ)^c ⇒
  Ina:   from Γ, (φ, ψ) ⇒ , infer Γ ⇒ ¬(φ, ψ)
  InaC:  from Γ ⇒ , infer Γ − 𝔏^n ⇒ ¬(Γ ∩ 𝔏^n)
  Cut‡:  from Γ ⇒ φ and φ, Γ′ ⇒ Δ, infer Γ, Γ′ ⇒ Δ

Figure 1: A Deontic Argumentation Calculus (DAC). The upper row represents initial sequent rules.
Side condition (†) on DDet stipulates Γ ∩ 𝔏^n ≠ ∅; and (‡) on Cut requires that φ ∈ 𝔏_IO.
that a norm may be triggered by obligations detached from other norms (see Ex. 1). The rules
Con, Ina, and InaC deal with the defeasibility of normative reasoning and yield attacking
arguments. The Con rule expresses the consistency constraint that if Γ constitutes reasons for
φ^o, then Γ is inconsistent with the constraint ¬φ^c (where an empty right-hand side denotes
inconsistent reasons). We also refer to Γ ⇒ as an inconsistent argument. When an argument
expresses inconsistent reasons, at least one of its involved norms is inapplicable (Ina) and all
involved norms are jointly inapplicable (InaC). We refer to [5] for other DAC systems.
Example 1. We look at Chisholm's scenario [11], an archetype of contrary-to-duty reasoning.
Billie is obligated to go and help her neighbors (⊤, h) (⊤ denotes that h is detached by default). If
Billie goes to help, she must tell the neighbors she goes (h, t); otherwise, she ought not to tell them
she goes (¬h, ¬t). Suppose that Billie does not go and help, ¬h^f, and, so, violates the default duty in
(⊤, h). To know what Billie must do in light of her violation ¬h^f, the constraint is imposed that the
obligations must be consistent with the fact that Billie does not help, ¬h^c [5, 11]. Let ℱ = {¬h^f},
𝒩 = {(⊤, h), (h, t), (¬h, ¬t)}, and 𝒞 = {¬h^c} be the normative system 𝒦. The desired outcome
is that Billie ought not to tell the neighbors she goes, ¬t^o, given that she does not go.
Argument a (below left), stating that Billie ought to tell, is derived with deontic detachment.
Argument b (below right) expresses the inapplicability of (⊤, h) given the set constraint. Similar
reasoning gives the inconsistent argument x = ¬h^c, (⊤, h), (h, t), (¬h, ¬t) ⇒ , which with Con,
Cut, and InaC derives the unattackable x′ = ¬h^c ⇒ ¬{(⊤, h), (h, t), (¬h, ¬t)}.
Derivation of a:                           Derivation of b:
  FDet: h^f, (h, t) ⇒ t^o                    FDet: ⊤^f, (⊤, h) ⇒ h^o
  DDet: h^o, (h, t) ⇒ t^o                    Con:  ⊤^f, (¬h)^c, (⊤, h) ⇒
  FDet: ⊤^f, (⊤, h) ⇒ h^o                    Ina:  b = ⊤^f, (¬h)^c ⇒ ¬(⊤, h)
  Cut:  a = ⊤^f, (⊤, h), (h, t) ⇒ t^o
For the sake of completeness, we recall the I/O system out3 here and some known results [5].
Proposition 1 ([5]). Let 𝒦^− = ⟨ℱ^−, 𝒩, 𝒞^−⟩ be 𝒦 stripped from its labels. Let Γ ⊆ 𝔏, Cn(Γ) =
{φ | Γ ⊢_L φ}, and out(𝒩, Γ) = Cn({ψ | (φ, ψ) ∈ 𝒩 and φ ∈ Cn(Γ)}). Let out3(𝒩, ℱ^−) =
⋃_{i≥0} O_i, where O_0 = out(𝒩, ℱ^−) and O_{i+1} = Cn(O_i ∪ out(𝒩, O_i ∪ ℱ^−)). In words, out3 is a
closure of 𝒩 under successive (deontic) detachment with respect to ℱ. Then 𝒩′ ⊆ 𝔏^n is 𝒞-consistent
in 𝒦 if ⊥ ∉ Cn(out3(𝒩′, ℱ^−) ∪ 𝒞^−). Let Δ ⊆ 𝒩 ⊆ 𝔏^n, Θ ⊆ ℱ^− ⊆ 𝔏, and Ω ⊆ 𝒞^− ⊆ 𝔏; we have:
1. φ ∈ out3(Δ, Θ) iff ⊢_DAC Δ, Θ^f ⇒ φ^o;
2. Δ is 𝒞-inconsistent iff there are Θ ⊆ ℱ^− and Ω ⊆ 𝒞^− for which ⊢_DAC Δ, Θ^f, Ω^c ⇒ ;
3. ⊥ ∈ Cn(out3(Δ, Θ) ∪ Ω) iff for all (φ, ψ) ∈ Δ, ⊢_DAC Δ − {(φ, ψ)}, Θ^f, Ω^c ⇒ ¬(φ, ψ).
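To make the detachment-and-filtering idea concrete, the following is a toy Python sketch of out3 (our own illustration, not code from [5]): norms relate literals only, encoded as strings such as "h" and "-h" with "T" as the trivially satisfied body, and Cn is crudely approximated over a finite vocabulary (an inconsistent set of literals "entails" everything, a consistent one only itself).

```python
# Toy out3: norms are (body, head) pairs over literals; "T" is the
# trivially-true body. Cn is approximated: an inconsistent set of
# literals "entails" the whole finite vocabulary, a consistent one
# entails only itself. This is NOT full classical consequence.

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def inconsistent(lits):
    return any(neg(l) in lits for l in lits)

def cn(lits, vocab):
    return set(vocab) if inconsistent(lits) else set(lits)

def out(norms, inp, vocab):
    triggered = {head for body, head in norms
                 if body == "T" or body in cn(inp, vocab)}
    return cn(triggered, vocab)

def out3(norms, facts, vocab):
    # closure of the norms under successive (deontic) detachment w.r.t. facts
    o = out(norms, set(facts), vocab)
    while True:
        o_next = cn(o | out(norms, o | set(facts), vocab), vocab)
        if o_next == o:
            return o
        o = o_next

# Chisholm's scenario (Ex. 1): the full normative code is inconsistent
# given the violation, while the subcode {(h,t), (-h,-t)} is not.
VOCAB = {"h", "-h", "t", "-t"}
N = {("T", "h"), ("h", "t"), ("-h", "-t")}
print(out3(N, {"-h"}, VOCAB))                           # explodes to the whole vocabulary
print(out3({("h", "t"), ("-h", "-t")}, {"-h"}, VOCAB))  # only the desired "-t"
```

On this approximation, the full code 𝒩 detaches h (by default) and ¬t, then t via (h, t), yielding inconsistency, which is exactly why the constraints and maximal consistent subcodes matter in the constrained setting.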
3. Formal Argumentation with DAC-arguments
We use formal argumentation [13] to capture the defeasibility of normative reasoning and to
explicate norm conflicts [5]. An Argumentation Framework [13] contains a set of arguments
and an attack relation between arguments, where semantics stipulate conditions under which
sets of arguments are jointly acceptable. We instantiate such frameworks with DAC-arguments.
DAC-induced Argumentation Frameworks: Let 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ be a normative system. A
DAC-induced argumentation framework 𝒜ℱ(𝒦) = ⟨Arg, Att⟩ is defined as follows:
• Γ ⇒ Δ ∈ Arg iff Γ ⇒ Δ is DAC-derivable and 𝒦-based;
• a defeats b, i.e., (a, b) ∈ Att ⊆ Arg × Arg, iff conc(a) = ¬Λ ∈ 𝔏^d and Λ ⊆ prem(b).
We write Arg(Σ) = {a | ⊢_DAC a and prem(a) ⊆ Σ}.
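The defeat condition can be phrased as a one-line check once arguments are encoded as premise/conclusion pairs. The encoding below is our own sketch: an inapplicability conclusion ¬Λ is modelled as a tagged set of norms.

```python
# An argument is (premises, conclusion); an inapplicability conclusion
# ¬Λ is encoded as ("not", frozenset_of_norms). Per the definition:
# a defeats b iff conc(a) = ¬Λ and Λ ⊆ prem(b).

def defeats(a, b):
    prem_a, conc_a = a
    prem_b, conc_b = b
    return (isinstance(conc_a, tuple) and conc_a[0] == "not"
            and conc_a[1] <= prem_b)

# Ex. 1: b = ¬h^c ⇒ ¬(⊤,h) defeats d = (⊤,h) ⇒ h^o, but not vice versa.
arg_b = (frozenset({"-h^c"}), ("not", frozenset({("T", "h")})))
arg_d = (frozenset({("T", "h")}), "h^o")
print(defeats(arg_b, arg_d))  # True
print(defeats(arg_d, arg_b))  # False
```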
Argumentative Semantics and Entailment: Let ⟨Arg, Att⟩ be an 𝒜ℱ and let 𝒮 ⊆ Arg: 𝒮
defeats an argument a ∈ Arg if there is a b ∈ 𝒮 that defeats a; and 𝒮 defends a if 𝒮 defeats
every argument that defeats a. Let Defended(𝒮) be the set of arguments defended by 𝒮.
We recall the following semantic definitions [13]: 𝒮 is conflict-free if it does not defeat
any of its own elements; 𝒮 is admissible if it is conflict-free and defends all a ∈ 𝒮; 𝒮
is preferred if it is maximally admissible; 𝒮 is stable if it is conflict-free and defeats all
a ∈ Arg − 𝒮; 𝒮 is grounded if 𝒮 = ⋃_{i≥0} u_i where u_0 = ∅ and u_{i+1} = Defended(u_i).
Let sem ∈ {admissible, preferred, stable, grounded}; we define two entailment relations:
• 𝒜ℱ |∼^∩rea_sem φ iff there is an a contained in every sem-extension that concludes φ;
• 𝒜ℱ |∼^∪_sem φ iff there is a sem-extension ℰ for which there is an a ∈ ℰ concluding φ.
|∼^∩rea_sem captures the arguments (shared reasons) shared by all sem-extensions. The resulting
conclusions are called the free consequences of 𝒦, which are obligations from unproblematic
norms compatible with any sem-extension. Credulous entailment |∼^∪_sem captures the existence
of reasons in favor of a conclusion for some sem-extension, expressing a defensible stance.
Example 2. The partial 𝒜ℱ in Figure 2 captures the scenario from Ex. 1. There is only one
stable extension {a, b, g, x′} (the arrows from b, e, f, and g to x are implicit), which is also the
grounded extension (cf. Prop. 3-2 below). We may, thus, conclude 𝒜ℱ |∼ (¬t)^o (where |∼ ∈ {|∼^*_sem |
* ∈ {∩rea, ∪} and sem ∈ {stable, grounded}}). As desired, since Billie does not go to help her
neighbors, she ought not to tell them she is coming. Billie may now ask "Why am I obliged to not
tell my neighbors, despite my seeming duty to tell them I am coming to help?" To this we turn next.
We recall [5, Theorem 2] that for the system adopted in this paper, DAC-induced 𝒜ℱs are
sound and complete for the system out3 of constrained Input/Output logic [3].
Proposition 2. Let 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ and let maxfam(𝒦) = {𝒩′ ⊆ 𝒩 | ⊥ ∉ Cn(out3(𝒩′, ℱ^−) ∪
𝒞^−) and for each 𝒩′ ⊂ 𝒩″ ⊆ 𝒩, ⊥ ∈ Cn(out3(𝒩″, ℱ^−) ∪ 𝒞^−)} be the set of maximal consistent sets
of norms over 𝒦. Let 𝒜ℱ be induced by DAC and 𝒦, with the set of stable extensions stable(𝒜ℱ):
1. If 𝒩′ ∈ maxfam(𝒦), then Arg(ℱ ∪ 𝒩′ ∪ 𝒞) ∈ stable(𝒜ℱ);
2. If 𝒜 ∈ stable(𝒜ℱ), then there is an 𝒩′ ∈ maxfam(𝒦) for which 𝒜 = Arg(ℱ ∪ 𝒩′ ∪ 𝒞).
a = [(¬h)^f, (¬h, ¬t) ⇒ (¬t)^o]             b = [(¬h)^c ⇒ ¬(⊤, h)]
c = [(⊤, h), (h, t) ⇒ t^o]                  d = [(⊤, h) ⇒ h^o]
e = [(¬h)^c, (⊤, h), (h, t) ⇒ ¬(¬h, ¬t)]    f = [(¬h)^c, (⊤, h), (¬h, ¬t) ⇒ ¬(h, t)]
g = [(¬h)^c, (h, t), (¬h, ¬t) ⇒ ¬(⊤, h)]    x, x′ (see Ex. 1)

Figure 2: DAC-induced 𝒜ℱ of Ex. 1. Arrows denote defeats relative to the constraint 𝒞 = {(¬h)^c}.
Our aim is to employ dialogue models for contrastive deontic explanations, and for this we need
some additional results. Since the grounded extension is unique [13], |∼^∩rea_grounded and |∼^∪_grounded
coincide and we simply write |∼_grounded. The proofs below do not reference specific DAC rules
(outside the base system [5, 12]) and thus generalize to all DAC systems in [5] (augmented
with InaC). Proposition 3 tells us that for reasoning about the free consequences under the
stable semantics, it suffices to reason with the grounded semantics. Proposition 4 shows that the
preferred and stable semantics coincide for DAC. Consequently, these propositions allow us to
apply well-developed dialogue techniques to DAC for skeptical (in terms of free consequences)
and credulous reasoning under the grounded, respectively the preferred, semantics.
Proposition 3. Let 𝒜ℱ be a DAC-induced 𝒜ℱ for 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ and let stb(𝒜ℱ) and
grd(𝒜ℱ) be the set of stable extensions, respectively the grounded extension, of 𝒜ℱ:
1. a ∈ ⋂stb(𝒜ℱ) iff every defeater b ∈ Arg of a is 𝒞-inconsistent (i.e., it is defeated by an
argument c ∈ Arg(ℱ ∪ 𝒞)).
2. grd(𝒜ℱ) = ⋂stb(𝒜ℱ) = u_2 = Defended(Arg(ℱ ∪ 𝒞)) (and so |∼_grounded = |∼^∩rea_stable).¹

Proof. Ad 1. Let a = Γ^f_1, Δ_1, Θ^c_1 ⇒ Σ ∈ Arg(𝒦). Left-to-Right. Consider a defeater
b = Γ^f_2, Δ_2, Θ^c_2 ⇒ ¬(φ, ψ) of a. By Proposition 1, Δ_2 ∪ {(φ, ψ)} is 𝒞-inconsistent in 𝒦. Since
a ∈ ⋂stb(𝒜ℱ) and (φ, ψ) ∈ prem(a), and by Proposition 2, Δ_2 is not contained in a maximal consistent
set of norms in 𝒦 and it is therefore 𝒞-inconsistent. By Proposition 1, there are Γ^f_3 ∪ Θ^c_3 ⊆ ℱ ∪ 𝒞
such that Γ^f_3, Θ^c_3 ⇒ ¬Δ_2 defeats b. Right-to-Left. It is easy to see that ⋂stb(𝒜ℱ) contains
every argument it defends. Suppose now that b = Γ ⇒ Δ is 𝒞-inconsistent. By Proposition 1,
there is a c = Ω ⇒ ¬(Γ ∩ 𝔏^n) that defeats b and for which Ω ∩ 𝔏^n = ∅. Since c has no defeaters,
c ∈ ⋂stb(𝒜ℱ). So, ⋂stb(𝒜ℱ) defends a and therefore a ∈ ⋂stb(𝒜ℱ).
Ad 2. Left-to-Right. Straightforward. We show Right-to-Left. Let a ∈ ⋂stb(𝒜ℱ). By
Item 1, a is defended by Arg(ℱ ∪ 𝒞). Clearly, Arg(ℱ ∪ 𝒞) ⊆ u_1 ⊆ grd(𝒜ℱ) since arguments
in this set do not have defeaters. So, a ∈ u_2 ⊆ grd(𝒜ℱ).

¹ Recall that grd(𝒜ℱ) = ⋃_{i≥0} u_i where u_0 = ∅ and u_{i+1} = Defended(u_i). The proposition states the computationally
interesting result that the fixed-point construction of the grounded extension terminates on the second iteration.
Proposition 4. Let 𝒜ℱ be a DAC-induced 𝒜ℱ for 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ and let 𝒜 ⊆ Arg: 𝒜 is a
stable extension iff 𝒜 is a preferred extension (and so |∼^*_preferred = |∼^*_stable, for * ∈ {∪, ∩rea}).

Proof. Left-to-Right. Lemma 15 in [13]. Right-to-Left. Suppose not; then there is an a ∈
Arg(𝒦) − 𝒜 but no b ∈ 𝒜 for which (b, a) ∈ Att. Since 𝒜 is preferred, it is conflict-free. By
Proposition 2, 𝒩′ = {(φ, ψ) | (φ, ψ) ∈ Γ ∩ 𝔏^n and Γ ⇒ Δ ∈ 𝒜} is 𝒞-consistent and so 𝒩′ ⊆
𝒩″ for some 𝒩″ ∈ maxfam(𝒦), and there is a stable extension 𝒜′ = Arg(ℱ ∪ 𝒩″ ∪ 𝒞). Clearly,
Arg(ℱ ∪ 𝒩′ ∪ 𝒞) ⊆ Arg(ℱ ∪ 𝒩″ ∪ 𝒞) and so 𝒜 is not maximally admissible. Contradiction.
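The coincidence claimed by Propositions 3 and 4 can be checked by brute force on a toy encoding of the Fig. 2 arguments (the encoding and names are ours), where each argument is reduced to its norm premises and the set of norms its conclusion declares inapplicable. The computation confirms that the stable, preferred, and grounded extensions all equal {a, b, g, x′}, with the grounded fixed point reached at the second iteration:

```python
from itertools import combinations

# Each argument: (frozenset of norm premises, set of norms its
# conclusion declares inapplicable; None marks an ordinary conclusion).
TH, HT, NHNT = ("T", "h"), ("h", "t"), ("-h", "-t")
ARGS = {
    "a":  (frozenset({NHNT}), None),                 # => (-t)^o
    "b":  (frozenset(), frozenset({TH})),            # => not (T,h)
    "c":  (frozenset({TH, HT}), None),               # => t^o
    "d":  (frozenset({TH}), None),                   # => h^o
    "e":  (frozenset({TH, HT}), frozenset({NHNT})),  # => not (-h,-t)
    "f":  (frozenset({TH, NHNT}), frozenset({HT})),  # => not (h,t)
    "g":  (frozenset({HT, NHNT}), frozenset({TH})),  # => not (T,h)
    "x":  (frozenset({TH, HT, NHNT}), None),         # inconsistent argument
    "x'": (frozenset(), frozenset({TH, HT, NHNT})),  # => not {all three}
}
# i defeats j iff i's declared-inapplicable norms sit inside j's premises
att = {(i, j) for i, (_, ci) in ARGS.items() if ci is not None
       for j, (pj, _) in ARGS.items() if ci <= pj}

def conflict_free(S):
    return not any((i, j) in att for i in S for j in S)

def stable(S):
    return conflict_free(S) and all(
        any((i, j) in att for i in S) for j in ARGS if j not in S)

def admissible(S):
    return conflict_free(S) and all(
        any((k, b) in att for k in S) for a in S for b in ARGS if (b, a) in att)

def defended(S):
    return {a for a in ARGS
            if all(any((k, b) in att for k in S) for b in ARGS if (b, a) in att)}

names = sorted(ARGS)
subsets = [frozenset(c) for n in range(len(names) + 1)
           for c in combinations(names, n)]
stb = [S for S in subsets if stable(S)]
adm = [S for S in subsets if admissible(S)]
prf = [S for S in adm if not any(S < T for T in adm)]

G = set()
while defended(G) != G:
    G = defended(G)

print(stb, prf, G)  # all three coincide on {a, b, g, x'}
```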
4. Dialogues and Contrastive Explanations
We now provide dialogue models for contrastive deontic explanations. A contrastive explanatory
dialogue starts with a command "φ^o!" issued by the explainer, immediately followed by the
explainee asking a question of the form: "Why φ^o, despite ψ^o?"
Due to the conflict sensitivity of norm systems [3, 11], we consider contrastive why-questions
as the starting point of explanatory episodes [14]. We refer in what follows to φ^o as the claim
and to ψ^o as the counter-claim.² We do not assume that φ^o and ψ^o are derivable from the given
normative system 𝒦, nor do we assume that there is a dialectical relation between φ^o and ψ^o,
referred to as the contrastive link. Both must become explicit (if existent) through the dialogue
itself. We say there exists a contrastive link when two arguments a and b exist concluding φ^o,
respectively ψ^o, which are incompatible, meaning that there is no stable extension containing
both. In such a case, an incompatibility argument can be provided using the premises in a and b:

Proposition 5. Let 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ and 𝒜ℱ(𝒦) = ⟨Arg, Att⟩. For any two arguments a = Γ ⇒
φ^o, b = Δ ⇒ ψ^o ∈ Arg: there is no stable extension 𝒮 with a, b ∈ 𝒮 iff there is a DAC-derivable
argument Γ, Δ, Ω ⇒ with Ω ⊆ 𝒞 (we call a and b 𝒞-incompatible).

Proof. Left-to-Right. Let 𝒩′ = (Γ ∪ Δ) ∩ 𝔏^n and ℱ′ = {φ | φ^f ∈ Γ ∪ Δ}. By Prop. 2, there is
no ℳ ∈ maxfam(𝒦) with 𝒩′ ⊆ ℳ. So, there is an Ω ⊆ 𝒞 for which ⊥ ∈ Cn(out3(𝒩′, ℱ′) ∪ Ω^−).
By Prop. 1 and a Cut application, ⊢_DAC 𝒩′, ℱ′^f, Ω ⇒ . Right-to-Left. Straightforward.
An explanatory dialogue addressing "Why φ^o, despite ψ^o?" is successful whenever it contains:
c1 an argument a for φ^o and a demonstration that all (indirect) objections to a can be met;
c2 an argument b for ψ^o such that a and b are 𝒞-incompatible (recall Prop. 5);
c3 an argument c defeating b and a demonstration that all (indirect) objections to c can be met;
c4 a demonstration that the demonstrations in c1 and c3 are 𝒞-compatible.
Informally, c1 provides the "illative explanation" of "φ^o!" by stating a, containing the facts and
norms in view of which φ^o holds. It also provides the "dialectic explanation" of φ^o by refuting all
² In the philosophical literature, deontic explanations are relatively unexplored. Our account takes the question-oriented
pragmatic approach to contrastive explanation (cf. [14]), which naturally extends to dialogue models. It
accords with [15], who calls upon defeasible moral principles (here interpreted as norms) to substantiate explanations,
and with [16], who takes defeasible norms to serve as justifications, namely, norms ground as to why the called-upon
facts are explanatory. See also [17] for the role of justification in the context of normative explanations.
R: φ^o!
H: why(φ^o) despite(ψ^o)?
R: argue(a = Γ ⇒ φ^o), opening the sub-dialogue for the acceptance of a  [ℰ_claim (c1)]
R: despite(ψ^o)?
H: argue(b = Δ ⇒ ψ^o)
H: argue(Γ, Δ, Ω ⇒ ) with Ω ⊆ 𝒞  [ℰ_contrast (c2)]
R: argue(c = Σ ⇒ ¬Δ′) with Δ′ ⊆ (Δ ∩ 𝔏^n), opening the sub-dialogue for the acceptance of c  [ℰ_counter (c3)]
H: argue(Θ, Σ, Λ ⇒ ) with Θ ⊆ prem(ℰ_R) and Λ ⊆ 𝒞  [ℰ_comp (c4)]

Figure 3: A DAC-based contrastive explanatory dialogue (CED) for explainer (R) and explainee (H).
possible objections the explainee may have against concluding φ^o. c2 makes explicit the contrast
between the argument for φ^o and an argument for ψ^o, and c3 provides illative and dialectical
explanations for why ψ^o can be successfully objected to (our terminology mirrors Johnson's
well-known two-tier model of argument [18]). Last, c4 ensures that the two sub-explanations
in c1 and c3 form a 𝒞-compatible view. The intuitive idea of contrastive explanatory dialogues,
following c1-c4, is provided in Figure 3. Below, we make this formally precise.
Although explanatory dialogues are collaborative, we assume a burden of proof for the
explainer with respect to c1 and c3, and for the explainee with respect to c2 and c4. For the sake of
simplified reference, we call the explainee "human" (H) and the explainer "robot" (R).
Explanatory Dialogues: Let 𝒜ℱ(𝒦) be a DAC-induced argumentation framework. Let R be
the explainer and H the explainee. A contrastive explanatory dialogue (CED) is a sequence
ℰ = ⟨m_1, . . . , m_n⟩ of tuples m_i = ⟨pl, lo, ta⟩ called moves, such that i is m_i's position in
the dialogue, pl(m_i) ∈ {R, H} is the player making move m_i, lo(m_i) ∈ Locutions is the
locution in m_i, and ta(m_i) ∈ {m_1, . . . , m_{i−1}} ∪ {∅} is the target of m_i. Locutions =
{(x)!, why(x)despite(y)?, despite(x)?, argue(z)} is the set of expressions that interlocutors
may use, where x and y range over 𝔏^o and z ranges over Arg(𝒦). ℰ is a sem-CED
(for sem ∈ {preferred, grounded}) whenever ℰ satisfies the protocol stipulated by P1-P5.

P1 Dialogue Commencement Rules for m_i ∈ ℰ with i ∈ {1, 2, 3, 4}:
m_1 = ⟨R, φ^o!, ∅⟩
m_2 = ⟨H, why(φ^o)despite(ψ^o)?, m_1⟩
m_3 = ⟨R, argue(a), m_2⟩ with conc(a) = φ^o
m_4 = ⟨R, despite(ψ^o)?, m_2⟩
P2 General Rules for each m_i, m_j ∈ ℰ:
i) if i > 4, then lo(m_i) = argue(x) and ta(m_i) = m_j with j < i;
ii) if ta(m_i) = m_j, lo(m_i) = argue(x) and lo(m_j) = argue(y), then conc(x) =
¬Λ ∈ 𝔏^d for some Λ ⊆ prem(y);
iii) if ta(m_i) = ta(m_j) and i ≠ j, then lo(m_i) ≠ lo(m_j);
iv) if ta(m_i) = m_j, then pl(m_i) ≠ pl(m_j).
P1 stipulates that (1) R starts the dialogue with a command, to which (2) H responds with a
contrastive why-question. Then, (3) R must provide reasons for the command and, after that, (4)
shifts the burden of proof to H by requesting support for the contrastive claim. P2 stipulates rules
that hold for both R and H: (i) after the start of the dialogue any player may continue making
moves that target previous moves by stating arguments, where (ii) arguments moved against
other arguments express undermining defeats, (iii) players may not move an argument twice
against the same move, and (iv) they may not attack their own claims.
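As an illustrative sketch (the data encoding is ours), the commencement rules P1 and the general rules P2(i), (iii), and (iv) can be verified mechanically over a move list; P2(ii), which inspects the internal structure of arguments, is omitted here:

```python
def check_protocol(moves):
    """moves[i] = (player, locution, target); target is a 1-based move
    index or None. Checks P1 and P2(i), (iii), (iv) only."""
    if len(moves) < 4:
        return False
    (p1, l1, t1), (p2, l2, t2), (p3, l3, t3), (p4, l4, t4) = moves[:4]
    # P1: fixed opening pattern R-command, H-why, R-argue, R-despite
    if (p1, t1) != ("R", None) or not l1.endswith("!"):
        return False
    if (p2, t2) != ("H", 1) or not l2.startswith("why"):
        return False
    if (p3, t3) != ("R", 2) or not l3.startswith("argue"):
        return False
    if (p4, t4) != ("R", 2) or not l4.startswith("despite"):
        return False
    seen = set()
    for n, (pl, lo, ta) in enumerate(moves, start=1):
        if n > 4 and (not lo.startswith("argue") or ta is None or ta >= n):
            return False                 # P2(i): later moves argue at earlier moves
        if (lo, ta) in seen:
            return False                 # P2(iii): no repeated locution per target
        seen.add((lo, ta))
        if ta is not None and moves[ta - 1][0] == pl:
            return False                 # P2(iv): players never target own moves
    return True

dialogue = [
    ("R", "(-t)!", None),
    ("H", "why(-t)despite(t)?", 1),
    ("R", "argue(a)", 2),
    ("R", "despite(t)?", 2),
    ("H", "argue(c)", 4),
    ("R", "argue(b)", 5),
]
print(check_protocol(dialogue))                           # True
print(check_protocol(dialogue + [("R", "argue(b)", 5)]))  # False: repeated locution
```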
P3 Explainer Rules for each m_i ∈ ℰ: if ta(m_i) = m_2, then i = 3 or i = 4.
P4 Explainee Rules for each m_i, m_j, m_k ∈ ℰ:
i) if ta(m_i) = ta(m_j) = ta(m_k) = m_4 (and, so, pl(m_i) = H), then |{i, j, k}| ≤ 2;
ii) if ta(m_i) = ta(m_j) = m_4, i ≠ j, {lo(m_i), lo(m_j)} = {argue(b), argue(c)}, and lo(m_3) =
argue(a), then conc(b) = ψ^o and c = prem(a), prem(b), Ω ⇒ with Ω ⊆ 𝒞;
iii) if ta(m_i) = m_j, ta(m_j) = m_k, ta(m_k) = m_4, lo(m_j) = argue(c) with conc(c) ≠
∅, and lo(m_k) = argue(b), then
− either lo(m_i) = argue(Σ, prem(c), Ω ⇒ ), where Ω ⊆ 𝒞 and Σ ⊆ prem(ℰ_R) with ℰ_R =
{d | m ∈ ℰ, pl(m) = R, lo(m) = argue(d)};
− or lo(m_i) = argue(d) for some d with conc(d) = ¬Λ ∈ 𝔏^d and Λ ⊆ prem(c).
P3 states that R must provide exactly two moves against the contrastive why-question (one
of which provides reasons, the other questioning the contrastive claim). P4 stipulates that
the explainee H may (i) make at most two moves against R's questioning of the contrastive link,
(ii) one of which is an argument providing reasons for the counter-claim, and one which shows the
𝒞-incompatibility of the arguments for the claim and the counter-claim ψ^o. Then, (iii) H may
also move against the explainer's argument c opposing the reasons for the counter-claim. In
case c is incompatible with the other arguments offered by R, R engages in incoherent reasoning.
H may, thus, oppose by demonstrating the 𝒞-incompatibility of c and other R arguments.
A CED has a tree structure since each move has exactly one predecessor (except for the
root). A branch of ℰ containing m_i is the maximal linear sequence branch(m_i) = ⟨m_{i_1}, . . . ,
m_i = m_{i_k}, . . . , m_{i_n}⟩ such that for each m_{i_j} and m_{i_{j+1}}, ta(m_{i_{j+1}}) = m_{i_j}. We say m_{i_n} is a leaf.
Each CED consists of four subdialogues (see Fig. 3), which constitute four sub-explanations: a
subdialogue ℰ_claim (cf. c1) that engages with the argument a given in favour of the claim φ^o
(generated from m_3 down); a subdialogue ℰ_counter (cf. c3) that engages with the argument b given
in favour of the counter-claim ψ^o (generated from the move attacking the argument providing
reasons for ψ^o); and the subdialogues ℰ_contrast (cf. c2) and ℰ_comp (cf. c4), each containing at most
one node with an argument that shows the 𝒞-incompatibility of a and b, respectively the joint
𝒞-incompatibility of R's explanations ℰ_claim and ℰ_counter. These four subdialogues determine
when a given dialogue is (un/semi-)successful.
Before defining success, we add P5 to the protocol to accommodate reasoning with preferred
and grounded acceptance of arguments. For sem ∈ {preferred, grounded}, (i) R may move at
most one counter-argument to each H argument. For preferred dialogues, (ii) H is not allowed to
move the same argument twice on a branch in ℰ_claim or ℰ_counter. For grounded dialogues, (iii) R
is not allowed to move the same argument twice on a branch. We note that (i)-(iii) follow the
protocols for admissible (and, so, preferred) and grounded argumentation games [10].

P5 Preferred and Grounded Rules for each m_i, m_j ∈ ℰ and sem ∈ {preferred, grounded}:
i) if ta(m_i) = ta(m_j) = m_k with k > 3 and pl(m_i) = pl(m_j) = R, then m_i = m_j;
ii) if sem = preferred, pl(m_i) = H, and ta(m_i) = m_j, then there is no m_k ∈
branch(m_i) for which pl(m_k) = H, lo(m_k) = lo(m_i) and k ≠ i;
iii) if sem = grounded, pl(m_i) = R, and ta(m_i) = m_j, then there is no m_k ∈
branch(m_i) for which pl(m_k) = R, lo(m_k) = lo(m_i) and k ≠ i.
Successful dialogues: Let 𝒜ℱ(𝒦) be DAC-induced. A CED ℰ satisfying P1-P5 is:
− successful if ℰ_contrast ≠ ∅, ℰ_comp = ∅, and ℰ_claim and ℰ_counter both contain R-leaves only;
− semi-successful if ℰ_contrast = ∅ = ℰ_comp, and ℰ_claim contains R-leaves only;
− unsuccessful if neither of the above holds.
Then, ℰ is sem-successful when it is saturated (i.e., all movable arguments from 𝒜ℱ(𝒦)
are moved in ℰ; cf. [9]) and ℰ is successful (similarly for semi- and unsuccessful).
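The classification above can be sketched operationally (a minimal sketch; the bookkeeping that extracts the subdialogues and their leaf owners from a CED is assumed to happen elsewhere):

```python
def classify_ced(contrast_moved, comp_moved, claim_leaf_players, counter_leaf_players):
    """contrast_moved / comp_moved: whether E_contrast / E_comp contain a move;
    *_leaf_players: owners ('R' or 'H') of the leaves of E_claim and E_counter."""
    claim_ok = all(p == "R" for p in claim_leaf_players)
    counter_ok = all(p == "R" for p in counter_leaf_players)
    if contrast_moved and not comp_moved and claim_ok and counter_ok:
        return "successful"
    if not contrast_moved and not comp_moved and claim_ok:
        return "semi-successful"
    return "unsuccessful"

print(classify_ced(True, False, ["R"], ["R"]))   # successful
print(classify_ced(False, False, ["R"], []))     # semi-successful: no contrastive link shown
print(classify_ced(True, False, ["H"], ["H"]))   # unsuccessful: H-leaves survive
```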
In brief, a successful CED features (c1) an illative explanation that is supplemented by
a dialectical explanation (ℰ_claim contains only R-leaves), where (c2) the explainee is able to
demonstrate the incompatibility of the contrastive claim (ℰ_contrast ≠ ∅), which (c3) the
explainer successfully counters (ℰ_counter contains only R-leaves). Furthermore, (c4) the position
taken by R in ℰ_claim and ℰ_counter must be 𝒞-compatible. A semi-successful dialogue features
(c1), but H is not able to demonstrate the adequacy of the contrastive link (ℰ_contrast = ∅). A
dialogue can be unsuccessful for various reasons, e.g., R cannot provide an illative or dialectic
explanation, or R cannot argue against H's counter-claim.
Under saturation, it is easily checked that, for sem ∈ {preferred, grounded}, the sub-dialogues
in ℰ_claim and ℰ_counter are instances of credulous preferred and grounded argumentation
games [10] (for the latter, credulous and skeptical entailment coincide). Hence, we obtain
dialogue models that construct explanations for credulous I/O entailment (i.e., when sem =
preferred) and for skeptical I/O entailment under shared reasons (i.e., when sem = grounded).
Proposition 6. Let 𝒜ℱ(y) = ⟨Arg, Att⟩ be DAC-induced, sem ∈ {preferred, grounded}, and
ℰ_* = ⟨m_1, . . . , m_n⟩ be 𝒜ℱ(y)-based with lo(m_1) = argue(a) and * ∈ {claim, counter}:
• if ℰ_* is saturated and contains only R-leaves, then a ∈ 𝒮 for some sem-extension 𝒮;
• if a ∈ 𝒮 for some sem-extension 𝒮, there is a saturated extension of ℰ_* with only R-leaves.
Proof. Straightforward modification of the proofs of Theorems 6.2 and 6.5 in [10].
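To illustrate the semantics invoked here, the following self-contained Python sketch computes preferred and grounded extensions of a small Dung-style framework [13] by brute force. It shows how an argument can be credulously accepted under preferred semantics while the grounded extension is empty. All function names and the example framework are ours, not the paper's:

```python
# Brute-force sketch of Dung's preferred and grounded semantics [13].
# An AF is a set of arguments plus a set of attack pairs (attacker, target).
from itertools import chain, combinations

def attacks(att, S, a):
    """True iff some argument in S attacks a."""
    return any((b, a) in att for b in S)

def conflict_free(att, S):
    return not any((a, b) in att for a in S for b in S)

def defends(att, S, a):
    """S defends a: every attacker of a is itself attacked by S."""
    return all(attacks(att, S, b) for (b, x) in att if x == a)

def admissible(att, S):
    return conflict_free(att, S) and all(defends(att, S, a) for a in S)

def preferred_extensions(args, att):
    """Maximal admissible sets (brute force over all subsets)."""
    subsets = [set(s) for s in chain.from_iterable(
        combinations(sorted(args), r) for r in range(len(args) + 1))]
    adm = [S for S in subsets if admissible(att, S)]
    return [S for S in adm if not any(S < T for T in adm)]

def grounded_extension(args, att):
    """Least fixed point of the characteristic (defense) function."""
    S = set()
    while True:
        T = {a for a in args if defends(att, S, a)}
        if T == S:
            return S
        S = T

# a and b attack each other; b also attacks c:
args = {"a", "b", "c"}
att = {("a", "b"), ("b", "a"), ("b", "c")}
print(sorted(sorted(S) for S in preferred_extensions(args, att)))  # [['a', 'c'], ['b']]
print(grounded_extension(args, att))  # set()
```

Here a (and c) are credulously accepted under preferred semantics via the extension {a, c}, yet nothing is in the grounded extension, matching the gap between the two entailment notions characterized above.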
Kees van Berkel et al. CEUR Workshop Proceedings 29–40
Example 3. Let sem ∈ {preferred, grounded}: Figure i) provides a successful saturated sem-CED
for "Why ¬𝑡๐, despite 𝑡๐?"; notice that ℰ_comp is empty since ℰ_R = {๐, ๐} is ๐-compatible. Figure ii)
contains an unsuccessful saturated sem-CED for "Why 𝑡๐, despite ¬𝑡๐?", where H refutes the claim
(with ๐) and defends the counter-claim (with ๐ and ๐). The arguments in i) and ii) reference those
in Fig. 2 of Ex. 2 (for space reasons, we adopted an example for which grounded equals stable).
[Figure: tree diagrams of dialogues i) and ii) from Example 3, showing the claim, the why-despite question, and the argue moves of R and H, organized into the sub-dialogues ℰ_claim, ℰ_contrast, ℰ_comp, and ℰ_counter.]
An alternative successful CED for i) exists in which R moves ๐ against ๐, followed by H moving
๐, which is then defeated by R moving ๐. However, here we can use Proposition 3-1, which for
sem = grounded gives us a strategic shortcut: directly moving argument ๐ against ๐.
5. Challenges for Dialogical Deontic Explanations
This paper shows how to incorporate existing results in formal argumentation and refine them to
yield contrastive explanatory dialogues (CEDs) in the context of defeasible normative reasoning,
Input/Output logic in particular. This work is the first of its kind, and so we end by highlighting
some key challenges for deontic explanations (through formal argumentation).
Challenge 1: Conflict Types The contrastive claims offered by the explainee may give rise
to various kinds of conflicts with the main claim. Two particularly interesting cases when
dealing with (conditional) norms are specificity (you are not allowed to park, unless you
are medical personnel) and contrary-to-duty (don't be late, but if you are, not more than
10 minutes). Good explanations should make transparent the type of conflicts involved.
Challenge 2: Cognitive Adequacy A good explainer seeks to understand the explainee in
order to tailor the given explanation to precisely target the gaps in the explainee's
understanding. For this, the explainer may use queries and strategic argumentation,
complemented by a theory of (the explainee's) mind. Moreover, the knowledge bases of the
explainer and the explainee may be disjoint and incomplete. Tailored explanations must
additionally keep track of commitments and shifts therein throughout a dialogue.
Challenge 3: Richer Handling of Contrastives The explainee may offer contrastive claims
that are, under thorough analysis, not really incompatible with the offered claim. A good
explainer should catch such cases and provide an argument concerning the compatibility
of the claims. For this, more proof-theoretic resources have to be developed. In such
cases, the explainee should be able to withdraw or replace the contrast.
Challenge 4: Richer Deontic Vocabulary Often, normative codes are richer than the ones
studied here, e.g., they may contain priority orderings over norms and permissive norms.
These come with challenges, for instance concerning reinstatement (e.g., permissions
generally do not reinstate obligations). Dialogues ideally accommodate such complexity.
Challenge 5: Casuistry In many application contexts of ethical (e.g., bioethics) and legal
reasoning, we find case-based reasoning towards obligations, rights, and
permissions. Deontic explanations of such conclusions need a different conceptual base
than the one provided here, posing their own specific challenges (e.g., balancing reasons).
Acknowledgements. This work was partially funded by the "Logical Methods of Deontic
Explanations" (LoDeX) project, Deutsche Forschungsgemeinschaft, Project number 511915728.
References
[1] T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial
Intelligence 267 (2019) 1–38.
[2] O. Arieli, A. Borg, J. Heyninck, C. Straßer, Logic-based approaches to formal argumen-
tation, in: D. Gabbay, M. Giacomin, G. R. Simari, M. Thimm (Eds.), Handbook of Formal
Argumentation, Volume 2, College Publications, 2021, pp. 1793–1898.
[3] D. Makinson, L. van der Torre, Constraints for Input/Output logics, Journal of Philosophical
Logic 30 (2001) 155–185.
[4] J. F. Horty, Reasons as defaults, Oxford University Press, 2012.
[5] K. van Berkel, C. Straßer, Reasoning with and about norms in logical argumentation, in:
Proceedings of COMMA 2022, volume 353, IOS Press, 2022, pp. 332–343.
[6] K. van Berkel, C. Straßer, Z. Zhou, Towards an argumentative unification of default
reasoning, in: Proceedings of COMMA 2024, IOS Press, 2024, p. TBA.
[7] K. Čyras, A. Rago, E. Albini, P. Baroni, F. Toni, Argumentative XAI: A survey, 2021.
[8] L. Amgoud, N. Maudet, S. Parsons, Modelling dialogues using argumentation, in: Proceed-
ings of the Fourth International Conference on MultiAgent Systems, IEEE, 2000, pp. 31–38.
[9] H. Prakken, Coherence and flexibility in dialogue games for argumentation, Journal of
Logic and Computation 15 (2005) 1009–1040.
[10] S. Modgil, M. Caminada, Proof theories and algorithms for abstract argumentation frame-
works, in: Argumentation in Artificial Intelligence, Springer, 2009, pp. 105–129.
[11] D. Gabbay, J. F. Horty, X. Parent, R. van der Meyden, L. van der Torre, Handbook of Deontic
Logic and Normative Systems, Volume 1, College Publications, United Kingdom, 2013.
[12] O. Arieli, K. van Berkel, C. Straßer, Defeasible normative reasoning: A proof-theoretic
integration of logical argumentation, in: Proceedings of AAAI 2024, 2024, pp. 10450–10458.
[13] P. M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic
reasoning, logic programming and n-person games, Artificial Intelligence 77 (1995) 321–357.
[14] P. Lipton, Contrastive explanation, Royal Institute of Philosophy Supplements 27 (1990) 247–266.
[15] U. D. Leibowitz, Scientific explanation and moral explanation, Noûs 45 (2011) 472–503.
[16] M. Scriven, Explanations, predictions, and laws, in: Minnesota Studies in the Philosophy
of Science, Vol. 3, University of Minnesota Press, Minneapolis, 1962, pp. 170–230.
[17] P. Väyrynen, Normative explanation and justification, Noûs 55 (2021) 3–22.
[18] R. H. Johnson, Manifest rationality: A pragmatic theory of argument, Routledge, 2000.