=Paper= {{Paper |id=Vol-3768/paper8 |storemode=property |title=Towards Deontic Explanations Through Dialogue |pdfUrl=https://ceur-ws.org/Vol-3768/paper8.pdf |volume=Vol-3768 |authors=Kees van Berkel,Christian Strasser |dblpUrl=https://dblp.org/rec/conf/comma/0002S24 }} ==Towards Deontic Explanations Through Dialogue== https://ceur-ws.org/Vol-3768/paper8.pdf
Towards Deontic Explanations Through Dialogue

Kees van Berkel¹, Christian Straßer²
¹ Institute for Logic and Computation, TU Wien, Austria
² Institute of Philosophy II, Ruhr University Bochum, Germany


Abstract
Deontic explanations answer why-questions concerning agents' obligations and permissions. Normative systems are notoriously conflict-sensitive, making contrastive explanations pressing: "Why am I obliged to do φ, despite my (seemingly) conflicting obligation to do ψ?" In this paper, we develop a model of contrastive explanatory dialogues for the well-established defeasible reasoning formalism Input/Output logic. Our model distinguishes between successful, semi-successful, and unsuccessful deontic dialogues. We prove that the credulous and skeptical (under shared reasons) entailment relations of Input/Output logic can be characterized in formal argumentation using preferred and grounded semantics, respectively. This result allows us to leverage known results for dialogue models of the latter two semantics. Since this work is the first of its kind, we discuss five key challenges for deontic explanations through dialogue.

Keywords
Defeasible normative reasoning, Contrastive deontic explanations, Logical argumentation, Dialogues




1. Introduction
Norms are indispensable in many aspects of society, ranging from law and ethics to business protocols and AI. They motivate, guide, and regulate agents, whether human or artificial. Often, agents affected by norms not only need to know that they are bound by obligations or that they may appeal to rights: they need to understand why. Such understanding may enhance compliance and collaboration and is especially pressing when conflicts between norms arise. For instance, I may want to know why I may overtake on the left, despite being obliged to drive on the right. Here, a good explanation not only establishes that I am permitted, but also why the obligation to the contrary does not currently apply: the permission is an exception to the obligation. Answers to this type of why-question are called deontic explanations.
   Deontic logic is the well-established field exploring formal methods to model normative reasoning. However, the focus has been almost exclusively on formal systems that determine which obligations and permissions can be inferred from a normative system, rather than on explaining why. This gap is remarkable, especially given the increasingly vital role that normative systems play in alignment and compliance requirements for AI. This paper investigates how knowledge representation methods can be used to generate explanatory deontic dialogues.
   The demand for explanatory models in AI is increasing [1], and formal argumentation provides a promising method in this respect. First of all, formal argumentation has proven to be a unifying framework for nonmonotonic reasoning [2]. In particular, two central paradigms of defeasible

ArgXAI-24: 2nd International Workshop on Argumentation for eXplainable AI
kees.van.berkel@tuwien.ac.at (K. v. Berkel); christian.strasser@rub.de (C. Straßer)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073





reasoning, constrained Input/Output (I/O) logic [3] and default logic [4], can be argumentatively characterized [5, 6]. Second, a wide variety of methods has been proposed in Argumentation for Explainable AI (ArgXAI, [7]). Finally, dialogue models and argumentation games [8, 9, 10] offer dynamic characterizations of formal argumentation that have the potential to yield interactive (or even tailor-made) explanatory episodes through dialogues.
   Once a given nonmonotonic logic, such as I/O or default logic, is represented in logical argumentation, dialogical methods can be leveraged for explanatory purposes. However, a first obstacle in this respect is that most characterization results are shown with respect to stable semantics (including [5, 6]; see also [2]), whereas other semantics, such as preferred, admissible, and grounded, are more suitable for dialogical generalization. In brief, the problem with stable extensions is that they reference the entire set of arguments (each argument is either 'in' or 'out'), while we expect explanatory dialogues to focus on reasons relevant to the explanatory purpose. Furthermore, defining dialogue models and argumentation games for skeptical reasoning is challenging (in the context of multi-extension semantics such as preferred; cf. [10]).
   Contributions. We provide dialogue models for one of the central defeasible normative reasoning formalisms in the literature: Input/Output logic [3, 11]. Unfortunately, the original formalism does not naturally lend itself to explanatory reasoning. Recently, a highly modular rule-based proof system – the Deontic Argumentation Calculus (DAC) – was developed with the aim of making I/O suitable for explanatory purposes, and it was shown that DAC-induced argumentation frameworks are sound and complete for a large class of constrained I/O logics [5] and default logic [6]. Despite these promising results, the correspondences were only obtained for stable semantics, making them seemingly unsuitable for dialogical deontic explanations.
   In this article, we extend these results with a model of dialogue episodes for deontic explanations:
  (1) As a preparatory step, we first prove that for DAC-induced argumentation frameworks the stable and the preferred semantics coincide. This allows us to use well-developed preferred dialogue models in the context of I/O reasoning.
  (2) Furthermore, we lift recent results [12] showing that skeptical entailment restricted to the 'free consequences' under stable semantics is identical to entailment under the grounded semantics. In other words, we may also use grounded dialogue models for I/O reasoning.
  (3) Using (1) and (2), we enhance dialogue models and define contrastive deontic explanations that explain certain obligations in contrast to seeming obligations to the contrary.
  Outline. Section 2 introduces the DAC formalism. In Section 3, we define DAC-induced
argumentation frameworks and prove that stable equals preferred and the free consequences
correspond to grounded entailment. We harness these results to specify contrastive dialogue
models in Section 4. This paper lays the foundations for a more extensive study of dialogue
models of deontic explanation and, for this reason, we discuss five key challenges in Section 5.


2. Preliminaries: A Deontic Argumentation Calculus (DAC)
We recall the basics of the Deontic Argumentation Calculus (DAC). Although the results in this paper hold for a range of languages, base logics, and DAC systems, for readability we assume a propositional language ℒ and classical logic L, and illustrate our approach for one DAC system from [5]. To enhance explainability, ℒ is labeled and augmented with a language of norms:

Labeled propositional languages: ℒ_i = {φ^i | φ ∈ ℒ} where i ∈ {f, o, c}.

Norm languages: ℒ_n = {(φ, ψ) | φ, ψ ∈ ℒ} and ℒ_¬n = {¬Δ | ∅ ⊂ Δ ⊆ ℒ_n, Δ is finite}.

Normative systems: 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ is a normative system, where ℱ ⊆ ℒ_f is a factual context, 𝒩 ⊆ ℒ_n a normative code, and 𝒞 ⊆ ℒ_c a set of constraints (and ℱ and 𝒞 are L-consistent).

Labels explicate the roles that propositional formulas adopt in the reasoning process: φ^f denotes that φ is a fact, φ^o that φ is obligatory, and φ^c that obligations must be consistent with φ. We take (φ, ψ) ∈ ℒ_n to express the norm "given φ, it is obligatory that ψ" and ¬Δ ∈ ℒ_¬n is read as "the norms in Δ are jointly inapplicable." For ¬{(φ, ψ)}, we simply write ¬(φ, ψ). The latter type of expression plays an essential role in defeasible reasoning with norms. The entire enhanced I/O language is defined as the union ℒ_io = ℒ_f ∪ ℒ_o ∪ ℒ_c ∪ ℒ_n ∪ ℒ_¬n. We write Γ^i, Δ^i, ... for finite sets of i-labeled formulas, where i ∈ {f, o, c}. We write Γ, Δ, ... for any finite subset of ℒ_io and Δ↓ for a set Δ ⊆ ℒ_i stripped from its label i ∈ {f, o, c}.
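A normative system 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ is plain data and can be encoded directly. The following sketch is an illustrative assumption of ours, not part of [5]: facts and constraints are formula strings, and each norm (φ, ψ) is a (body, head) pair.

```python
from typing import NamedTuple

class NormativeSystem(NamedTuple):
    facts: frozenset        # ℱ ⊆ ℒ_f, the factual context
    norms: frozenset        # 𝒩 ⊆ ℒ_n, conditional norms (φ, ψ)
    constraints: frozenset  # 𝒞 ⊆ ℒ_c, formulas the output must be consistent with

# A toy system: fact p, norms "given p, obligatory q" and "given q, obligatory r",
# and the constraint that obligations be consistent with ¬r (written "-r").
K = NormativeSystem(
    facts=frozenset({"p"}),
    norms=frozenset({("p", "q"), ("q", "r")}),
    constraints=frozenset({"-r"}),
)
```

The immutable `frozenset` fields reflect that 𝒦 is a fixed reasoning context; the dialogue machinery later only reads from it.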
   Defeasible normative reasoning occurs with respect to a normative system 𝒦. The basic idea of I/O reasoning [3] and DAC is that facts (input) trigger norms from which obligations (output) are detached, where the constraints filter the output to ensure consistency. Our aim is to construct arguments from 𝒦. Our approach belongs to logical argumentation (a subfield of structured argumentation [2]). We write arguments as sequents: a = Γ ⇒ φ, where prem(a) = Γ is a (possibly empty) set of premises, and conc(a) = φ is the conclusion of the argument. An explanatory argument is an argument stating reasons for a conclusion. We take facts, constraints, and norms as reasons and differentiate two types of argument:

    φ^f, (φ, ψ) ⇒ ψ^o     and     φ^f, ¬ψ^c ⇒ ¬(φ, ψ)

The first type (left) contains arguments providing reasons for obligations, where the fact φ^f and norm (φ, ψ) provide reasons for the obligation ψ^o. The second type (right) contains arguments that attack reasons, expressing which norms are inapplicable in the given context: given φ^f, the norm (φ, ψ) is inapplicable since its detachable obligation is inconsistent with the constraint ¬ψ^c. The latter type attacks all arguments using (φ, ψ) as a reason.
   A DAC is a sequent-style, that is, rule-based, proof system for deriving these two types of argument [5]. We assume that LC is the sound and complete sequent calculus for L.

Deontic Argumentation Calculus (DAC): Let DAC be a system consisting of the rules Ax, FDet, DDet, Con, Ina, InaC, Taut, and Cut (Figure 1). A DAC-derivation of Γ ⇒ Δ is a tree-like structure whose leaves are initial sequents, whose root is Γ ⇒ Δ, and whose rule applications are instances of the rules of DAC. We say Γ ⇒ Δ is DAC-derivable (written ⊢_DAC Γ ⇒ Δ) whenever there exists a DAC-derivation for it, Γ ⊆ ℒ_io, and Δ ⊆ ℒ_io contains at most one formula. We say Γ ⇒ Δ is 𝒦-based whenever Γ ⊆ ℱ ∪ 𝒩 ∪ 𝒞.

   There are three initial sequent rules: Ax introduces labeled versions of any classically derivable Γ ⇒ Δ to a DAC-derivation (and so LC rules are not part of DAC). Taut guarantees that all propositional tautologies are among the output. FDet expresses factual detachment and gives an initial explanatory argument stating that the fact φ^f and the norm (φ, ψ) are reasons for concluding the obligation ψ^o. DDet corresponds to deontic detachment and makes it possible





  Ax:   from ⊢_LC Γ ⇒ Δ infer Γ^i ⇒ Δ^i, where i ∈ {f, o, c} and Γ, Δ ⊆ ℒ
  Taut: ⇒ (⊤, ⊤)
  FDet: φ^f, (φ, ψ) ⇒ ψ^o
  DDet: from φ^f, Γ ⇒ Δ infer φ^o, Γ ⇒ Δ   (a)
  Con:  from Γ ⇒ φ^o infer Γ, (¬φ)^c ⇒
  Ina:  from Γ, (φ, ψ) ⇒ infer Γ ⇒ ¬(φ, ψ)
  InaC: from Γ ⇒ infer Γ ∖ ℒ_n ⇒ ¬(Γ ∩ ℒ_n)
  Cut:  from Γ ⇒ φ and φ, Γ′ ⇒ Δ infer Γ, Γ′ ⇒ Δ   (b)

Figure 1: A Deontic Argumentation Calculus (DAC). Ax, Taut, and FDet are initial sequent rules. Side condition (a) on DDet stipulates Γ ∩ ℒ_n ≠ ∅; side condition (b) on Cut requires that φ ∈ ℒ_io.


that a norm may be triggered by obligations detached from other norms (see Ex. 1). The rules Con, Ina, and InaC deal with the defeasibility of normative reasoning and yield attacking arguments. The Con rule expresses the consistency constraint that if Γ constitutes reasons for φ^o, then Γ is inconsistent with the constraint ¬φ^c (where an empty right-hand side denotes inconsistent reasons). We also refer to Γ ⇒ as an inconsistent argument. When an argument expresses inconsistent reasons, at least one of its involved norms is inapplicable (Ina) and all involved norms are jointly inapplicable (InaC). We refer to [5] for other DAC systems.
Example 1. We look at Chisholm's scenario [11], an archetype of contrary-to-duty reasoning. Billie is obligated to go and help her neighbors (⊤, h) (⊤ denotes that h is detached by default). If Billie goes to help, she must tell the neighbors she goes (h, t); otherwise, she ought not to tell them she goes (¬h, ¬t). Suppose that Billie does not go and help ¬h^f and, so, violates the default duty in (⊤, h). To know what Billie must do in light of her violation ¬h^f, the constraint is imposed that the obligations must be consistent with the fact that Billie does not help ¬h^c [5, 11]. Let ℱ = {¬h^f}, 𝒩 = {(⊤, h), (h, t), (¬h, ¬t)}, and 𝒞 = {¬h^c} be the normative system 𝒦. The desired outcome is that Billie ought not to tell the neighbors she goes ¬t^o, given that she does not go.
   Argument d (below left), stating that Billie ought to tell, is derived with deontic detachment. Argument e (below right) expresses the inapplicability of (⊤, h) given the imposed constraint. Similar reasoning gives the inconsistent argument x = ¬h^f, (⊤, h), (h, t), (¬h, ¬t) ⇒ , which with Con, Cut, and InaC derives the unattackable x′ = ¬h^f ⇒ ¬{(⊤, h), (h, t), (¬h, ¬t)}.
  d:  from ⊤^f, (⊤, h) ⇒ h^o (FDet) and h^f, (h, t) ⇒ t^o (FDet), DDet yields h^o, (h, t) ⇒ t^o; Cut then gives d = ⊤^f, (⊤, h), (h, t) ⇒ t^o.
  e:  from ⊤^f, (⊤, h) ⇒ h^o (FDet), Con yields ⊤^f, (¬h)^c, (⊤, h) ⇒ , and Ina gives e = ⊤^f, (¬h)^c ⇒ ¬(⊤, h).
   For the sake of completeness, we recall the I/O system out₃ here and some known results [5].
Proposition 1 ([5]). Let 𝒦↓ = ⟨ℱ, 𝒩, 𝒞⟩ be 𝒦 stripped from its labels. Let Δ ⊆ ℒ, Cn(Δ) = {φ | Δ ⊢_L φ}, and out(𝒩, Δ) = Cn({φ | (ψ, φ) ∈ 𝒩 and ψ ∈ Cn(Δ)}). Let out₃(𝒩, ℱ) = ⋃_{i≥0} O_i, where O₀ = out(𝒩, ℱ) and O_{i+1} = Cn(O_i ∪ out(𝒩, O_i ∪ ℱ)). In words, out₃ is a closure of 𝒩 under successive (deontic) detachment with respect to ℱ. Then 𝒩 ⊆ ℒ_n is 𝒞-consistent in 𝒦 if ⊥ ∉ Cn(out₃(𝒩, ℱ) ∪ 𝒞). Let Θ ⊆ 𝒩 ⊆ ℒ_n, Δ ⊆ ℱ ⊆ ℒ, and Ω ⊆ 𝒞 ⊆ ℒ; we have:
   1. φ ∈ out₃(Θ, Δ) iff ⊢_DAC Θ, Δ^f ⇒ φ^o;
   2. Θ is 𝒞-inconsistent iff there are Δ ⊆ ℱ and Ω ⊆ 𝒞 for which ⊢_DAC Θ, Δ^f, Ω^c ⇒ ;
   3. ⊥ ∈ Cn(out₃(Θ, Δ) ∪ Ω) iff for all (φ, ψ) ∈ Θ, ⊢_DAC Θ ∖ {(φ, ψ)}, Δ^f, Ω^c ⇒ ¬(φ, ψ).
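The fixed-point definition of out₃ can be sketched operationally. The snippet below is a deliberately simplified model of ours, not the system of [5]: norms relate single literals given as strings, "TOP" stands for ⊤, and the classical closure Cn is dropped, so detachment is purely syntactic.

```python
def out(norms, inputs):
    """One round of detachment: heads of norms whose body is triggered."""
    triggered = set(inputs) | {"TOP"}  # "TOP" plays the role of the tautology ⊤
    return {head for body, head in norms if body in triggered}

def out3(norms, facts):
    """Iterated (deontic) detachment: detached obligations may in turn
    trigger further norms, mirroring O_{i+1} = O_i ∪ out(N, O_i ∪ F)."""
    O = out(norms, facts)
    while True:
        new = O | out(norms, O | set(facts))
        if new == O:
            return O
        O = new

# Chisholm's scenario of Ex. 1: F = {¬h}, N = {(⊤,h), (h,t), (¬h,¬t)},
# with ¬h written as "-h".
norms = [("TOP", "h"), ("h", "t"), ("-h", "-t")]
print(out3(norms, {"-h"}))  # {'h', 't', '-t'} (up to set order)
```

On this input the unconstrained output contains both t and ¬t, which is exactly why the constraints 𝒞 are needed to filter the output.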





3. Formal Argumentation with DAC-arguments
We use formal argumentation [13] to capture the defeasibility of normative reasoning and to explicate norm conflicts [5]. An Argumentation Framework [13] contains a set of arguments and an attack relation between arguments, where semantics stipulate conditions under which sets of arguments are jointly acceptable. We instantiate such frameworks with DAC-arguments.

DAC-induced Argumentation Frameworks: Let 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ be a normative system. A DAC-induced argumentation framework 𝒜ℱ(𝒦) = ⟨Arg, Att⟩ is defined as follows:
      • Δ ⇒ Γ ∈ Arg iff Δ ⇒ Γ is DAC-derivable and 𝒦-based.
      • a defeats b, i.e., (a, b) ∈ Att ⊆ Arg × Arg, iff conc(a) = ¬Δ ∈ ℒ_¬n and Δ ⊆ prem(b).
   We write Arg(Σ) = {a | ⊢_DAC a and prem(a) ⊆ Σ}.
Argumentative Semantics and Entailment: Let ⟨Arg, Att⟩ be an 𝒜ℱ and let 𝒮 ⊆ Arg: 𝒮 defeats an argument a ∈ Arg if there is a b ∈ 𝒮 that defeats a; and 𝒮 defends a if 𝒮 defeats every argument that defeats a. Let Defended(𝒮) be the set of arguments defended by 𝒮.
   We recall the following semantic definitions [13]: 𝒮 is conflict-free if it does not defeat any of its own elements; 𝒮 is admissible if it is conflict-free and defends all b ∈ 𝒮; 𝒮 is preferred if it is maximally admissible; 𝒮 is stable if it is conflict-free and defeats all b ∈ Arg ∖ 𝒮; 𝒮 is grounded if 𝒮 = ⋃_{i≥0} 𝒢_i, where 𝒢₀ = ∅ and 𝒢_{i+1} = Defended(𝒢_i).
   Let sem ∈ {admissible, preferred, stable, grounded}; we define two entailment relations:
      • 𝒜ℱ |∼^∩rea_sem φ iff there is an a contained in every sem-extension that concludes φ;
      • 𝒜ℱ |∼^∪_sem φ iff there is a sem-extension ℰ for which there is an a ∈ ℰ concluding φ.

   |∼^∩rea_sem captures the arguments (shared reasons) shared by all sem-extensions. The resulting conclusions are called the free consequences of 𝒦, which are obligations from unproblematic norms compatible with any sem-extension. Credulous entailment |∼^∪_sem captures the existence of reasons in favor of a conclusion in some sem-extension, expressing a defensible stance.
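These semantics can be made concrete with a brute-force sketch over an abstract framework. The toy framework at the bottom is our own illustration, not an example from the paper; for small Arg, exhaustive enumeration is entirely feasible.

```python
from itertools import chain, combinations

def subsets(xs):
    xs = sorted(xs)
    return (set(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1)))

def conflict_free(S, att):
    return not any((a, b) in att for a in S for b in S)

def defended_by(S, args, att):
    """All arguments whose every defeater is itself defeated by S."""
    return {a for a in args
            if all(any((c, b) in att for c in S)
                   for b in args if (b, a) in att)}

def grounded(args, att):
    """Least fixed point: G_0 = {}, G_{i+1} = Defended(G_i)."""
    G = set()
    while (G2 := defended_by(G, args, att)) != G:
        G = G2
    return G

def stable(args, att):
    return [S for S in subsets(args) if conflict_free(S, att)
            and all(any((a, b) in att for a in S) for b in set(args) - S)]

def preferred(args, att):
    adm = [S for S in subsets(args) if conflict_free(S, att)
           and S <= defended_by(S, args, att)]
    return [S for S in adm if not any(S < T for T in adm)]

# Toy framework: a and b defeat each other, and both defeat c.
args = {"a", "b", "c"}
att = {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}
print(grounded(args, att))                     # set(): no unattacked argument
print(sorted(map(sorted, stable(args, att))))  # [['a'], ['b']]
```

The toy framework illustrates the multi-extension issue discussed above: stable and preferred each yield two extensions, while the (unique) grounded extension is empty.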

Example 2. The partial 𝒜ℱ in Figure 2 captures the scenario from Ex. 1. There is only one stable extension, {a, b, g, x′} (the arrows from b, e, f, and g to x are implicit), which is also the grounded extension (cf. Prop. 3.2 below). We may thus conclude 𝒜ℱ |∼ (¬t)^o (where |∼ ∈ {|∼^⋆_sem | ⋆ ∈ {∩rea, ∪} and sem ∈ {stable, grounded}}). As desired, since Billie does not go to help her neighbors, she ought not to tell them she is coming. Billie may now ask: "Why am I obliged not to tell my neighbors, despite my seeming duty to tell them I am coming to help?" To this we turn next.
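The grounded extension of this framework can be recomputed mechanically. The sketch below uses our own encoding (premises as sets of norm pairs and labeled literals written as strings): Att is derived from the defeat condition conc(a) = ¬Δ with Δ ⊆ prem(b), and the Defended fixed point is then iterated.

```python
# Norms of Ex. 1; "TOP" stands for ⊤, "-h_f"/"-h_c" for (¬h)^f/(¬h)^c.
n1, n2, n3 = ("TOP", "h"), ("h", "t"), ("-h", "-t")
# Arguments of Fig. 2 as (premises, conclusion); a frozenset conclusion
# encodes ¬Δ, and None encodes the empty (inconsistent) right-hand side.
args = {
    "a":  (frozenset({"-h_f", n3}), "-t_o"),
    "b":  (frozenset({"-h_c"}), frozenset({n1})),
    "c":  (frozenset({n1}), "h_o"),
    "d":  (frozenset({n1, n2}), "t_o"),
    "e":  (frozenset({"-h_f", n1, n2}), frozenset({n3})),
    "f":  (frozenset({"-h_f", n1, n3}), frozenset({n2})),
    "g":  (frozenset({"-h_f", n2, n3}), frozenset({n1})),
    "x":  (frozenset({"-h_f", n1, n2, n3}), None),
    "x'": (frozenset({"-h_f"}), frozenset({n1, n2, n3})),
}
# (u, v) ∈ Att iff conc(u) = ¬Δ and Δ ⊆ prem(v).
att = {(u, v) for u, (_, cu) in args.items() for v, (pv, _) in args.items()
       if isinstance(cu, frozenset) and cu <= pv}

def defended_by(S):
    return {a for a in args
            if all(any((c, b) in att for c in S)
                   for b in args if (b, a) in att)}

G = set()
while (G2 := defended_by(G)) != G:  # grounded fixed point
    G = G2
print(sorted(G))  # ['a', 'b', 'g', "x'"]
```

The fixed point is reached after two iterations (𝒢₁ = {b, x′}, the unattacked arguments, then 𝒢₂ = {a, b, g, x′}), matching the unique stable extension of Ex. 2.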

   We recall [5, Theorem 2] that, for the system adopted in this paper, DAC-induced 𝒜ℱs are sound and complete for the system out₃ of constrained Input/Output logic [3].

Proposition 2. Let 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ and let maxfam(𝒦) = {𝒩′ ⊆ 𝒩 | ⊥ ∉ Cn(out₃(𝒩′, ℱ↓) ∪ 𝒞↓) and for each 𝒩′ ⊂ 𝒩″, ⊥ ∈ Cn(out₃(𝒩″, ℱ↓) ∪ 𝒞↓)} be the set of maximal consistent sets of norms over 𝒦. Let 𝒜ℱ be induced by DAC and 𝒦 with the set of stable extensions stable(𝒜ℱ):
   1. If 𝒩′ ∈ maxfam(𝒦), then Arg(ℱ ∪ 𝒩′ ∪ 𝒞) ∈ stable(𝒜ℱ);
   2. If 𝒜 ∈ stable(𝒜ℱ), then there is an 𝒩′ ∈ maxfam(𝒦) for which 𝒜 = Arg(ℱ ∪ 𝒩′ ∪ 𝒞).







                                                              (ยฌโ„Ž)๐‘
                                                        [๏ธ‚               ]๏ธ‚                  [๏ธ‚                    ]๏ธ‚             [๏ธ‚            ]๏ธ‚
              (ยฌโ„Ž)๐‘“ , (ยฌโ„Ž, ยฌ๐‘ก)                                                                    (โŠค, โ„Ž), (โ„Ž, ๐‘ก)                       (โŠค, โ„Ž)
        [๏ธ‚                         ]๏ธ‚
 ๐‘Ž=                                              ๐‘=                                     ๐‘‘=                                   ๐‘=
                 โ‡’ (ยฌ๐‘ก)๐‘œ                                     โ‡’ ยฌ(โŠค, โ„Ž)                                โ‡’ ๐‘ก๐‘œ                             โ‡’ โ„Ž๐‘œ


                  (ยฌโ„Ž)๐‘“ , (โŠค, โ„Ž), (โ„Ž, ๐‘ก)                                           (ยฌโ„Ž)๐‘“ , (โŠค, โ„Ž), (ยฌโ„Ž, ยฌ๐‘ก)
             [๏ธ‚                            ]๏ธ‚                                 [๏ธ‚                                        ]๏ธ‚
   ๐‘’=                                                                    ๐‘“=
                     โ‡’ ยฌ(ยฌโ„Ž, ยฌ๐‘ก)                                                          โ‡’ ยฌ(โ„Ž, ๐‘ก)



                                                     (ยฌโ„Ž)๐‘“ , (โ„Ž, ๐‘ก), (ยฌโ„Ž, ยฌ๐‘ก)
                                                [๏ธ‚                                 ]๏ธ‚
    ๐‘ฅ                     ๐‘ฅโ€ฒ            ๐‘”=
                                                          โ‡’ ยฌ(โŠค, โ„Ž)


Figure 2: DAC-induced ๐’œโ„ฑ of Ex. 1. Arrows denote defeats relative to the constraint ๐’ž = {(ยฌโ„Ž)๐‘ }.


   Our aim is to employ dialogue models for contrastive deontic explanations, and for this we need some additional results. Since the grounded extension is unique [13], |∼^∩rea_grounded and |∼^∪_grounded coincide and we simply write |∼_grounded. The proofs below do not reference specific DAC-rules (outside the base system [5, 12]) and thus generalize to all DAC systems in [5] (augmented with InaC). Proposition 3 tells us that for reasoning about the free consequences under the stable semantics, it suffices to reason with the grounded semantics. Proposition 4 shows that the preferred and stable semantics coincide for DAC. Consequently, these propositions allow us to apply well-developed dialogue techniques to DAC for skeptical (in terms of free consequences) and credulous reasoning under the grounded, respectively the preferred, semantics.

Proposition 3. Let 𝒜ℱ be a DAC-induced 𝒜ℱ for 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ and let stb(𝒜ℱ) and grd(𝒜ℱ) be the set of stable extensions, respectively the grounded extension, of 𝒜ℱ:
   1. a ∈ ⋂ stb(𝒜ℱ) iff every defeater b ∈ Arg of a is 𝒞-inconsistent (i.e., it is defeated by an argument c ∈ Arg(ℱ ∪ 𝒞)).
   2. grd(𝒜ℱ) = ⋂ stb(𝒜ℱ) = 𝒢₂ = Defended(Arg(ℱ ∪ 𝒞)) (and so |∼_grounded = |∼^∩rea_stable).¹

Proof. Ad 1. Let a = Δ₁^f, Θ₁, Γ₁^c ⇒ Σ ∈ Arg(𝒦). Left-to-Right. Consider a defeater b = Δ₂^f, Θ₂, Γ₂^c ⇒ ¬(φ, ψ) of a. By Proposition 1, Θ₂ ∪ {(φ, ψ)} is 𝒞-inconsistent in 𝒦. Since a ∈ ⋂ stb(𝒜ℱ) and (φ, ψ) ∈ prem(a), by Proposition 2, Θ₂ is not contained in a consistent set of norms in 𝒦 and it is therefore inconsistent. By Proposition 1, there are Δ₃^f ∪ Γ₃^c ⊆ ℱ ∪ 𝒞 such that Δ₃^f, Γ₃^c ⇒ ¬Θ₂ defeats b. Right-to-Left. It is easy to see that ⋂ stb(𝒜ℱ) contains every argument it defends. Suppose now that b = Γ ⇒ Δ is 𝒞-inconsistent. By Proposition 1, there is a c = Ω ⇒ ¬(Γ ∩ ℒ_n) that defeats b and for which Ω ∩ ℒ_n = ∅. Since c has no defeaters, c ∈ ⋂ stb(𝒜ℱ). So, ⋂ stb(𝒜ℱ) defends a and therefore a ∈ ⋂ stb(𝒜ℱ).
   Ad 2. Left-to-Right. Straightforward. We show Right-to-Left. Let a ∈ ⋂ stb(𝒜ℱ). By Item 1, a is defended by Arg(ℱ ∪ 𝒞). Clearly, Arg(ℱ ∪ 𝒞) ⊆ 𝒢₁ ⊆ grd(𝒜ℱ), since arguments in this set do not have defeaters. So, a ∈ 𝒢₂ ⊆ grd(𝒜ℱ).
    ¹ Recall that grd(𝒜ℱ) = ⋃_{i≥0} 𝒢_i, where 𝒢₀ = ∅ and 𝒢_{i+1} = Defended(𝒢_i). The proposition states the computationally interesting result that the fixed-point construction of the grounded extension terminates on the second iteration.





Proposition 4. Let 𝒜ℱ be a DAC-induced 𝒜ℱ for 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ and let 𝒜 ⊆ Arg: 𝒜 is a stable extension iff 𝒜 is a preferred extension (and so |∼^⋆_preferred = |∼^⋆_stable, for ⋆ ∈ {∪, ∩rea}).

Proof. Left-to-Right. Lemma 15 in [13]. Right-to-Left. Suppose not; then there is an a ∈ Arg(𝒦) ∖ 𝒜 but no b ∈ 𝒜 for which (b, a) ∈ Att. Since 𝒜 is preferred, it is conflict-free. By Proposition 2, 𝒩′ = {(φ, ψ) | (φ, ψ) ∈ Δ ∩ ℒ_n and Δ ⇒ Γ ∈ 𝒜} is 𝒞-consistent, and so 𝒩′ ⊆ 𝒩″ for some 𝒩″ ∈ maxfam(𝒦) and there is a stable extension 𝒜′ = Arg(ℱ ∪ 𝒩″ ∪ 𝒞). Clearly, Arg(ℱ ∪ 𝒩′ ∪ 𝒞) ⊆ Arg(ℱ ∪ 𝒩″ ∪ 𝒞) and so 𝒜 is not maximally admissible. Contradiction.
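Proposition 4 can be spot-checked by brute force on the framework of Ex. 2, using the same illustrative encoding as before (our own, not from the paper); with nine arguments there are only 2⁹ candidate sets.

```python
from itertools import chain, combinations

# Premises (norm pairs and labeled literals) and ¬Δ-conclusions of Fig. 2.
n1, n2, n3 = ("TOP", "h"), ("h", "t"), ("-h", "-t")
prem = {"a": {"-h_f", n3}, "b": {"-h_c"}, "c": {n1}, "d": {n1, n2},
        "e": {"-h_f", n1, n2}, "f": {"-h_f", n1, n3}, "g": {"-h_f", n2, n3},
        "x": {"-h_f", n1, n2, n3}, "x'": {"-h_f"}}
conc = {"b": {n1}, "e": {n3}, "f": {n2}, "g": {n1}, "x'": {n1, n2, n3}}
att = {(u, v) for u in conc for v in prem if conc[u] <= prem[v]}
A = set(prem)

def subsets(xs):
    xs = sorted(xs)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1)))

def cf(S):  # conflict-free
    return not any((a, b) in att for a in S for b in S)

def admissible(S):
    return cf(S) and all(any((c, b) in att for c in S)
                         for a in S for b in A if (b, a) in att)

adm = [S for S in subsets(A) if admissible(S)]
preferred = [S for S in adm if not any(S < T for T in adm)]
stable = [S for S in subsets(A)
          if cf(S) and all(any((a, b) in att for a in S) for b in A - S)]
print(set(preferred) == set(stable))  # True: both equal {{a, b, g, x'}}
```

As the proposition predicts, the enumeration yields exactly one preferred extension and exactly one stable extension, and they coincide.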


4. Dialogues and Contrastive Explanations
We now provide dialogue models for contrastive deontic explanations. A contrastive explanatory dialogue starts with a command "φ^o!" issued by the explainer, immediately followed by the explainee asking a question of the form: "Why φ^o, despite ψ^o?"
   Due to the conflict sensitivity of normative systems [3, 11], we consider contrastive why-questions as the starting point of explanatory episodes [14]. In what follows, we refer to φ^o as the claim and to ψ^o as the counter-claim.² We do not assume that φ^o and ψ^o are derivable from the given normative system 𝒦, nor do we assume that there is a dialectical relation between φ^o and ψ^o, referred to as the contrastive link. Both must become explicit (if existent) through the dialogue itself. We say there exists a contrastive link when there exist two arguments a and b concluding φ^o, respectively ψ^o, which are incompatible, meaning that there is no stable extension containing both. In such a case, an incompatibility argument can be provided using the premises in a and b:

Proposition 5. Let 𝒦 = ⟨ℱ, 𝒩, 𝒞⟩ and 𝒜ℱ(𝒦) = ⟨Arg, Att⟩. For any two arguments a = Δ ⇒ φ^o, b = Γ ⇒ ψ^o ∈ Arg: there is no stable extension 𝒮 with a, b ∈ 𝒮 iff there is a DAC-derivable argument Δ, Γ, Ω ⇒ with Ω ⊆ 𝒞 (we call a and b 𝒞-incompatible).

Proof. Left-to-Right. Let 𝒩′ = (Γ ∪ Δ) ∩ ℒ_n and ℱ′ = {φ | φ^f ∈ Γ ∪ Δ}. By Prop. 2, there is no ℳ ∈ maxfam(𝒦) with 𝒩′ ⊆ ℳ. So, there is an Ω ⊆ 𝒞 for which ⊥ ∈ Cn(out₃(𝒩′, ℱ′) ∪ Ω). By Prop. 1 and a Cut application, ⊢_DAC 𝒩′, ℱ′, Ω ⇒ . Right-to-Left. Straightforward.
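At the literal level, the 𝒞-incompatibility test of Prop. 5 can be approximated as follows. This is a simplification of ours, not the paper's calculus: there is no full Cn, literals are strings with "-" for negation, and "TOP" stands for ⊤.

```python
def neg(l):
    return l[1:] if l.startswith("-") else "-" + l

def detached(norms, facts):
    """Iterated (deontic) detachment at the literal level, a crude out3."""
    obls, changed = set(), True
    while changed:
        changed = False
        for body, head in norms:
            if (body == "TOP" or body in set(facts) | obls) and head not in obls:
                obls.add(head)
                changed = True
    return obls

def c_incompatible(prem_a, prem_b, facts, constraints):
    """Do the joint norms of a and b detach output clashing with F ∪ C?"""
    norms = [p for p in prem_a | prem_b if isinstance(p, tuple)]
    lits = detached(norms, facts) | set(constraints) | set(facts)
    return any(neg(l) in lits for l in lits)

# Ex. 1: a = ¬h^f, (¬h,¬t) ⇒ (¬t)^o versus d = ⊤^f, (⊤,h), (h,t) ⇒ t^o.
n1, n2, n3 = ("TOP", "h"), ("h", "t"), ("-h", "-t")
print(c_incompatible({"-h", n3}, {n1, n2}, {"-h"}, {"-h"}))    # True
print(c_incompatible({"-h", n3}, {"-h", n3}, {"-h"}, {"-h"}))  # False
```

In the first call, the joint norms detach both t and ¬t (and h against the constraint ¬h), so a and d cannot inhabit a common stable extension; an argument paired with itself trivially raises no clash.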

   An explanatory dialogue addressing "Why φ^o, despite ψ^o?" is successful whenever it contains:

c1 an argument a for φ^o and a demonstration that all (indirect) objections to a can be met;
c2 an argument b for ψ^o such that a and b are 𝒞-incompatible (recall Prop. 5);
c3 an argument c defeating b and a demonstration that all (indirect) objections to c can be met;
c4 a demonstration that the demonstrations in c1 and c3 are 𝒞-compatible.

   Informally, c1 provides the 'illative explanation' of "φ^o!" by stating a, containing the facts and norms in view of which φ^o holds. It also provides the 'dialectic explanation' of φ^o by refuting all

     ² In the philosophical literature, deontic explanations are relatively unexplored. Our account takes the question-oriented pragmatic approach to contrastive explanation (cf. [14]), which naturally extends to dialogue models. It accords with [15], who calls upon defeasible moral principles (here interpreted as norms) to substantiate explanations, and with [16], who takes defeasible norms to serve as justifications, namely, norms ground why the called-upon facts are explanatory. See also [17] for the role of justification in the context of normative explanations.



                                                        35
Kees van Berkel et al. CEUR Workshop Proceedings                                                                 29โ€“40


                                             R: ๐œ‘๐‘œ !



                                 H: why(๐œ‘๐‘œ )despite(๐œ“ ๐‘œ )?



                R: argue(๐‘Ž = ฮ” โ‡’ ๐œ‘๐‘œ )                        R: despite(๐œ“ ๐‘œ )?




                                        H: argue(๐‘ = ฮ“ โ‡’ ๐œ“ ๐‘œ )               H: argue ฮ”, ฮ“, ฮฉ โ‡’
                                                                                     ฮฉโІ๐’ž

                                                                                                โ„ฐcontrast (c2)
                                       R: argue ๐‘ = ฮฃ โ‡’ ยฌฮ“โ€ฒ
                                           ฮ“โ€ฒ โІ (ฮ“ โˆฉ โ„’ ๐‘› )




            Sub-dialogue for              H: argue ฮ , ฮฃ, ฮ˜ โ‡’
                                                                                  Sub-dialogue for
          the acceptance of ๐‘Ž                ฮ  โІprem โ„ฐR ,
                                                                                 the acceptance of ๐‘
                                                ฮ˜โІ๐’ž



                         โ„ฐclaim (c1)          โ„ฐcomp (c4)                 โ„ฐcounter (c3)

Figure 3: A DAC-based contrastive explanatory dialogue (CED) for explainer (R) and explainee (H).


possible objections the explainee may have against concluding 𝜙ᵒ. c2 makes explicit the contrast
between the argument for 𝜙ᵒ and an argument for 𝜓ᵒ, and c3 provides illative and dialectical
explanations for why 𝜓ᵒ can be successfully objected to (our terminology mirrors Johnson’s
well-known two-tier model of argument [18]). Last, c4 ensures that the two sub-explanations
in c1 and c3 form a 𝒞-compatible view. The intuitive idea of contrastive explanatory dialogues,
following c1-c4, is provided in Figure 3. Below, we make this formally precise.
   Although explanatory dialogues are collaborative, we assume a burden of proof for the
explainer with respect to c1 and c3, and for the explainee with respect to c2 and c4. For the sake of
simplified reference, we call the explainee ‘human’ (H) and the explainer ‘robot’ (R).
Explanatory Dialogues Let 𝒜ℱ(𝒦) be a DAC-induced argumentation framework. Let R be
     the explainer and H the explainee. A contrastive explanatory dialogue (CED) is a sequence
     ℰ = ⟨𝑚₁, . . . , 𝑚ₙ⟩ of tuples 𝑚ᵢ = ⟨pl, lo, ta⟩ called moves such that 𝑖 is 𝑚ᵢ's position in
     the dialogue, pl(𝑚ᵢ) ∈ {R, H} is the player making move 𝑚ᵢ, lo(𝑚ᵢ) ∈ Locutions is the
     locution in 𝑚ᵢ, and ta(𝑚ᵢ) ∈ {𝑚₁, . . . , 𝑚ᵢ₋₁} ∪ {∅} is the target of 𝑚ᵢ. Locutions =
     {(𝑥)!, why(𝑥)despite(𝑦)?, despite(𝑥)?, argue(𝑧)} is the set of expressions that inter-
     locutors may use, where 𝑥 and 𝑦 range over ℒₒ and 𝑧 ranges over Arg(𝒦). ℰ is a sem-CED
     (for sem ∈ {preferred, grounded}) whenever ℰ satisfies the protocol stipulated by P1-P5.
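For concreteness, the move tuples ⟨pl, lo, ta⟩ can be encoded as plain records. The sketch below (the Python representation and the string encoding of locutions are ours, not part of the formalism) spells out the opening moves that P1 fixes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Move:
    pl: str            # player: "R" (explainer) or "H" (explainee)
    lo: str            # locution, e.g. "phi!", "why(phi)despite(psi)?", "argue(a)"
    ta: Optional[int]  # 0-based index of the targeted move; None encodes the empty target

# The mandatory opening of every CED: command, contrastive why-question,
# reasons for the command, and the counter-request (cf. P1).
dialogue = [
    Move("R", "phi!", None),                # m1: R issues the command
    Move("H", "why(phi)despite(psi)?", 0),  # m2: H asks the contrastive why-question
    Move("R", "argue(a)", 1),               # m3: R argues for phi
    Move("R", "despite(psi)?", 1),          # m4: R questions the contrast
]
# Targets always point strictly backwards, so a CED is a tree rooted at m1.
assert all(m.ta is None or m.ta < i for i, m in enumerate(dialogue))
```

The backward-pointing `ta` field is what later gives CEDs their tree structure.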
P1 Dialogue Commencement Rules for 𝑚ᵢ ∈ ℰ with 𝑖 ∈ {1, 2, 3, 4}:
      𝑚₁ = ⟨R, 𝜙ᵒ!, ∅⟩                       𝑚₃ = ⟨R, argue(𝑎), 𝑚₂⟩ with conc(𝑎) = 𝜙ᵒ
      𝑚₂ = ⟨H, why(𝜙ᵒ)despite(𝜓ᵒ)?, 𝑚₁⟩     𝑚₄ = ⟨R, despite(𝜓ᵒ)?, 𝑚₂⟩






P2 General Rules for each 𝑚ᵢ, 𝑚ₖ ∈ ℰ:
          i) if 𝑖 > 4, then lo(𝑚ᵢ) = argue(𝑥), and ta(𝑚ᵢ) = 𝑚ₖ with 𝑘 < 𝑖;
         ii) if ta(𝑚ᵢ) = 𝑚ₖ, lo(𝑚ᵢ) = argue(𝑥) and lo(𝑚ₖ) = argue(𝑦), then conc(𝑥) =
             ¬Δ ∈ ℒₙ for some Δ ⊆ prem(𝑦);
        iii) if ta(𝑚ᵢ) = ta(𝑚ₖ) and 𝑖 ≠ 𝑘, then lo(𝑚ᵢ) ≠ lo(𝑚ₖ);
        iv) if ta(𝑚ᵢ) = 𝑚ₖ, then pl(𝑚ᵢ) ≠ pl(𝑚ₖ).

  P1 stipulates that (1) R starts the dialogue with a command, to which (2) H responds with a
contrastive why-question. Then, (3) R must provide reasons for the command and, after that, (4)
shifts the burden of proof to H by requesting support for the contrastive claim. P2 stipulates rules
that hold for both R and H: (i) after the start of the dialogue, any player may continue making
moves that target previous moves by stating arguments, where (ii) arguments moved against
other arguments express undermining defeats, (iii) players may not move an argument twice
against the same move, and (iv) they may not attack their own claims.
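These structural rules lend themselves to a mechanical check. Below is a sketch of a validator for P1 and the move-level parts of P2 (the (pl, lo, ta) tuple encoding is our own; the premise-level condition in P2-ii is omitted, since it requires inspecting the underlying arguments):

```python
def check_p1(d):
    """P1: the first four moves are R's command, H's contrastive why-question,
    R's argument for the claim, and R's counter-request (0-based indices)."""
    if len(d) < 4:
        return False
    shape = [("R", None), ("H", 0), ("R", 1), ("R", 1)]
    keys = ["!", "why(", "argue(", "despite("]
    return all(d[i][0] == p and d[i][2] == t and key in d[i][1]
               for i, ((p, t), key) in enumerate(zip(shape, keys)))

def check_p2(d):
    """P2-i/iii/iv on the move structure: from m5 on, only backward-targeting
    argue-moves; no repeated locution on the same target; no self-attacks."""
    for i, (pl, lo, ta) in enumerate(d):
        if i >= 4 and not (lo.startswith("argue(") and ta is not None and ta < i):
            return False                      # P2-i
        if ta is not None and d[ta][0] == pl:
            return False                      # P2-iv
    targets = [(ta, lo) for (_, lo, ta) in d]
    return len(targets) == len(set(targets))  # P2-iii

d = [("R", "phi!", None), ("H", "why(phi)despite(psi)?", 0),
     ("R", "argue(a)", 1), ("R", "despite(psi)?", 1), ("H", "argue(b)", 3)]
print(check_p1(d), check_p2(d))  # True True
```

Repeating H's move `("H", "argue(b)", 3)` would violate P2-iii, and the checker rejects the extended sequence.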

P3 Explainer Rules for each 𝑚ᵢ ∈ ℰ, if ta(𝑚ᵢ) = 𝑚₂, then 𝑖 = 3 or 𝑖 = 4.
P4 Explainee Rules for each 𝑚ᵢ, 𝑚ⱼ, 𝑚ₖ ∈ ℰ:
          i) if ta(𝑚ᵢ) = ta(𝑚ⱼ) = ta(𝑚ₖ) = 𝑚₄ (and, so, pl(𝑚ᵢ) = H), then |{𝑖, 𝑗, 𝑘}| ≤ 2;
         ii) if ta(𝑚ᵢ) = ta(𝑚ⱼ) = 𝑚₄, 𝑖 ≠ 𝑗, {lo(𝑚ᵢ), lo(𝑚ⱼ)} = {argue(𝑏), argue(𝑐)}, and lo(𝑚₃) =
             argue(𝑎), then conc(𝑏) = 𝜓ᵒ and 𝑐 = prem(𝑎), prem(𝑏), Ω ⇒ with Ω ⊆ 𝒞;
        iii) if ta(𝑚ᵢ) = 𝑚ₖ, ta(𝑚ₖ) = 𝑚ⱼ, ta(𝑚ⱼ) = 𝑚₄, lo(𝑚ⱼ) = argue(𝑏) with conc(𝑏) ≠
             ∅, and lo(𝑚ₖ) = argue(𝑑), then
                – either lo(𝑚ᵢ) = argue(Σ, prem(𝑑), 𝒞 ⇒), and Σ ⊆ prem(ℰR) with ℰR =
                   {𝑎 | 𝑚 ∈ ℰ, pl(𝑚) = R, lo(𝑚) = argue(𝑎)};
                – or lo(𝑚ᵢ) = argue(𝑒) for some 𝑒 with conc(𝑒) = ¬Δ ∈ ℒₙ and Δ ⊆ prem(𝑑).

    P3 states that R must provide exactly two moves against the contrastive why-question (one
providing reasons, the other questioning the contrastive claim). P4 stipulates that the explainee
H may (i) make at most two moves against R’s questioning of the contrastive link, (ii) one of
which is an argument providing reasons for the counter-claim, while the other shows the
𝒞-incompatibility of the arguments for the claim and the counter-claim 𝜓ᵒ. Then, (iii) H may
also move against the explainer’s argument 𝑑 opposing the reasons for the counter-claim. In
case 𝑑 is incompatible with the other arguments offered by R, R engages in incoherent reasoning.
H may, thus, oppose by demonstrating the 𝒞-incompatibility of 𝑑 and other R arguments.
    A CED has a tree structure since each move has exactly one predecessor (except for the
root). A branch of ℰ containing 𝑚ᵢ is the maximal linear sequence branch(𝑚ᵢ) = ⟨𝑚ⱼ₁, . . . ,
𝑚ᵢ₌ⱼₖ, . . . , 𝑚ⱼₙ⟩ such that for each 𝑚ⱼₗ and 𝑚ⱼₗ₊₁, ta(𝑚ⱼₗ₊₁) = 𝑚ⱼₗ. We say 𝑚ⱼₙ is a leaf.
Each CED consists of four subdialogues (see Fig. 3), which constitute four sub-explanations: a
subdialogue ℰclaim (cf. c1) that engages with the argument 𝑎 given in favour of the claim 𝜙ᵒ
(generated from 𝑚₃ down); a subdialogue ℰcounter (cf. c3) that engages with the argument 𝑏 given
in favour of the counter-claim 𝜓ᵒ (generated from the move attacking the argument providing
reasons for 𝜓ᵒ); and the subdialogues ℰcontrast (cf. c2) and ℰcomp (cf. c4), each containing at most
one node with an argument that shows the 𝒞-incompatibility of 𝑎 and 𝑏, respectively, the joint





𝒞-incompatibility of R’s explanations ℰclaim and ℰcounter. These four subdialogues determine
when a given dialogue is (un-/semi-)successful.
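The branch and leaf notions above can be computed directly from the targeting function, reading ta as a parent pointer. A small sketch (the (pl, lo, ta) triple encoding and the helper names are ours):

```python
def children(d, k):
    """Indices of moves targeting move k."""
    return [i for i, (_, _, ta) in enumerate(d) if ta == k]

def leaves(d):
    """Moves with no reply; which player moved a leaf matters for success."""
    return [i for i in range(len(d)) if not children(d, i)]

def branch(d, i):
    """Maximal linear sequence through move i: ancestors of i, then the chain
    of descendants as long as it is unique (i.e., one reply per move)."""
    up, k = [], i
    while k is not None:
        up.append(k)
        k = d[k][2]                      # follow the parent pointer ta
    path = list(reversed(up))
    while len(children(d, path[-1])) == 1:
        path.append(children(d, path[-1])[0])
    return path

d = [("R", "t!", None), ("H", "why?", 0), ("R", "argue(a)", 1),
     ("R", "despite?", 1), ("H", "argue(b)", 3), ("R", "argue(c)", 4)]
print(leaves(d))     # [2, 5]
print(branch(d, 4))  # [0, 1, 3, 4, 5]
```

In this toy run both leaves are R-moves, the situation the success conditions below ask for.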
   Before defining success, we add P5 to the protocol to accommodate reasoning with preferred
and grounded acceptance of arguments. For sem ∈ {preferred, grounded}, (i) R may move at
most one counter-argument to each H argument. For preferred dialogues, (ii) H is not allowed to
move the same argument twice on a branch in ℰclaim or ℰcounter. For grounded dialogues, (iii) R
is not allowed to move the same argument twice on a branch. We note that (i)-(iii) follow the
protocols for admissible (and, so, preferred) and grounded argumentation games [10].

P5 Preferred and Grounded Rules for each 𝑚ᵢ, 𝑚ⱼ, 𝑚ₖ ∈ ℰ, and sem ∈ {preferred, grounded}:
          i) if ta(𝑚ᵢ) = ta(𝑚ₖ) = 𝑚ⱼ with 𝑗 > 3 and pl(𝑚ᵢ) = R, then 𝑚ᵢ = 𝑚ₖ;
         ii) if sem = preferred, pl(𝑚ᵢ) = H, and ta(𝑚ᵢ) = 𝑚ⱼ, then there is no 𝑚ₖ ∈
             branch(𝑚ⱼ) for which pl(𝑚ₖ) = H, lo(𝑚ᵢ) = lo(𝑚ₖ), and 𝑖 ≠ 𝑘;
        iii) if sem = grounded, pl(𝑚ᵢ) = R, and ta(𝑚ᵢ) = 𝑚ⱼ, then there is no 𝑚ₖ ∈
             branch(𝑚ⱼ) for which pl(𝑚ₖ) = R, lo(𝑚ᵢ) = lo(𝑚ₖ), and 𝑖 ≠ 𝑘.

Successful dialogues Let 𝒜ℱ(𝒦) be DAC-induced. A CED ℰ satisfying P1-P5 is:
      ∙ successful if ℰcontrast ≠ ∅, ℰcomp = ∅, and ℰclaim and ℰcounter both contain R-leaves only;
      ∙ semi-successful if ℰcontrast = ∅ = ℰcomp, and ℰclaim contains R-leaves only;
      ∙ unsuccessful if neither of the above holds.
      Then, ℰ is sem-successful when it is saturated (i.e., all movable arguments from 𝒜ℱ(𝒦)
      are moved in ℰ; cf. [9]) and ℰ is successful (similarly for semi- and unsuccessful).
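These success conditions reduce to a case distinction on four features of the sub-dialogues. A sketch (the boolean inputs abstract away the move tree and would be computed from it; the function is our own illustration, not part of the formal definition):

```python
def classify(contrast_nonempty, comp_empty, claim_r_only, counter_r_only):
    """Classify a CED given: whether E_contrast is non-empty, whether E_comp is
    empty, and whether E_claim / E_counter contain R-leaves only."""
    if contrast_nonempty and comp_empty and claim_r_only and counter_r_only:
        return "successful"
    if (not contrast_nonempty) and comp_empty and claim_r_only:
        return "semi-successful"
    return "unsuccessful"

print(classify(True, True, True, True))    # all conditions met -> successful
print(classify(False, True, True, False))  # no contrast shown -> semi-successful
print(classify(True, False, True, True))   # E_comp non-empty  -> unsuccessful
```

The third call illustrates that a non-empty ℰcomp, i.e., a demonstrated incoherence in R's own position, spoils success even when both argument sub-dialogues end in R-leaves.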

  In brief, a successful CED features (c1) an illative explanation supplemented by a dialectical
explanation (ℰclaim contains only R-leaves), where (c2) the explainee is able to demonstrate the
incompatibility of the contrastive claim (ℰcontrast ≠ ∅), which (c3) the explainer successfully
counters (ℰcounter contains only R-leaves). Furthermore, (c4) the positions taken by R in ℰclaim
and ℰcounter must be 𝒞-compatible. A semi-successful dialogue features (c1), but H is not able
to demonstrate the adequacy of the contrastive link (ℰcontrast = ∅). A dialogue can be
unsuccessful for various reasons, e.g., R cannot provide an illative or dialectic explanation, or R
cannot argue against H’s counter-claim.
  Under saturation, it can easily be checked that the sub-dialogues ℰclaim and ℰcounter are, for
sem ∈ {preferred, grounded}, instances of credulous preferred and grounded argumentation
games [10] (where for the latter, credulous equals skeptical entailment). Hence, we obtain
dialogue models that construct explanations for credulous I/O entailment (i.e., when sem =
preferred) and for skeptical I/O entailment under shared reasons (i.e., when sem = grounded).
Proposition 6. Let 𝒜ℱ(𝒦) = ⟨Arg, Att⟩ be DAC-induced, sem ∈ {preferred, grounded}, and
ℰ* = ⟨𝑚ᵢ, . . . , 𝑚ₙ⟩ be 𝒜ℱ(𝒦)-based with lo(𝑚ᵢ) = argue(𝑎) and * ∈ {claim, counter}:
    • if ℰ* is saturated and contains only R-leaves, then 𝑎 ∈ 𝒮 for some sem-extension 𝒮;
    • if 𝑎 ∈ 𝒮 for some sem-extension 𝒮, then there is a saturated extension of ℰ* with only R-leaves.
Proof. Straightforward modification of the proofs of Theorems 6.2 and 6.5 in [10].





Example 3. Let sem ∈ {preferred, grounded}: Figure i) provides a successful saturated sem-CED
for “Why ¬𝑡ᵒ, despite 𝑡ᵒ?”; notice that ℰcomp is empty since ℰR = {𝑎, 𝑐} is 𝒞-compatible. Figure ii)
contains an unsuccessful saturated sem-CED for “Why 𝑡ᵒ, despite ¬𝑡ᵒ?”, where H refutes the claim
(with 𝑐) and defends the counter-claim (with 𝑎 and 𝑏). The arguments in i) and ii) reference those
in Fig. 2 of Ex. 2 (for space reasons, we adopted an example for which grounded equals stable).
                       R: ยฌ๐‘ก ๐‘œ !                                                             R: ๐‘ก ๐‘œ !

            H: why(ยฌ๐‘ก ๐‘œ )despite(๐‘ก ๐‘œ )?                                             H: why(๐‘ก ๐‘œ )despite(ยฌ๐‘ก ๐‘œ )?

      R: argue(a)                  R: despite(๐‘ก ๐‘œ )?                          R: argue(๐‘‘)               R: despite(ยฌ๐‘ก ๐‘œ )?

      H: argue(๐‘’)             H: argue(๐‘‘)         H: argue(๐‘ฅ)                 H: argue(๐‘)        H: argue(๐‘Ž)         H: argue(๐‘ฅ)
                                                            โ„ฐcontrast   โ„ฐclaim                                               โ„ฐcontrast
      R: argue(๐‘)           H: โˆ…    R: argue ๐‘                                              H: โˆ…        R: argue ๐‘’
โ„ฐclaim              โ„ฐcomp                        โ„ฐcounter                               โ„ฐcomp
 i)                                                                     ii)                             H: argue ๐‘     โ„ฐcounter

An alternative successful CED for i) exists that includes R moving 𝑔 against 𝑒, followed by H moving
𝑓, which is then defeated by R moving 𝑐. However, by Proposition 3-1, for sem = grounded there
exists a strategic shortcut: R may directly move argument 𝑐 against 𝑒.


5. Challenges for Dialogical Deontic Explanations
This paper shows how to incorporate existing results in formal argumentation and refine them to
yield contrastive explanatory dialogues (CEDs) in the context of defeasible normative reasoning,
Input/Output logic in particular. This work is the first of its kind and, so, we end by highlighting
some key challenges for deontic explanations (through formal argumentation).

Challenge 1: Conflict Types The contrastive claims offered by the explainee may give rise
     to various kinds of conflicts with the main claim. Two particularly interesting cases when
     dealing with (conditional) norms are specificity (you are not allowed to park, unless you
     are medical personnel) and contrary-to-duty (don’t be late, but if you are, not more than
     10 minutes). Good explanations should make transparent the type of conflicts involved.
Challenge 2: Cognitive Adequacy A good explainer seeks to understand the explainee in
     order to tailor the given explanation to precisely target the gaps in the explainee’s
     understanding. For this, the explainer may use queries and strategic argumentation, com-
     plemented by a theory of (the explainee’s) mind. Moreover, the knowledge bases of the
     explainer and the explainee may be disjoint and incomplete. Tailored explanations must
     additionally keep track of commitments and shifts therein throughout a dialogue.
Challenge 3: Richer Handling of Contrastives The explainee may offer contrastive claims
     that are, under thorough analysis, not really incompatible with the offered claim. A good
     explainer should catch such cases and provide an argument concerning the compatibility
     of the claims. For this, more proof-theoretic resources have to be developed. In such
     cases, the explainee should be able to withdraw or replace the contrast.
Challenge 4: Richer Deontic Vocabulary Often, normative codes are richer than the ones
     studied here, e.g., they may contain priority orderings over norms and permissive norms.





      These come with challenges, for instance concerning reinstatement (e.g., permissions
      generally do not reinstate obligations). Dialogues ideally accommodate such complexity.
Challenge 5: Casuistry In many application contexts of ethical (e.g., in bioethics) and legal
     reasoning, we find case-based reasoning when reasoning towards obligations, rights, and
     permissions. Deontic explanations of such conclusions need a different conceptual base
     than the one provided here, posing their own specific challenges (e.g., balancing reasons).

Acknowledgements. This work was partially funded by the “Logical Methods of Deontic
Explanations” (LoDeX) project, Deutsche Forschungsgemeinschaft, Project number 511915728.


References
 [1] T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial
     Intelligence 267 (2019) 1–38.
 [2] O. Arieli, A. Borg, J. Heyninck, C. Straßer, Logic-based approaches to formal argumen-
     tation, in: D. Gabbay, M. Giacomin, G. R. Simari, M. Thimm (Eds.), Handbook of Formal
     Argumentation, Volume 2, College Publications, 2021, pp. 1793–1898.
 [3] D. Makinson, L. van der Torre, Constraints for Input/Output logics, Journal of Philosophical
     Logic 30 (2001) 155–185.
 [4] J. F. Horty, Reasons as defaults, Oxford University Press, 2012.
 [5] K. van Berkel, C. Straßer, Reasoning with and about norms in logical argumentation, in:
     Proceedings of COMMA 2022, volume 353, IOS Press, 2022, pp. 332–343.
 [6] K. van Berkel, C. Straßer, Z. Zhou, Towards an argumentative unification of default
     reasoning, in: Proceedings of COMMA 2024, IOS Press, 2024, p. TBA.
 [7] K. Čyras, A. Rago, E. Albini, P. Baroni, F. Toni, Argumentative XAI: A survey, 2021.
 [8] L. Amgoud, N. Maudet, S. Parsons, Modelling dialogues using argumentation, in: Proceed-
     ings of the Fourth International Conference on MultiAgent Systems, IEEE, 2000, pp. 31–38.
 [9] H. Prakken, Coherence and flexibility in dialogue games for argumentation, Journal of
     Logic and Computation 15 (2005) 1009–1040.
[10] S. Modgil, M. Caminada, Proof theories and algorithms for abstract argumentation frame-
     works, in: Argumentation in Artificial Intelligence, Springer, 2009, pp. 105–129.
[11] D. Gabbay, J. F. Horty, X. Parent, R. van der Meyden, L. van der Torre, Handbook of Deontic
     Logic and Normative Systems, Volume 1, College Publications, United Kingdom, 2013.
[12] O. Arieli, K. van Berkel, C. Straßer, Defeasible normative reasoning: A proof-theoretic
     integration of logical argumentation, in: Proceedings of AAAI 2024, 2024, pp. 10450–10458.
[13] P. M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic
     reasoning, logic programming and n-person games, Artificial Intelligence 77 (1995) 321–357.
[14] P. Lipton, Contrastive explanation, Royal Institute of Philosophy Supplement 27 (1990) 247–266.
[15] U. D. Leibowitz, Scientific explanation and moral explanation, Noûs 45 (2011) 472–503.
[16] M. Scriven, Explanations, predictions, and laws, in: Minnesota Studies in the Philosophy
     of Science, Vol. 3, University of Minnesota Press, Minneapolis, 1962, pp. 170–230.
[17] P. Väyrynen, Normative explanation and justification, Noûs 55 (2021) 3–22.
[18] R. H. Johnson, Manifest rationality: A pragmatic theory of argument, Routledge, 2000.


