<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards Deontic Explanations Through Dialogue</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kees van Berkel</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christian Straßer</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute for Logic and Computation</institution>
          ,
          <addr-line>TU Wien</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of Philosophy II, Ruhr University Bochum</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>29</fpage>
      <lpage>40</lpage>
      <abstract>
        <p>Deontic explanations answer why-questions concerning agents' obligations and permissions. Normative systems are notoriously conflict-sensitive, making contrastive explanations pressing: “Why am I obliged to do φ, despite my (seemingly) conflicting obligation to do ψ?” In this paper, we develop a model of contrastive explanatory dialogues for the well-established defeasible reasoning formalism Input/Output logic. Our model distinguishes between successful, semi-successful, and unsuccessful deontic dialogues. We prove that the credulous and skeptical (under shared reasons) entailment relations of Input/Output logic can be characterized in formal argumentation using preferred and grounded semantics, respectively. This result allows us to leverage known results for dialogue models of the latter two semantics. Since this work is the first of its kind, we discuss five key challenges for deontic explanations through dialogue.</p>
      </abstract>
      <kwd-group>
        <kwd>Defeasible normative reasoning</kwd>
        <kwd>Contrastive deontic explanations</kwd>
        <kwd>Logical argumentation</kwd>
        <kwd>Dialogues</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Norms are indispensable in many aspects of society, ranging from law, ethics, to business
protocols and AI. They motivate, guide, and regulate agents, whether they are human or
artificial. Often, agents affected by norms not only need to know that they are bound by
obligations or that they may appeal to rights: they need to understand why. Such understanding
may enhance compliance and collaboration and is especially pressing when conflicts between
norms arise. For instance, I may want to know why I may overtake on the left, despite being
obliged to drive on the right. Here, a good explanation not only explains that I am permitted, but
also why the obligation to the contrary does not currently apply: the permission is an exception
to the obligation. Answers to this type of why-question are called deontic explanations.</p>
      <p>Deontic logic is the well-established field exploring formal methods to model normative
reasoning. However, the focus has been nearly exclusively on formal systems that determine
which obligations and permissions can be inferred from a normative system, rather than on
explaining why. This gap is remarkable, especially given the increasingly vital role that normative
systems play in alignment and compliance requirements for AI. This paper investigates how
knowledge representation methods can be used to generate explanatory deontic dialogues.</p>
      <p>
        The demand for explanatory models in AI is increasing [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and formal argumentation provides
a promising method in this respect. First of all, formal argumentation has proven to be a unifying
framework for nonmonotonic reasoning [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In particular, two central paradigms of defeasible
reasoning, constrained Input/Output (I/O) logic [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and default logic [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], can be argumentatively
characterized [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. Second, a wide variety of methods has been proposed in Argumentation for
Explainable AI (ArgXAI, [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]). Finally, dialogue models and argumentation games [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ], offer
dynamic characterizations of formal argumentation, that have the potential to yield interactive
(or even tailor-made) explanatory episodes through dialogues.
      </p>
      <p>
        Once a given nonmonotonic logic, such as I/O or default logic, is represented in logical
argumentation, dialogical methods can be leveraged for explanatory purposes. However, a first
obstacle, in this respect, is that most characterization results are shown with respect to stable
semantics (including [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]; also see [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]), whereas other semantics such as preferred, admissible,
and grounded are more suitable for dialogical generalization. In brief, the problem with stable
extensions is that they reference the entire set of arguments (each argument is either ‘in’ or ‘out’),
while we expect explanatory dialogues to focus on reasons relevant to the explanatory purpose.
Furthermore, defining dialogue models and argumentation games for skeptical reasoning is
challenging (in the context of multi-extension semantics such as preferred; cf. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]).
      </p>
      <p>
        Contributions. We provide dialogue models for one of the central defeasible normative
reasoning formalisms in the literature: Input/Output logic [
        <xref ref-type="bibr" rid="ref11 ref3">3, 11</xref>
        ]. Unfortunately, the original
formalism does not naturally lend itself to explanatory reasoning. Recently, a highly modular
rule-based proof system – the Deontic Argumentation Calculus (DAC) – was developed with
the aim of making I/O suitable for explanatory purposes and it was shown that DAC-induced
argumentation frameworks are sound and complete for a large class of constrained I/O logics [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]
and default logic [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Despite these promising results, the correspondences were only obtained
for stable semantics, making them seemingly unsuitable for dialogical deontic explanations.
      </p>
      <p>
        In this article, we extend these results by a model of dialogue episodes for deontic explanations:
(1) As a preparatory step, we first prove that for DAC-induced argumentation frameworks
the stable and the preferred semantics coincide. This allows us to use well-developed
preferred dialogue models in the context of I/O reasoning.
(2) Furthermore, we lift recent results [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] showing that the ‘free consequences’ of skeptical
entailment under the stable semantics coincide with entailment under the grounded semantics.
In other words, we may also use grounded dialogue models for I/O reasoning.
(3) Using (1) and (2), we enhance dialogue models and define contrastive deontic explanations
that explain certain obligations in contrast to seeming obligations to the contrary.
      </p>
      <p>Outline. Section 2 introduces the DAC formalism. In Section 3, we define DAC-induced
argumentation frameworks and prove that stable equals preferred and the free consequences
correspond to grounded entailment. We harness these results to specify contrastive dialogue
models in Section 4. This paper lays the foundations for a more extensive study of dialogue
models of deontic explanation and, for this reason, we discuss five key challenges in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries: A Deontic Argumentation Calculus (DAC)</title>
      <p>
        We recall the basics of the Deontic Argumentation Calculus (DAC). Although the results in this
paper hold for a range of languages, base logics, and DAC systems, for readability we assume a
propositional language ℒ and classical logic L, and illustrate our approach for one DAC system
from [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. To enhance explainability, ℒ is labeled and augmented with a language of norms:
Labeled propositional languages: ℒˡ = {φˡ | φ ∈ ℒ}, where l ∈ {f, o, c}.
Norm languages: ℒⁿ = {(φ, ψ) | φ, ψ ∈ ℒ} and ℒ¬ⁿ = {¬Δ | ∅ ⊂ Δ ⊆ ℒⁿ, Δ is finite}.
      </p>
      <p>Normative systems: 𝒮 = ⟨ℱ, 𝒩, 𝒞⟩ is a normative system, where ℱ ⊆ ℒᶠ is a factual context,
𝒩 ⊆ ℒⁿ a normative code, and 𝒞 ⊆ ℒᶜ a set of constraints (and ℱ and 𝒞 are L-consistent).
Labels explicate the roles that propositional formulas adopt in the reasoning process: φᶠ denotes
that φ is a fact, φᵒ that φ is obligatory, and φᶜ that obligations must be consistent with φ. We
take (φ, ψ) ∈ ℒⁿ to express the norm “given φ, it is obligatory that ψ” and ¬Δ ∈ ℒ¬ⁿ is
read as “the norms in Δ are jointly inapplicable.” For ¬{(φ, ψ)}, we simply write ¬(φ, ψ). The
latter type of expression plays an essential role in defeasible reasoning with norms. The entire
enhanced I/O language is defined as the union ℒ⁺ = ℒᶠ ∪ ℒᵒ ∪ ℒᶜ ∪ ℒⁿ ∪ ℒ¬ⁿ. We write
Γˡ, Δˡ, . . . for finite sets of l-labeled formulas, where l ∈ {f, o, c}. We write Γ, Δ, . . . for any
finite subset of ℒ⁺ and Δ↓ for a set Δ ⊆ ℒˡ stripped from its label l ∈ {f, o, c}.</p>
      <p>
        Defeasible normative reasoning occurs with respect to a normative system 𝒮. The basic idea
of I/O reasoning [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and DAC is that facts (input) trigger norms from which obligations (output)
are detached where the constraints filter the output to ensure consistency. Our aim is to construct
arguments from 𝒮. Our approach belongs to logical argumentation (a subfield of structured
argumentation [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]). We write arguments as sequents: a = Γ ⇒ Δ, where prem(a) = Γ is
a (possibly empty) set of premises, and conc(a) = Δ is the conclusion of the argument. An
explanatory argument is an argument stating reasons for a conclusion. We take facts, constraints,
and norms as reasons and differentiate two types of argument:
φᶠ, (φ, ψ) ⇒ ψᵒ   and   φᶠ, (¬ψ)ᶜ ⇒ ¬(φ, ψ)
The first type (left) contains arguments providing reasons for obligations, where the fact φᶠ and
norm (φ, ψ) provide reasons for the obligation ψᵒ. The second type (right) contains arguments
that attack reasons, expressing which norms are inapplicable in the given context: given
φᶠ, the norm (φ, ψ) is inapplicable since its detachable obligation is inconsistent with the
constraint (¬ψ)ᶜ. The latter type attacks all arguments using (φ, ψ) as a reason.
      </p>
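      <p>To make the two argument types concrete, here is a small self-contained sketch (our own illustration, not part of the DAC formalism of [5]) that encodes such arguments as premise/conclusion pairs, with labels as string tags and norms as pairs:</p>

```python
# Sketch: explanatory and attacking arguments as (premises, conclusion) pairs.
# Labels f/o/c are modeled as string tags; a norm (phi, psi) is a plain pair.

def fdet(phi, psi):
    # explanatory argument: phi^f, (phi, psi) => psi^o
    return (frozenset({("f", phi), ("norm", (phi, psi))}), ("o", psi))

def attack_norm(phi, psi):
    # attacking argument: phi^f, (~psi)^c => "(phi, psi) is inapplicable"
    return (frozenset({("f", phi), ("c", "~" + psi)}),
            ("inapplicable", frozenset({(phi, psi)})))

def prem(argument):
    return argument[0]

def conc(argument):
    return argument[1]

a = fdet("T", "h")         # T^f, (T, h) => h^o
b = attack_norm("T", "h")  # T^f, (~h)^c => inapplicability of (T, h)
# b attacks a: b concludes the inapplicability of a norm among a's premises
attacks = conc(b)[0] == "inapplicable" and all(
    ("norm", n) in prem(a) for n in conc(b)[1])
print(attacks)
```

      <p>This encoding mirrors the defeat relation of Section 3: an attacker concludes ¬Δ for a set of norms Δ contained in the premises of the attacked argument.</p>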
      <p>
        A DAC is a sequent-style, that is, rule-based proof system for deriving these two types of
argument [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. We assume that LC is the sound and complete sequent calculus for L.
      </p>
      <sec id="sec-2-1">
        <title>Deontic Argumentation Calculus (DAC)</title>
        <p>Let DAC be the system consisting of the rules Ax,
FDet, DDet, Con, Ina, InaC, Taut, and Cut (Figure 1). A DAC-derivation of Γ ⇒ Δ is a
tree-like structure whose leaves are initial sequents, whose root is Γ ⇒ Δ, and whose rule
applications are instances of the rules of DAC. We say Γ ⇒ Δ is DAC-derivable (written
⊢DAC Γ ⇒ Δ) whenever there exists a DAC-derivation for it, Γ ⊆ ℒ⁺, and Δ ⊆ ℒ⁺
contains at most one formula. We say Γ ⇒ Δ is 𝒮-based whenever Γ ⊆ ℱ ∪ 𝒩 ∪ 𝒞.</p>
        <p>[Figure 1: The rules of DAC. Ax: Γˡ ⇒ Δˡ for every LC-derivable Γ ⇒ Δ with Γ, Δ ⊆ ℒ and l ∈ {f, o, c}; Taut: ⇒ (⊤, ⊤); FDet: φᶠ, (φ, ψ) ⇒ ψᵒ; DDet: from φᶠ, Γ ⇒ Δ infer φᵒ, Γ ⇒ Δ; Con: from Γ ⇒ ψᵒ infer Γ, (¬ψ)ᶜ ⇒ ; Ina: from Γ, (φ, ψ) ⇒ infer Γ ⇒ ¬(φ, ψ); InaC: from Γ ⇒ infer Γ ∖ ℒⁿ ⇒ ¬(Γ ∩ ℒⁿ); Cut: from Γ ⇒ φ and φ, Γ′ ⇒ Δ infer Γ, Γ′ ⇒ Δ.]</p>
        <p>There are three initial sequent rules: Ax introduces labeled versions of any classically
derivable Γ ⇒ Δ to a DAC-derivation (and so LC rules are not part of DAC). Taut guarantees
that all propositional tautologies are among the output. FDet expresses factual detachment and
gives an initial explanatory argument stating that the fact φᶠ and the norm (φ, ψ) are reasons
for concluding the obligation ψᵒ. DDet corresponds to deontic detachment and makes it possible
that a norm may be triggered by obligations detached from other norms (see Ex. 1). The rules
Con, Ina, and InaC deal with the defeasibility of normative reasoning and yield attacking
arguments. The Con rule expresses the consistency constraint that if Γ constitutes reasons for
ψᵒ, then Γ is inconsistent with the constraint (¬ψ)ᶜ (where an empty right-hand side denotes
inconsistent reasons). We also refer to Γ ⇒ as an inconsistent argument. When an argument
expresses inconsistent reasons, at least one of its involved norms is inapplicable (Ina) and all
involved norms are jointly inapplicable (InaC). We refer to [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] for other DAC systems.</p>
        <p>Example 1. We look at Chisholm’s scenario [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], an archetype of contrary-to-duty reasoning.
Billie is obligated to go and help her neighbors (⊤, h) (⊤ denotes that h is detached by default). If
Billie goes to help, she must tell the neighbors she goes (h, t); otherwise, she ought not to tell them
she goes (¬h, ¬t). Suppose that Billie does not go and help ¬h and, so, violates the default duty in
(⊤, h). To know what Billie must do in light of her violation (¬h)ᶠ, the constraint is imposed that the
obligations must be consistent with the fact that Billie does not help (¬h)ᶜ [
          <xref ref-type="bibr" rid="ref11 ref5">5, 11</xref>
          ]. Let ℱ = {(¬h)ᶠ},
𝒩 = {(⊤, h), (h, t), (¬h, ¬t)}, and 𝒞 = {(¬h)ᶜ} be the normative system 𝒮. The desired outcome
is that Billie ought not to tell the neighbors she goes, (¬t)ᵒ, given that she does not go.</p>
        <p>Argument a, stating that Billie ought to tell, is derived with deontic detachment: from
hᶠ, (h, t) ⇒ tᵒ (FDet) we obtain hᵒ, (h, t) ⇒ tᵒ (DDet), and together with ⊤ᶠ, (⊤, h) ⇒ hᵒ (FDet),
Cut yields a = ⊤ᶠ, (⊤, h), (h, t) ⇒ tᵒ. Argument b expresses the inapplicability ¬(⊤, h) given the
set constraint: from ⊤ᶠ, (⊤, h) ⇒ hᵒ (FDet), Con yields ⊤ᶠ, (¬h)ᶜ, (⊤, h) ⇒ , and Ina yields
b = ⊤ᶠ, (¬h)ᶜ ⇒ ¬(⊤, h). Similar reasoning gives the inconsistent argument
c = (¬h)ᶜ, (⊤, h), (h, t), (¬h, ¬t) ⇒ , which with Con, Cut, and InaC derives the unattackable
c′ = (¬h)ᶜ ⇒ ¬{(⊤, h), (h, t), (¬h, ¬t)}.</p>
        <p>
          For the sake of completeness, we recall the I/O system out3 here and some known results [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
Proposition 1 ([
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]). Let 𝒮↓ = ⟨ℱ↓, 𝒩, 𝒞↓⟩ be 𝒮 stripped from its labels. Let Δ ⊆ ℒ, Cn(Δ) =
{φ | Δ ⊢L φ}, and out(𝒩, Δ) = Cn({ψ | (φ, ψ) ∈ 𝒩 and φ ∈ Cn(Δ)}). Let out3(𝒩, ℱ↓) =
⋃ᵢ≥₀ Oᵢ, where O₀ = out(𝒩, ℱ↓) and Oᵢ₊₁ = Cn(Oᵢ ∪ out(𝒩, Oᵢ ∪ ℱ↓)). In words, out3 is a
closure of 𝒩 under successive (deontic) detachment with respect to ℱ. Then Θ ⊆ ℒⁿ is 𝒞-consistent
in 𝒮 if ⊥ ∉ Cn(out3(Θ, ℱ↓) ∪ 𝒞↓). Let Θ ⊆ 𝒩 ⊆ ℒⁿ, Δ ⊆ ℱ ⊆ ℒᶠ and Ω ⊆ 𝒞 ⊆ ℒᶜ; we have:
1. ψ ∈ out3(Θ, Δ↓) iff ⊢DAC Θ, Δ ⇒ ψᵒ;
2. Θ is 𝒞-inconsistent iff there are Δ ⊆ ℱ and Ω ⊆ 𝒞 for which ⊢DAC Θ, Δ, Ω ⇒ ;
3. ⊥ ∈ Cn(out3(Θ, Δ↓) ∪ Ω↓) iff for all (φ, ψ) ∈ Θ, ⊢DAC Θ ∖ {(φ, ψ)}, Δ, Ω ⇒ ¬(φ, ψ).
        </p>
      </sec>
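      <p>The out3 closure of Proposition 1 can be illustrated with a short sketch. Two hedges: the consequence operator Cn below is a deliberately naive stand-in (it only adds ⊤) that suffices for the literal-based Chisholm example, and the iteration bound is ad hoc; the real definition uses full classical consequence.

```python
# Sketch of the out3 closure (successive detachment) from Proposition 1,
# with a naive Cn that only handles literals and "T" (top) -- an
# approximation adequate for the Chisholm example, not classical logic.

def cn(formulas):
    return set(formulas) | {"T"}

def out(norms, inp):
    # detach the heads of all norms whose body is among the consequences of inp
    return cn({head for body, head in norms if body in cn(inp)})

def out3(norms, facts, rounds=5):
    o = out(norms, facts)
    for _ in range(rounds):  # the fixed point is reached after one round here
        o = cn(o | out(norms, o | set(facts)))
    return o

norms = {("T", "h"), ("h", "t"), ("~h", "~t")}  # Chisholm's norms
print(sorted(out3(norms, {"~h"})))
# the closure contains h, t and ~t together: the conflict that constrained
# I/O resolves by filtering norms through maximally consistent subsets
```

The conflicting output motivates the constraint set 𝒞 and the maxfam construction used below.</p>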
    </sec>
    <sec id="sec-3">
      <title>3. Formal Argumentation with DAC-arguments</title>
      <p>
        We use formal argumentation [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] to capture the defeasibility of normative reasoning and to
explicate norm conflicts [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. An Argumentation Framework [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] contains a set of arguments
and an attack relation between arguments, where semantics stipulate conditions under which
sets of arguments are jointly acceptable. We instantiate such frameworks with DAC-arguments.
DAC-induced Argumentation Frameworks Let 𝒮 = ⟨ℱ, 𝒩, 𝒞⟩ be a normative system. A
DAC-induced argumentation framework 𝒜ℱ(𝒮) = ⟨Arg, Att⟩ is defined as follows:
• Γ ⇒ Δ ∈ Arg iff Γ ⇒ Δ is DAC-derivable and 𝒮-based;
      </p>
      <p>• b defeats a, i.e., (b, a) ∈ Att ⊆ Arg × Arg, iff conc(b) = ¬Δ ∈ ℒ¬ⁿ and Δ ⊆ prem(a).</p>
      <p>We write Arg(Σ) = {a | ⊢DAC a and prem(a) ⊆ Σ}.</p>
    </sec>
    <sec id="sec-4">
      <title>Argumentative Semantics and Entailment</title>
      <p>
        Let ⟨Arg, Att⟩ be an 𝒜ℱ and let ℰ ⊆ Arg: ℰ defeats an argument a ∈ Arg if there is a b ∈ ℰ that defeats a; and ℰ defends a if ℰ defeats
every argument that defeats a. Let Defended(ℰ) be the set of arguments defended by ℰ.
We recall the following semantic definitions [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]: ℰ is conflict-free if it does not defeat
any of its own elements; ℰ is admissible if it is conflict-free and defends all a ∈ ℰ; ℰ
is preferred if it is maximally admissible; ℰ is stable if it is conflict-free and defeats all
a ∈ Arg ∖ ℰ; ℰ is grounded if ℰ = ⋃ᵢ≥₀ ℰᵢ where ℰ₀ = ∅ and ℰᵢ₊₁ = Defended(ℰᵢ).
Let sem ∈ {admissible, preferred, stable, grounded}; we define two entailment relations:
• 𝒜ℱ |∼_sem^∩rea φ if there is an a contained in every sem-extension that concludes φ;
• 𝒜ℱ |∼_sem^∪ φ if there is a sem-extension ℰ for which there is an a ∈ ℰ concluding φ.
|∼_sem^∩rea captures the arguments (reasons) shared by all sem-extensions. The resulting
conclusions are called the free consequences of 𝒮, which are obligations from unproblematic
norms compatible with any sem-extension. Credulous entailment |∼_sem^∪ captures the existence
of reasons in favor of a conclusion for some sem-extension, expressing a defensible stance.
Example 2. The partial 𝒜ℱ in Figure 2 captures the scenario from Ex. 1. There is only one
stable extension (containing, among others, the arguments b and the unattackable c′ from Ex. 1;
the attacks on the inconsistent argument c are implicit in Figure 2), which is also the
grounded extension (cf. Prop. 3-2 below). We may, thus, conclude 𝒜ℱ |∼ (¬t)ᵒ (where |∼ ∈ {|∼_sem^⋆ |
⋆ ∈ {∩rea, ∪} and sem ∈ {stable, grounded}}). As desired, since Billie does not go to help her
neighbors, she ought not to tell them she is coming. Billie may now ask “Why am I obliged to not
tell my neighbors, despite my seeming duty to tell them I am coming to help?” To this we turn next.
      </p>
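      <p>The semantics above can be prototyped directly. The following sketch computes the grounded extension by iterating Defended and enumerates stable extensions by brute force; the argument names are our own placeholders for a fragment of the Chisholm 𝒜ℱ (the labels in Figure 2 differ).

```python
from itertools import combinations

def defended(args, attacks, ext):
    # arguments all of whose attackers are counter-attacked by ext
    result = set()
    for a in args:
        attackers = {b for (b, x) in attacks if x == a}
        if all(any((c, b) in attacks for c in ext) for b in attackers):
            result.add(a)
    return result

def grounded(args, attacks):
    ext = set()
    while True:
        nxt = defended(args, attacks, ext)
        if nxt == ext:
            return ext
        ext = nxt

def stable(args, attacks):
    # brute force: conflict-free sets attacking every outside argument
    exts = []
    pool = sorted(args)
    for r in range(len(pool) + 1):
        for combo in combinations(pool, r):
            s = set(combo)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            full_range = all(any((a, b) in attacks for a in s) for b in args - s)
            if conflict_free and full_range:
                exts.append(s)
    return exts

# placeholder names: b attacks a and the inconsistent c; "cp" (for c') also
# attacks c; d stands for an unattacked argument concluding (~t)^o
args = {"a", "b", "c", "cp", "d"}
attacks = {("b", "a"), ("b", "c"), ("cp", "c")}
print(grounded(args, attacks))
print(stable(args, attacks))
# grounded and the unique stable extension coincide, as Props. 3 and 4 predict
```

On this toy framework the grounded extension equals the single stable extension, illustrating the coincidence results proven next.</p>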
      <p>
        We recall [5, Theorem 2] that, for the system adopted in this paper, DAC-induced 𝒜ℱs are
sound and complete for the system out3 of constrained Input/Output logic [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Proposition 2. Let 𝒮 = ⟨ℱ, 𝒩, 𝒞⟩ and let maxfam(𝒮) = {𝒩′ ⊆ 𝒩 | ⊥ ∉ Cn(out3(𝒩′, ℱ↓) ∪
𝒞↓) and for each 𝒩′′ with 𝒩′ ⊂ 𝒩′′ ⊆ 𝒩, ⊥ ∈ Cn(out3(𝒩′′, ℱ↓) ∪ 𝒞↓)} be the set of maximal
consistent sets of norms over 𝒮. Let 𝒜ℱ be induced by DAC and 𝒮, with the set of stable extensions stable(𝒜ℱ):
1. If 𝒩′ ∈ maxfam(𝒮), then Arg(ℱ ∪ 𝒩′ ∪ 𝒞) ∈ stable(𝒜ℱ);
2. If ℰ ∈ stable(𝒜ℱ), then there is an 𝒩′ ∈ maxfam(𝒮) for which ℰ = Arg(ℱ ∪ 𝒩′ ∪ 𝒞).
      </p>
      <p>
        Our aim is to employ dialogue models for contrastive deontic explanations, and for this we need
some additional results. Since the grounded extension is unique [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], |∼_grounded^∩rea and |∼_grounded^∪
coincide and we simply write |∼_grounded. The proofs below do not reference specific DAC-rules
(outside the base system [
        <xref ref-type="bibr" rid="ref12 ref5">5, 12</xref>
        ]), and thus generalize to all DAC systems in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] (augmented
with InaC). Proposition 3 tells us that for reasoning about the free consequences under the
stable semantics, it suffices to reason with the grounded semantics. Proposition 4 shows that the
preferred and stable semantics coincide for DAC. Consequently, these propositions allow us to
apply well-developed dialogue techniques to DAC for skeptical (in terms of free consequences)
and credulous reasoning under the grounded, respectively the preferred semantics.
      </p>
      <p>
        Proposition 3. Let 𝒜ℱ be a DAC-induced 𝒜ℱ for 𝒮 = ⟨ℱ, 𝒩, 𝒞⟩ and let stb(𝒜ℱ) and
grd(𝒜ℱ) be the set of stable extensions, respectively the grounded extension, of 𝒜ℱ:
1. a ∈ ⋂ stb(𝒜ℱ) iff every defeater b ∈ Arg of a is 𝒞-inconsistent (i.e., it is defeated by an
argument c ∈ Arg(ℱ ∪ 𝒞));
2. grd(𝒜ℱ) = ⋂ stb(𝒜ℱ) = ℰ₂ = Defended(Arg(ℱ ∪ 𝒞)) (and so |∼_grounded = |∼_stable^∩rea).¹
      </p>
      <p>
        Proof. Ad 1. Left-to-Right. Let a = Δ₁, Θ₁, Γ₁ ⇒ Σ ∈ ⋂ stb(𝒜ℱ). Consider a defeater
b = Δ₂, Θ₂, Γ₂ ⇒ ¬(φ, ψ) of a. By Proposition 1, Θ₂ ∪ {(φ, ψ)} is 𝒞-inconsistent in 𝒮. Since
a ∈ ⋂ stb(𝒜ℱ) and (φ, ψ) ∈ prem(a), by Proposition 2, Θ₂ is not contained in a consistent
set of norms in 𝒮 and it is therefore inconsistent. By Proposition 1, there are Δ₃ ∪ Γ₃ ⊆ ℱ ∪ 𝒞
such that c = Δ₃, Γ₃ ⇒ ¬Θ₂ defeats b. Right-to-Left. It is easy to see that ⋂ stb(𝒜ℱ) contains
every argument it defends. Suppose now that b = Γ ⇒ Δ is 𝒞-inconsistent. By Proposition 1,
there is a c = Ω ⇒ ¬(Γ ∩ ℒⁿ) that defeats b and for which Ω ∩ ℒⁿ = ∅. Since c has no defeaters,
c ∈ ⋂ stb(𝒜ℱ). So, ⋂ stb(𝒜ℱ) defends a and therefore a ∈ ⋂ stb(𝒜ℱ).
      </p>
      <p>
        Ad 2. Left-to-Right. Straightforward. We show Right-to-Left. Let a ∈ ⋂ stb(𝒜ℱ). By
Item 1, a is defended by Arg(ℱ ∪ 𝒞). Clearly, Arg(ℱ ∪ 𝒞) ⊆ ℰ₁ ⊆ grd(𝒜ℱ), since arguments
in this set do not have defeaters. So, a ∈ ℰ₂ ⊆ grd(𝒜ℱ).
      </p>
      <p>
        ¹Recall that grd(𝒜ℱ) = ⋃ᵢ≥₀ ℰᵢ where ℰ₀ = ∅ and ℰᵢ₊₁ = Defended(ℰᵢ). The proposition states the computationally
interesting result that the fixed-point construction of the grounded extension terminates on the second iteration.
      </p>
      <p>
        Proposition 4. Let 𝒜ℱ be a DAC-induced 𝒜ℱ for 𝒮 = ⟨ℱ, 𝒩, 𝒞⟩ and let ℰ ⊆ Arg: ℰ is a
stable extension iff ℰ is a preferred extension (and so |∼_preferred^⋆ = |∼_stable^⋆, for ⋆ ∈ {∪, ∩rea}).
Proof. Left-to-Right. Lemma 15 in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Right-to-Left. Suppose not; then there is an a ∈
Arg(𝒮) ∖ ℰ but no b ∈ ℰ for which (b, a) ∈ Att. Since ℰ is preferred, it is conflict-free. By
Proposition 2, 𝒩′ = {(φ, ψ) | (φ, ψ) ∈ Δ ∩ ℒⁿ and Δ ⇒ Γ ∈ ℰ} is 𝒞-consistent and so 𝒩′ ⊆
𝒩′′ for some 𝒩′′ ∈ maxfam(𝒮), and there is a stable extension ℰ′ = Arg(ℱ ∪ 𝒩′′ ∪ 𝒞). Clearly,
Arg(ℱ ∪ 𝒩′ ∪ 𝒞) ⊆ Arg(ℱ ∪ 𝒩′′ ∪ 𝒞) and so ℰ is not maximally admissible. Contradiction.
      </p>
    </sec>
    <sec id="sec-5">
      <title>4. Dialogues and Contrastive Explanations</title>
      <p>We now provide dialogue models for contrastive deontic explanations. A contrastive explanatory
dialogue starts with a command “φᵒ!” issued by the explainer, immediately followed by the
explainee asking a question of the form: “Why φᵒ, despite ψᵒ?”</p>
      <p>
        Due to the conflict sensitivity of norm systems [
        <xref ref-type="bibr" rid="ref11 ref3">3, 11</xref>
        ], we consider contrastive why-questions
as the starting point of explanatory episodes [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. We refer in what follows to φᵒ as the claim
and to ψᵒ as the counter-claim.² We do not assume that φᵒ and ψᵒ are derivable from the given
normative system 𝒮, nor do we assume that there is a dialectical relation between φᵒ and ψᵒ,
referred to as the contrastive link. Both must become explicit (if existent) through the dialogue
itself. We say there exists a contrastive link when two arguments a and b exist, concluding φᵒ,
respectively ψᵒ, which are incompatible, meaning that there is no stable extension containing
both. In such a case, an incompatibility argument can be provided using the premises in a and b:
Proposition 5. Let 𝒮 = ⟨ℱ, 𝒩, 𝒞⟩ and 𝒜ℱ(𝒮) = ⟨Arg, Att⟩. For any two arguments a = Δ ⇒
φᵒ and b = Γ ⇒ ψᵒ ∈ Arg: there is no stable extension ℰ with a, b ∈ ℰ iff there is a DAC-derivable
argument Δ, Γ, Ω ⇒ with Ω ⊆ 𝒞 (we call a and b 𝒞-incompatible).
      </p>
      <p>Proof. Left-to-Right. Let 𝒩′ = (Γ ∪ Δ) ∩ ℒⁿ and ℱ′ = {φᶠ | φᶠ ∈ Γ ∪ Δ}. By Prop. 2, there is
no ℳ ∈ maxfam(𝒮) with 𝒩′ ⊆ ℳ. So, there is an Ω ⊆ 𝒞 for which ⊥ ∈ Cn(out3(𝒩′, ℱ′↓) ∪ Ω↓).
By Prop. 1 and a Cut application, ⊢DAC 𝒩′, ℱ′, Ω ⇒ . Right-to-Left. Straightforward.</p>
      <p>An explanatory dialogue addressing “Why φᵒ, despite ψᵒ?” is successful whenever it contains
c1 an argument a for φᵒ and a demonstration that all (indirect) objections to a can be met;
c2 an argument b for ψᵒ such that a and b are 𝒞-incompatible (recall Prop. 5);
c3 an argument c defeating b and a demonstration that all (indirect) objections to c can be met;
c4 a demonstration that the demonstrations in c1 and c3 are 𝒞-compatible.</p>
      <p>
        Informally, c1 provides the ‘illative explanation’ of “φᵒ!” by stating a, containing the facts and
norms in view of which φᵒ holds. It also provides the ‘dialectic explanation’ of φᵒ by refuting all
possible objections the explainee may have against concluding φᵒ. c2 makes explicit the contrast
between the argument for φᵒ and an argument for ψᵒ, and c3 provides illative and dialectical
explanations for why ψᵒ can be successfully objected to (our terminology mirrors Johnson’s
well-known two-tier model of argument [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]). Last, c4 ensures that the two sub-explanations
in c1 and c3 form a 𝒞-compatible view. The intuitive idea of contrastive explanatory dialogues,
following c1-c4, is provided in Figure 3. Below, we make this formally precise.
      </p>
      <p>
        ²In the philosophical literature, deontic explanations are relatively unexplored. Our account takes the
question-oriented pragmatic approach to contrastive explanation (cf. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]), which naturally extends to dialogue models. It
accords with [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], who calls upon defeasible moral principles (here interpreted as norms) to substantiate explanations,
and with [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], who takes defeasible norms to serve as justifications, namely, norms ground why the called-upon
facts are explanatory. See also [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] for the role of justification in the context of normative explanations.
      </p>
      <p>[Figure 3: The structure of a contrastive explanatory dialogue. R: “φᵒ!”; H: why(φᵒ)despite(ψᵒ)?. In ℰclaim (c1), R moves argue(a) with conc(a) = φᵒ, followed by a sub-dialogue for the acceptance of a. R asks despite(ψᵒ)?, to which H responds with argue(b = Γ ⇒ ψᵒ) and, in ℰcontrast (c2), with argue(Δ, Γ, Ω ⇒) for Ω ⊆ 𝒞. In ℰcounter (c3), R moves argue(c = Σ ⇒ ¬Γ′) with Γ′ ⊆ (Γ ∩ ℒⁿ), followed by a sub-dialogue for the acceptance of c. In ℰcomp (c4), H may move argue(Π, Σ, Θ ⇒) with Π ⊆ prem(ℰR) and Θ ⊆ 𝒞.]</p>
      <p>Although explanatory dialogues are collaborative, we assume a burden of proof for the
explainer with respect to c1 and c3, and for the explainee with respect to c2 and c4. For the sake of
simplified reference, we call the explainee ‘human’ (H) and the explainer ‘robot’ (R).
Explanatory Dialogues Let 𝒜ℱ(𝒮) be a DAC-induced argumentation framework. Let R be
the explainer and H the explainee. A contrastive explanatory dialogue (CED) is a sequence
ℰ = ⟨m₁, . . . , mₙ⟩ of tuples mᵢ = ⟨pl, lo, ta⟩, called moves, such that i is mᵢ’s position in
the dialogue, pl(mᵢ) ∈ {R, H} is the player making move mᵢ, lo(mᵢ) ∈ Locutions is the
locution in mᵢ, and ta(mᵢ) ∈ {1, . . . , i − 1} ∪ {∅} is the target of mᵢ. Locutions =
{φᵒ!, why(φᵒ)despite(ψᵒ)?, despite(ψᵒ)?, argue(a)} is the set of expressions that
interlocutors may use, where φᵒ and ψᵒ range over ℒᵒ and a ranges over Arg(𝒮). ℰ is a sem-CED
(for sem ∈ {preferred, grounded}) whenever ℰ satisfies the protocol stipulated by P1-P5.</p>
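      <p>A CED’s moves and the opening protocol lend themselves to a direct encoding. The sketch below is our own simplification (locutions are plain strings); it checks P1 and the alternation condition P2(iv):

```python
from dataclasses import dataclass
from typing import Optional

# Minimal encoding of CED moves; this covers only P1 and P2(iv), a full
# checker would also implement P2(i)-(iii) and P3-P5.
@dataclass
class Move:
    pl: str            # player: "R" (explainer) or "H" (explainee)
    lo: str            # locution, stored as a plain string
    ta: Optional[int]  # target position (1-based); None for the opening move

def satisfies_p1(moves):
    # P1 constrains the first four moves of a CED
    if min(len(moves), 4) != 4:
        return False
    m1, m2, m3, m4 = moves[:4]
    return (m1.pl == "R" and m1.ta is None and m1.lo.endswith("!")
            and m2.pl == "H" and m2.ta == 1 and m2.lo.startswith("why(")
            and m3.pl == "R" and m3.ta == 2 and m3.lo.startswith("argue(")
            and m4.pl == "R" and m4.ta == 2 and m4.lo.startswith("despite("))

def satisfies_p2_iv(moves):
    # P2(iv): no player targets their own moves
    return all(m.ta is None or moves[m.ta - 1].pl != m.pl for m in moves)

ced = [
    Move("R", "(~t)o!", None),
    Move("H", "why((~t)o)despite(to)?", 1),
    Move("R", "argue(a)", 2),
    Move("R", "despite(to)?", 2),
]
print(satisfies_p1(ced) and satisfies_p2_iv(ced))
```

The example dialogue replays the opening of the Chisholm scenario: R commands the obligation not to tell, H asks the contrastive why-question, and R both argues and shifts the burden of proof.</p>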
    </sec>
    <sec id="sec-6">
      <title>P1 Dialogue Commencement Rules</title>
      <p>For mᵢ ∈ ℰ with i ∈ {1, 2, 3, 4}:
m₁ = ⟨R, φᵒ!, ∅⟩; m₂ = ⟨H, why(φᵒ)despite(ψᵒ)?, 1⟩;
m₃ = ⟨R, argue(a), 2⟩ with conc(a) = φᵒ; m₄ = ⟨R, despite(ψᵒ)?, 2⟩.
P2 General Rules for each mᵢ, mⱼ ∈ ℰ:
i) if i &gt; 4, then lo(mᵢ) = argue(a) and ta(mᵢ) = j with j &lt; i;
ii) if ta(mⱼ) = i, lo(mᵢ) = argue(a) and lo(mⱼ) = argue(b), then conc(b) =
¬Δ ∈ ℒ¬ⁿ for some Δ ⊆ prem(a);
iii) if ta(mᵢ) = ta(mⱼ) and i ≠ j, then lo(mᵢ) ≠ lo(mⱼ);
iv) if ta(mⱼ) = i, then pl(mᵢ) ≠ pl(mⱼ).</p>
      <p>P1 stipulates that (1) R starts the dialogue with a command, to which (2) H responds with a
contrastive why-question. Then, (3) R must provide reasons for the command and, after that, (4)
shifts the burden of proof to H requesting support for the contrastive claim. P2 stipulates rules
that hold for both R and H: (i) after the start of the dialogue any player may continue making
moves that target previous moves by stating arguments, where (ii) arguments moved against
other arguments express undermining defeats, (iii) players may not move an argument twice
against the same move, and (iv) they may not attack their own claims.</p>
      <p>P3 Explainer Rules for each mᵢ ∈ ℰ: if ta(mᵢ) = 2, then i = 3 or i = 4.
P4 Explainee Rules for each mᵢ, mⱼ, mₖ ∈ ℰ:
i) if ta(mᵢ) = ta(mⱼ) = ta(mₖ) = 4 (and, so, pl(mᵢ) = H), then |{mᵢ, mⱼ, mₖ}| ≤ 2;
ii) if ta(mᵢ) = ta(mⱼ) = 4, i ≠ j, {lo(mᵢ), lo(mⱼ)} = {argue(a), argue(b)}, and lo(m₃) =
argue(c), then conc(a) = ψᵒ and b = prem(a), prem(c), Ω ⇒ with Ω ⊆ 𝒞;
iii) if ta(mᵢ) = j, ta(mⱼ) = k, ta(mₖ) = 4, lo(mⱼ) = argue(d) with conc(d) ≠
∅, and lo(mₖ) = argue(b), then
– either lo(mᵢ) = argue(Σ, prem(d), Θ ⇒), with Σ ⊆ prem(ℰR) and Θ ⊆ 𝒞, where ℰR =
{a | m ∈ ℰ, pl(m) = R, lo(m) = argue(a)};
– or lo(mᵢ) = argue(e) for some e with conc(e) = ¬Δ ∈ ℒ¬ⁿ and Δ ⊆ prem(d).</p>
      <p>P3 states that R must provide exactly two moves against the contrastive why-question (one
of which provides reasons, and the other questioning the contrastive claim). P4 stipulates that
the explainee H may (i) make at most two moves against R’s questioning of the contrastive link,
(ii) one of which is an argument providing reasons for the counter-claim ψᵒ, and one which shows
the 𝒞-incompatibility of the arguments for the claim and the counter-claim. Then, (iii) H may
also move against the explainer’s argument d opposing the reasons for the counter-claim. In
case d is incompatible with the other arguments offered by R, R engages in incoherent reasoning.
H may, thus, oppose by demonstrating the 𝒞-incompatibility of d and other R arguments.</p>
      <p>A CED has a tree structure, since each move has exactly one predecessor (except for the
root). A branch of ℰ containing a move m is a maximal linear sequence branch(m) = ⟨n₁, . . . ,
nⱼ = m, . . . , nₖ⟩ such that for each nₗ and nₗ₊₁, ta(nₗ₊₁) is the position of nₗ. We say nₖ is a leaf.
Each CED consists of four subdialogues (see Fig. 3), which constitute four sub-explanations: a
subdialogue ℰclaim (cf. c1) that engages with the argument a given in favour of the claim φᵒ
(generated from m₃ down); a subdialogue ℰcounter (cf. c3) that engages with the argument b given
in favour of the counter-claim ψᵒ (generated from the move attacking the argument providing
reasons for ψᵒ); and the subdialogues ℰcontrast (cf. c2) and ℰcomp (cf. c4), each containing at most
one node with an argument that shows the 𝒞-incompatibility of a and b, respectively the joint
𝒞-incompatibility of R’s explanations in ℰclaim and ℰcounter. These four subdialogues determine
when a given dialogue is successful, semi-successful, or unsuccessful.</p>
      <p>
        Before defining success, we add P5 to the protocol to accommodate reasoning with preferred
and grounded acceptance of arguments. For sem ∈ {preferred, grounded}, (i) R may move at
most one counter-argument to each H argument. For preferred dialogues, (ii) H is not allowed to
move the same argument twice on a branch in ℰclaim or ℰcounter. For grounded dialogues, (iii) R
is not allowed to move the same argument twice on a branch. We note that (i)-(iii) follow the
protocols for admissible (and, so, preferred) and grounded argumentation games [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
P5 Preferred and Grounded Rules for each ,  ∈ ℰ , and sem ∈ {preferred, grounded}:
i) if ta() = ta() =  with  &gt; 3 and pl() = R, then  = ;
ii) if sem = preferred, pl() = H, and ta() =  , then there is no  ∈
branch( ) for which pl() = H, lo() = lo() and  ̸= ;
iii) if sem = grounded, pl() = R, and ta() =  , then there is no  ∈
branch( ) for which pl() = R, lo() = lo() and  ̸= .
      </p>
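The non-repetition constraints P5(ii) and P5(iii) amount to a simple legality check on a branch. The following Python sketch is our own illustration, not part of the formalism: the encoding of a branch as a list of (player, locution) pairs and the function name are assumptions made for exposition.

```python
# Illustrative sketch of the P5(ii)/(iii) non-repetition rules.
# A branch is assumed to be a list of (player, locution) pairs;
# this representation is an assumption for illustration only.

def p5_allows(branch, player, locution, sem):
    """Check whether `player` may move `locution` on `branch` under `sem`.

    Under preferred semantics, H may not repeat a locution on a branch
    (P5 ii); under grounded semantics, R may not (P5 iii)."""
    restricted = "H" if sem == "preferred" else "R"
    if player != restricted:
        return True  # the other player is unconstrained by these two rules
    return all(not (pl == restricted and lo == locution)
               for (pl, lo) in branch)
```

For instance, under grounded semantics R repeating a locution on the same branch is rejected, while H remains free to repeat.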
      <p>Successful dialogues Let ℱ () be DAC-induced. A CED ℰ satisfying P1-P5 is:
∙ successful if ℰcontrast ̸= ∅, ℰcomp = ∅, and ℰclaim and ℰcounter both contain R-leaves only;
∙ semi-successful if ℰcontrast = ∅ = ℰcomp, and ℰclaim contains R-leaves only;
∙ unsuccessful if neither of the above holds.</p>
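As a reading aid, the three success conditions above can be phrased operationally. The Python sketch below is our own illustration (not the paper's machinery): subdialogues are modeled simply as lists of branches, and a branch as a list of player labels whose last entry identifies the leaf's player.

```python
# Illustrative classification of a CED by its four subdialogues.
# A subdialogue is a list of branches; a branch is a list of player
# labels ("R"/"H"). This encoding is assumed for illustration only.

def all_leaves_R(subdialogue):
    """True iff every branch of the subdialogue ends in an R-move."""
    return all(branch[-1] == "R" for branch in subdialogue)

def classify_ced(e_claim, e_contrast, e_counter, e_comp):
    """Return the status of a CED with the given subdialogues."""
    if e_contrast and not e_comp and all_leaves_R(e_claim) and all_leaves_R(e_counter):
        return "successful"
    if not e_contrast and not e_comp and all_leaves_R(e_claim):
        return "semi-successful"
    return "unsuccessful"
```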
      <p>
        Then, ℰ is sem-successful when it is saturated (i.e., all movable arguments from ℱ ()
are moved in ℰ ; cf. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]) and ℰ is successful (similarly for semi- and unsuccessful).
      </p>
      <p>In brief, a successful CED features (c1) an illative explanation that is supplemented by
a dialectical explanation (ℰclaim contains only R-leaves), where (c2) the explainee is able to
demonstrate the incompatibility of the contrastive claim (ℰcontrast ̸= ∅), which (c3) the
explainer successfully counters (ℰcounter contains only R-leaves). Furthermore, (c4) the position
taken by R in ℰclaim and ℰcounter must be -compatible. A semi-successful dialogue features
(c1), but H is not able to demonstrate the adequacy of the contrastive link (ℰcontrast = ∅). A
dialogue can be unsuccessful for various reasons, e.g., R cannot provide an illative or dialectic
explanation, or R cannot argue against H’s counter-claim.</p>
      <p>
        Under saturation, it can easily be checked that the sub-dialogues ℰclaim and ℰcounter are,
for sem ∈ {preferred, grounded}, instances of credulous preferred and grounded argumentation
games [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] (where for the latter credulous equals skeptical entailment). Hence, we obtain
dialogue models that construct explanations for credulous I/O entailment (i.e., when sem =
preferred) and for skeptical I/O entailment under shared reasons (i.e., when sem = grounded).
Proposition 6. Let ℱ () = ⟨Arg, Att⟩ be DAC-induced, sem ∈ {preferred, grounded}, and
ℰ* = ⟨, . . . , ⟩ be ℱ ()-based with lo() = argue() and * ∈ {claim, counter}:
• if ℰ* is saturated and contains only R-leaves then  ∈  for some sem-extension;
• if  ∈  for some sem-extension , there is a saturated extension of ℰ* with only R-leaves.
Proof. Straightforward modification of the proofs of Theorems 6.2 and 6.5 in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
Example 3. Let sem ∈ {preferred, grounded}: Figure i) provides a successful saturated sem-CED
for “Why ¬, despite ?”; notice that ℰcomp is empty since ℰR = {, } is -compatible. Figure ii)
contains an unsuccessful saturated sem-CED for “Why , despite ¬?” where H refutes the claim
(with ) and defends the counter-claim (with  and ). The arguments in i) and ii) reference those
in Fig. 2 of Ex. 2 (for space reasons, we adopted an example for which grounded equals stable).
      </p>
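Proposition 6 rests on the standard fixed-point characterization of grounded semantics [13]. As a self-contained illustration (this is not code from the paper, and the set-based encoding of frameworks is our assumption, not the DAC-induced construction), the grounded extension of a Dung framework can be computed by iterating the characteristic function:

```python
# Illustrative computation of the grounded extension of an abstract
# argumentation framework ⟨Arg, Att⟩, encoded as a set of argument
# names and a set of (attacker, target) pairs.

def defended(attacks, S, a):
    """a is defended by S iff each attacker of a is attacked by some s in S."""
    attackers = {x for (x, y) in attacks if y == a}
    return all(any((s, x) in attacks for s in S) for x in attackers)

def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function (Dung 1995)."""
    S = set()
    while True:
        new = {a for a in args if defended(attacks, S, a)}
        if new == S:
            return S
        S = new
```

For arguments {a, b, c} with a attacking b and b attacking c, this yields {a, c}; since the grounded extension is unique, credulous and skeptical acceptance coincide under grounded semantics, as exploited above.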
      <p>[Figure: the dialogue trees of the CEDs i) and ii) of Example 3, showing the moves of R and H across the subdialogues ℰclaim, ℰcontrast, ℰcounter, and ℰcomp.]</p>
      <p>An alternative successful CED for i) exists that includes R moving  against , followed by H moving
 , which is then defeated by R moving . However, here we can use Proposition 3-1, giving us for
sem = grounded the existence of a strategic shortcut by directly moving argument  against .</p>
    </sec>
    <sec id="sec-7">
      <title>5. Challenges for Dialogical Deontic Explanations</title>
      <p>This paper shows how to incorporate existing results in formal argumentation and refine them to
yield contrastive explanatory dialogues (CEDs) in the context of defeasible normative reasoning,
Input/Output logic in particular. This work is the first of its kind and, so, we end by highlighting
some key challenges for deontic explanations (through formal argumentation).
Challenge 1: Conflict Types The contrastive claims offered by the explainee may give rise
to various kinds of conflicts with the main claim. Two particularly interesting cases when
dealing with (conditional) norms are specificity (you are not allowed to park, unless you
are medical personnel) and contrary-to-duty (don’t be late, but if you are, not more than
10 minutes). Good explanations should make transparent the type of conflicts involved.
Challenge 2: Cognitive Adequacy A good explainer seeks to understand the explainee in
order to tailor the given explanation to precisely target the gaps in the explainee’s
understanding. For this the explainer may use queries and strategic argumentation,
complemented by a theory of (the explainee’s) mind. Moreover, the knowledge bases of the
explainer and the explainee may be disjoint and incomplete. Tailored explanations must
additionally keep track of commitments and shifts therein throughout a dialogue.</p>
      <sec id="sec-7-1">
        <title>Challenge 3: Richer Handling of Contrastives</title>
        <p>The explainee may offer contrastive claims that are, under thorough analysis, not really incompatible with the offered claim. A good
explainer should catch such cases and provide an argument concerning the compatibility
of the claims. For this, more proof-theoretic resources have to be developed. In such
cases, the explainee should be able to withdraw or replace the contrast.</p>
        <p>Challenge 4: Richer Deontic Vocabulary Often, normative codes are richer than the ones
studied here, e.g., they may contain priority orderings over norms and permissive norms.
These come with challenges, for instance concerning reinstatement (e.g., permissions
generally do not reinstate obligations). Dialogues ideally accommodate such complexity.
Challenge 5: Casuistry In many application contexts of ethical (e.g., bioethics) and legal
reasoning, we find case-based reasoning towards obligations, rights, and
permissions. Deontic explanations of such conclusions need a different conceptual base
than the one provided here, posing their own specific challenges (e.g., balancing reasons).
Acknowledgements. This work was partially funded by the “Logical Methods of Deontic
Explanations” (LoDeX) project, Deutsche Forschungsgemeinschaft, Project number 511915728.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Explanation in artificial intelligence: Insights from the social sciences</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>267</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>O.</given-names>
            <surname>Arieli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Borg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heyninck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Straßer</surname>
          </string-name>
          ,
          <article-title>Logic-based approaches to formal argumentation</article-title>
          , in:
          <string-name>
            <given-names>D.</given-names>
            <surname>Gabbay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Giacomin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Simari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Thimm</surname>
          </string-name>
          (Eds.),
          <source>Handbook of Formal Argumentation</source>
          , Volume
          <volume>2</volume>
          ,
          College Publications
          ,
          <year>2021</year>
          , pp.
          <fpage>1793</fpage>
          -
          <lpage>1898</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Makinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>van der Torre</surname>
          </string-name>
          ,
          <article-title>Constraints for Input/Output logics</article-title>
          ,
          <source>Journal of Philosophical Logic</source>
          <volume>30</volume>
          (
          <year>2001</year>
          )
          <fpage>155</fpage>
          -
          <lpage>185</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Horty</surname>
          </string-name>
          ,
          <source>Reasons as defaults</source>
          , Oxford University Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>van Berkel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Straßer</surname>
          </string-name>
          ,
          <article-title>Reasoning with and about norms in logical argumentation</article-title>
          ,
          <source>in: Proceedings of COMMA</source>
          <year>2022</year>
          , volume
          <volume>353</volume>
          , IOS Press,
          <year>2022</year>
          , pp.
          <fpage>332</fpage>
          -
          <lpage>343</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>van Berkel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Straßer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>Towards an argumentative unification of default reasoning</article-title>
          ,
          <source>in: Proceedings of COMMA</source>
          <year>2024</year>
          , IOS Press,
          <year>2024</year>
          , p.
          <source>TBA.</source>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Čyras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rago</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Albini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Baroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          ,
          <article-title>Argumentative XAI: A survey</article-title>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Amgoud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Maudet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Parsons</surname>
          </string-name>
          ,
          <article-title>Modelling dialogues using argumentation</article-title>
          ,
          <source>in: Proceedings Fourth International Conference on MultiAgent Systems</source>
          , IEEE,
          <year>2000</year>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H.</given-names>
            <surname>Prakken</surname>
          </string-name>
          ,
          <article-title>Coherence and flexibility in dialogue games for argumentation</article-title>
          ,
          <source>Journal of Logic and Computation</source>
          <volume>15</volume>
          (
          <year>2005</year>
          )
          <fpage>1009</fpage>
          -
          <lpage>1040</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Modgil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Caminada</surname>
          </string-name>
          ,
          <article-title>Proof theories and algorithms for abstract argumentation frameworks</article-title>
          ,
          <source>in: Argumentation in artificial intelligence</source>
          , Springer,
          <year>2009</year>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>129</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gabbay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Horty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Parent</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>van der Meyden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>van der Torre</surname>
          </string-name>
          ,
          <source>Handbook of Deontic Logic and Normative Systems</source>
          , Volume
          <volume>1</volume>
          ,
          College Publications
          , United Kingdom,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>O.</given-names>
            <surname>Arieli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>van Berkel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Straßer</surname>
          </string-name>
          ,
          <article-title>Defeasible normative reasoning: A proof-theoretic integration of logical argumentation</article-title>
          ,
          <source>in: Proceedings of AAAI</source>
          <year>2024</year>
          ,
          <year>2024</year>
          , pp.
          <fpage>10450</fpage>
          -
          <lpage>10458</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Dung</surname>
          </string-name>
          ,
          <article-title>On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>77</volume>
          (
          <year>1995</year>
          )
          <fpage>321</fpage>
          -
          <lpage>357</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lipton</surname>
          </string-name>
          ,
          <article-title>Contrastive explanation</article-title>
          ,
          <source>Royal Inst. of Philosophy Suppl.</source>
          <volume>27</volume>
          (
          <year>1990</year>
          )
          <fpage>247</fpage>
          -
          <lpage>266</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>U. D.</given-names>
            <surname>Leibowitz</surname>
          </string-name>
          ,
          <article-title>Scientific explanation and moral explanation</article-title>
          ,
          <source>Noûs</source>
          <volume>45</volume>
          (
          <year>2011</year>
          )
          <fpage>472</fpage>
          -
          <lpage>503</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Scriven</surname>
          </string-name>
          ,
          <article-title>Explanations, predictions, and laws</article-title>
          ,
          <source>in: Minnesota Studies in the Philosophy of Science</source>
          , Vol.
          <volume>3</volume>
          , University of Minnesota Press, Minneapolis,
          <year>1962</year>
          , pp.
          <fpage>170</fpage>
          -
          <lpage>230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.</given-names>
            <surname>Väyrynen</surname>
          </string-name>
          ,
          <article-title>Normative explanation and justification</article-title>
          ,
          <source>Noûs</source>
          <volume>55</volume>
          (
          <year>2021</year>
          )
          <fpage>3</fpage>
          -
          <lpage>22</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <article-title>Manifest rationality: A pragmatic theory of argument</article-title>
          , Routledge,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>