<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards Deontic Explanations Through Dialogue</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Kees</forename><surname>Van Berkel</surname></persName>
							<email>kees.van.berkel@tuwien.ac.at</email>
							<affiliation key="aff0">
								<orgName type="department">Institute for Logic and Computation</orgName>
								<orgName type="institution">TU Wien</orgName>
								<address>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Christian</forename><surname>Straßer</surname></persName>
							<email>christian.strasser@rub.de</email>
							<affiliation key="aff1">
								<orgName type="department">Institute of Philosophy II</orgName>
								<orgName type="institution">Ruhr University Bochum</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Towards Deontic Explanations Through Dialogue</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">82C7CA0B7B8806A456017930DDD6DA6A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Defeasible normative reasoning</term>
					<term>Contrastive deontic explanations</term>
					<term>Logical argumentation</term>
					<term>Dialogues</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Deontic explanations answer why-questions concerning agents' obligations and permissions. Normative systems are notoriously conflict-sensitive, making contrastive explanations pressing: "Why am I obliged to do 𝜙, despite my (seemingly) conflicting obligation to do 𝜓?" In this paper, we develop a model of contrastive explanatory dialogues for the well-established defeasible reasoning formalism Input/Output logic. Our model distinguishes between successful, semi-successful, and unsuccessful deontic dialogues. We prove that the credulous and skeptical (under shared reasons) entailment relations of Input/Output logic can be characterized in formal argumentation using preferred and grounded semantics. This result allows us to leverage known results for dialogue models of the latter two semantics. Since this work is the first of its kind, we discuss five key challenges for deontic explanations through dialogue.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Norms are indispensable in many aspects of society, ranging from law and ethics to business protocols and AI. They motivate, guide, and regulate agents, whether they are human or artificial. Often, agents affected by norms not only need to know that they are bound by obligations or that they may appeal to rights: they also need to understand why. Such understanding may enhance compliance and collaboration and is especially pressing when conflicts between norms arise. For instance, I may want to know why I may overtake on the left, despite being obliged to drive on the right. Here, a good explanation not only explains that I am permitted, but also why the obligation to the contrary does not currently apply: the permission is an exception to the obligation. Answers to this type of why-question are called deontic explanations.</p><p>Deontic logic is the well-established field exploring formal methods to model normative reasoning. However, the focus has been nearly exclusively on formal systems that determine which obligations and permissions can be inferred from a normative system, rather than on explaining why. This gap is remarkable, especially given the increasingly vital role that normative systems play in alignment and compliance requirements for AI. This paper investigates how knowledge representation methods can be used to generate explanatory deontic dialogues.</p><p>The demand for explanatory models in AI is increasing <ref type="bibr" target="#b0">[1]</ref> and formal argumentation provides a promising method in this respect. First of all, formal argumentation has proven to be a unifying framework for nonmonotonic reasoning <ref type="bibr" target="#b1">[2]</ref>.
In particular, two central paradigms of defeasible reasoning, constrained Input/Output (I/O) logic <ref type="bibr" target="#b2">[3]</ref> and default logic <ref type="bibr" target="#b3">[4]</ref>, can be argumentatively characterized <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. Second, a wide variety of methods has been proposed in Argumentation for Explainable AI (ArgXAI, <ref type="bibr" target="#b6">[7]</ref>). Finally, dialogue models and argumentation games <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10]</ref> offer dynamic characterizations of formal argumentation that have the potential to yield interactive (or even tailor-made) explanatory episodes through dialogues.</p><p>Once a given nonmonotonic logic is represented in logical argumentation, such as I/O or default logic, dialogical methods can be leveraged for explanatory purposes. However, a first obstacle, in this respect, is that most characterization results are shown with respect to stable semantics (including <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>; also see <ref type="bibr" target="#b1">[2]</ref>), whereas other semantics such as preferred, admissible, and grounded are more suitable for dialogical generalization. In brief, the problem with stable extensions is that they reference the entire set of arguments (each argument is either 'in' or 'out'), while we expect explanatory dialogues to focus on reasons relevant to the explanatory purpose. Furthermore, defining dialogue models and argumentation games for skeptical reasoning is challenging (in the context of multi-extension semantics such as preferred; cf. <ref type="bibr" target="#b9">[10]</ref>).</p><p>Contributions.
We provide dialogue models for one of the central defeasible normative reasoning formalisms in the literature: Input/Output logic <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b10">11]</ref>. Unfortunately, the original formalism does not naturally lend itself to explanatory reasoning. Recently, a highly modular rule-based proof system -the Deontic Argumentation Calculus (DAC) -was developed with the aim of making I/O suitable for explanatory purposes and it was shown that DAC-induced argumentation frameworks are sound and complete for a large class of constrained I/O logics <ref type="bibr" target="#b4">[5]</ref> and default logic <ref type="bibr" target="#b5">[6]</ref>. Despite these promising results, the correspondences were only obtained for stable semantics, making them seemingly unsuitable for dialogical deontic explanations.</p><p>In this article, we extend these results with a model of dialogue episodes for deontic explanations:</p><p>(1) As a preparatory step, we first prove that for DAC-induced argumentation frameworks the stable and the preferred semantics coincide. This allows us to use well-developed preferred dialogue models in the context of I/O reasoning. (2) Furthermore, we lift recent results <ref type="bibr" target="#b11">[12]</ref> that show that the 'free consequences' of skeptical entailment under stable semantics are identical to entailment under the grounded semantics.</p><p>In other words, we may also use grounded dialogue models for I/O reasoning. (3) Using (1) and (2), we enhance dialogue models and define contrastive deontic explanations that explain certain obligations in contrast to seeming obligations to the contrary.</p><p>Outline. Section 2 introduces the DAC formalism. In Section 3, we define DAC-induced argumentation frameworks and prove that stable equals preferred and the free consequences correspond to grounded entailment. We harness these results to specify contrastive dialogue models in Section 4.
This paper lays the foundations for a more extensive study of dialogue models of deontic explanation and, for this reason, we discuss five key challenges in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Preliminaries: A Deontic Argumentation Calculus (DAC)</head><p>We recall the basics of the Deontic Argumentation Calculus (DAC). Although the results in this paper hold for a range of languages, base logics, and DAC systems, for readability we assume a propositional language ℒ and classical logic L, and illustrate our approach for one DAC system from <ref type="bibr" target="#b4">[5]</ref>. To enhance explainability, ℒ is labeled and augmented with a language of norms: Labeled propositional languages: ℒ 𝑖 = {𝜙 𝑖 | 𝜙 ∈ ℒ} where 𝑖 ∈ {𝑓, 𝑜, 𝑐}.</p></div>
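To fix intuitions, the ingredients of this setup can be mirrored in a small data model. The following Python sketch is ours, not part of DAC: formulas are encoded as strings, labels as tags in {'f', 'o', 'c'}, and norms as pairs, with the concrete instance taken from Chisholm's scenario discussed in Example 1 below.

```python
from dataclasses import dataclass

# A labeled formula is a pair (phi, i) with i in {'f', 'o', 'c'};
# a norm (phi, psi) reads "given phi, it is obligatory that psi".
# 'T' stands for the tautology, '~' for negation.
Labeled = tuple[str, str]
Norm = tuple[str, str]

@dataclass(frozen=True)
class NormativeSystem:
    facts: frozenset        # F: factual context (f-labeled formulas)
    norms: frozenset        # N: normative code (pairs of formulas)
    constraints: frozenset  # C: constraints (c-labeled formulas)

# Chisholm's scenario (Example 1 below): help the neighbors (T, h);
# if you help, tell them (h, t); if you do not, do not tell (~h, ~t).
# Billie does not help (fact ~h), and the output must be consistent
# with that violation (constraint ~h).
K = NormativeSystem(
    facts=frozenset({('~h', 'f')}),
    norms=frozenset({('T', 'h'), ('h', 't'), ('~h', '~t')}),
    constraints=frozenset({('~h', 'c')}),
)
```

The frozen dataclass keeps a normative system immutable, matching its role as a fixed input to the reasoning process.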
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Norm languages: ℒ</head><formula xml:id="formula_0">𝑛 = {(𝜙, 𝜓) | 𝜙, 𝜓 ∈ ℒ} and ℒ 𝑛 = {¬Δ | ∅ ⊂ Δ ⊆ ℒ 𝑛 , Δ is finite}.</formula><p>Normative systems: 𝒦 = ⟨ℱ, 𝒩 , 𝒞⟩ is a normative system, where ℱ ⊆ ℒ 𝑓 is a factual context, 𝒩 ⊆ ℒ 𝑛 a normative code, and 𝒞 ⊆ ℒ 𝑐 a set of constraints (and ℱ and 𝒞 are L-consistent).</p><p>Labels explicate the roles that propositional formulas adopt in the reasoning process: 𝜙 𝑓 denotes that 𝜙 is a fact, 𝜙 𝑜 that 𝜙 is obligatory, and 𝜙 𝑐 that obligations must be consistent with 𝜙. We take (𝜙, 𝜓) ∈ ℒ 𝑛 to express the norm "given 𝜙, it is obligatory that 𝜓" and ¬Δ ∈ ℒ 𝑛 is read as "the norms in Δ are jointly inapplicable." For ¬{(𝜙, 𝜓)}, we simply write ¬(𝜙, 𝜓). The latter type of expression plays an essential role in defeasible reasoning with norms. The entire enhanced I/O language is defined as the union</p><formula xml:id="formula_1">ℒ 𝑖𝑜 = ℒ 𝑓 ∪ ℒ 𝑜 ∪ ℒ 𝑐 ∪ ℒ 𝑛 ∪ ℒ 𝑛 .</formula><p>We write Γ 𝑖 , Δ 𝑖 , . . . for finite sets of 𝑖-labeled formulas, where 𝑖 ∈ {𝑓, 𝑜, 𝑐}. We write Γ, Δ, . . . for any finite subset of ℒ 𝑖𝑜 and Δ ↓ for a set Δ ⊆ ℒ 𝑖 stripped of its label 𝑖 ∈ {𝑓, 𝑜, 𝑐}.</p></div>
We take facts, constraints, and norms as reasons and differentiate two types of argument:</p><formula xml:id="formula_2">𝜙 𝑓 , (𝜙, 𝜓) ⇒ 𝜓 𝑜 and 𝜙 𝑓 , ¬𝜓 𝑐 ⇒ ¬(𝜙, 𝜓)</formula><p>The first type (left) contains arguments providing reasons for obligations, where the fact 𝜙 𝑓 and norm (𝜙, 𝜓) provide reasons for the obligation 𝜓 𝑜 . The second type (right) contains arguments that attack reasons, expressing which norms are inapplicable in the given context, where given 𝜙 𝑓 , the norm (𝜙, 𝜓) is inapplicable since its detachable obligation is inconsistent with the constraint ¬𝜓 𝑐 . The latter type attacks all arguments using (𝜙, 𝜓) as a reason.</p><p>A DAC is a sequent-style, that is, rule-based proof system for deriving these two types of argument <ref type="bibr" target="#b4">[5]</ref>. We assume that LC is the sound and complete sequent calculus for L.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Deontic Argumentation Calculus (DAC):</head><p>Let DAC be a system consisting of the rules Ax, FDet, DDet, Con, Ina, InaC, Taut, and Cut (Figure <ref type="figure" target="#fig_0">1</ref>). A DAC-derivation of Γ ⇒ Δ is a tree-like structure whose leaves are initial sequents, whose root is Γ ⇒ Δ, and whose rule applications are instances of the rules of DAC. We say Γ ⇒ Δ is DAC-derivable (written ⊢ DAC Γ ⇒ Δ) whenever there exists a DAC-derivation for it, Γ ⊆ ℒ 𝑖𝑜 , and Δ ⊆ ℒ 𝑖𝑜 contains at most one formula. We say</p><formula xml:id="formula_3">Γ ⇒ Δ is 𝒦-based whenever Γ ⊆ ℱ ∪ 𝒩 ∪ 𝒞.</formula><p>There are three initial sequent rules: Ax introduces labeled versions of any classically derivable Γ ⇒ Δ to a DAC-derivation (and so LC rules are not part of DAC). Taut guarantees that all propositional tautologies are among the output. FDet expresses factual detachment and gives an initial explanatory argument stating that the fact 𝜙 𝑓 and the norm (𝜙, 𝜓) are reasons for concluding the obligation 𝜓 𝑜 . DDet corresponds to deontic detachment and makes it possible for a norm to be triggered by obligations detached from other norms (see Ex. 1). The rules Con, Ina, and InaC deal with the defeasibility of normative reasoning and yield attacking arguments. The Con rule expresses the consistency constraint that if Γ constitutes reasons for 𝜙 𝑜 , then Γ is inconsistent with the constraint ¬𝜙 𝑐 (where an empty right-hand side denotes inconsistent reasons). We also refer to Γ ⇒ as an inconsistent argument. When an argument expresses inconsistent reasons, at least one of its involved norms is inapplicable (Ina) and all involved norms are jointly inapplicable (InaC).
We refer to <ref type="bibr" target="#b4">[5]</ref> for other DAC systems.</p><formula xml:id="formula_4">⊢ LC Γ ⇒ Δ Ax Γ 𝑖 ⇒ Δ 𝑖 , 𝑖 ∈ {𝑓, 𝑜, 𝑐} and Γ, Δ ⊆ ℒ Taut ⇒ (⊤, ⊤) FDet 𝜙 𝑓 , (𝜙, 𝜓) ⇒ 𝜓 𝑜 𝜙 𝑓 , Γ ⇒ Δ DDet 𝑎 𝜙 𝑜 , Γ ⇒ Δ Γ ⇒ 𝜙 𝑜 Con Γ, (¬𝜙) 𝑐 ⇒ Γ, (𝜙, 𝜓) ⇒ Ina Γ ⇒ ¬(𝜙, 𝜓) Γ ⇒ InaC Γ ∖ ℒ 𝑛 ⇒ ¬(Γ ∩ ℒ 𝑛 ) Γ ⇒ 𝜙 𝜙, Γ ′ ⇒ Δ Cut 𝑏 Γ, Γ ′ ⇒ Δ</formula><p>Example 1. We look at Chisholm's scenario <ref type="bibr" target="#b10">[11]</ref>, an archetype of contrary-to-duty reasoning. Billie is obligated to go and help her neighbors (⊤, ℎ) (⊤ denotes that ℎ is detached by default). If Billie goes to help, she must tell the neighbors she goes (ℎ, 𝑡), otherwise she ought not to tell them she goes (¬ℎ, ¬𝑡). Suppose that Billie does not go and help ¬ℎ 𝑓 and, so, violates the default duty in (⊤, ℎ). To know what Billie must do in light of her violation ¬ℎ 𝑓 , the constraint is imposed that the obligations must be consistent with the fact that Billie does not help ¬ℎ 𝑐 <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b10">11]</ref>. Let ℱ = {¬ℎ 𝑓 }, 𝒩 = {(⊤, ℎ), (ℎ, 𝑡), (¬ℎ, ¬𝑡)}, and 𝒞 = {¬ℎ 𝑐 } be the normative system 𝒦. The desired outcome is that Billie ought not to tell the neighbors she goes ¬𝑡 𝑜 given that she does not go.</p><p>Argument 𝑑 (below left), stating that Billie ought to tell, is derived with deontic detachment. Argument 𝑒 (below right), expresses the inapplicability of ¬(⊤, ℎ) given the set constraint. 
Similar reasoning gives the inconsistent argument 𝑥 = ¬ℎ 𝑓 , (⊤, ℎ), (ℎ, 𝑡), (¬ℎ, ¬𝑡) ⇒ , which with Con, Cut, and InaC derives the unattackable 𝑥 ′ = ¬ℎ 𝑓 ⇒ ¬{(⊤, ℎ), (ℎ, 𝑡), (¬ℎ, ¬𝑡)}.</p><formula xml:id="formula_5">FDet ⊤ 𝑓 , (⊤, ℎ) ⇒ ℎ 𝑜 FDet ℎ 𝑓 , (ℎ, 𝑡) ⇒ 𝑡 𝑜 DDet ℎ 𝑜 , (ℎ, 𝑡) ⇒ 𝑡 𝑜 Cut 𝑏 = ⊤ 𝑓 , (⊤, ℎ), (ℎ, 𝑡) ⇒ 𝑡 𝑜 FDet ⊤ 𝑓 , (⊤, ℎ) ⇒ ℎ 𝑜 Con ⊤ 𝑓 , (¬ℎ) 𝑐 , (⊤, ℎ) ⇒ Ina 𝑒 = ⊤ 𝑓 , (¬ℎ) 𝑐 ⇒ ¬(⊤, ℎ)</formula><p>For the sake of completeness, we recall the I/O system out 3 here and some known results <ref type="bibr" target="#b4">[5]</ref>.</p><formula xml:id="formula_6">Proposition 1 ([5]). Let 𝒦 ↓ = ⟨ℱ, 𝒩 , 𝒞⟩ be 𝒦 stripped of its labels. Let Δ ⊆ ℒ, 𝐶𝑛(Δ) = {𝜙 | Δ ⊢ L 𝜙}, and out(𝒩 , Δ) = 𝐶𝑛({𝜙 | (𝜓, 𝜙) ∈ 𝒩 and 𝜓 ∈ 𝐶𝑛(Δ)}). Let out 3 (𝒩 , ℱ) = ⋃︀ 𝑖≥0 𝑂 𝑖 , where 𝑂 0 = out(𝒩 , ℱ) and 𝑂 𝑖+1 = 𝐶𝑛(𝑂 𝑖 ∪ out(𝒩 , 𝑂 𝑖 ∪ ℱ)). In words, out 3 is a closure of 𝒩 under successive (deontic) detachment with respect to ℱ. Then 𝒩 ⊆ ℒ 𝑛 is 𝒞-consistent in 𝒦, if ⊥ / ∈ 𝐶𝑛(out 3 (𝒩 , ℱ) ∪ 𝒞). Let Θ ⊆ 𝒩 ⊆ ℒ 𝑛 , Δ ⊆ ℱ ⊆ ℒ</formula><p>and Ω ⊆ 𝒞 ⊆ ℒ, we have:</p><formula xml:id="formula_7">1. 𝜙 ∈ out 3 (Θ, Δ) iff ⊢ DAC Θ, Δ 𝑓 ⇒ 𝜙 𝑜 ; 2. Θ is 𝒞-inconsistent iff there are Δ ⊆ ℱ and Ω ⊆ 𝒞 for which ⊢ DAC Θ, Δ 𝑓 , Ω 𝑐 ⇒ ; 3. ⊥ ∈ 𝐶𝑛(out 3 (Θ, Δ) ∪ Ω) iff for all (𝜙, 𝜓) ∈ Θ, ⊢ DAC Θ ∖ {(𝜙, 𝜓)}, Δ 𝑓 , Ω 𝑐 ⇒ ¬(𝜙, 𝜓).</formula></div>
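The closure out 3 of Proposition 1 can be computed as a fixed point. The sketch below is ours and restricts Cn to a toy literal-only fragment (formulas are literals such as 'h' and '~h', 'T' is the tautology, and 'F' flags inconsistency); a faithful implementation would replace cn with full classical entailment.

```python
def neg(l):
    """Negation of a literal: '~h' <-> 'h'."""
    return l[1:] if l.startswith('~') else '~' + l

def cn(delta):
    """Toy consequence operator Cn for a literal-only fragment: the set
    itself, the tautology 'T', and 'F' when complementary literals occur.
    A real implementation would use classical propositional entailment."""
    closed = set(delta) | {'T'}
    if any(neg(l) in closed for l in closed if l not in ('T', 'F')):
        closed.add('F')
    return closed

def out(norms, delta):
    """out(N, D) = Cn({psi | (phi, psi) in N and phi in Cn(D)})."""
    return cn({psi for (phi, psi) in norms if phi in cn(delta)})

def out3(norms, facts):
    """Iterate detachment to a fixed point: O_0 = out(N, F) and
    O_{i+1} = Cn(O_i ∪ out(N, O_i ∪ F)), as in Proposition 1."""
    o = out(norms, facts)
    while True:
        o_next = cn(o | out(norms, o | set(facts)))
        if o_next == o:
            return o
        o = o_next
```

On Chisholm's scenario (Example 1), the maximal consistent subset {('~h', '~t')} yields the desired output ~t, while the full norm set produces an output that is inconsistent with the constraint ~h.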
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Formal Argumentation with DAC-arguments</head><p>We use formal argumentation <ref type="bibr" target="#b12">[13]</ref> to capture the defeasibility of normative reasoning and to explicate norm conflicts <ref type="bibr" target="#b4">[5]</ref>. An Argumentation Framework <ref type="bibr" target="#b12">[13]</ref> contains a set of arguments and an attack relation between arguments, where semantics stipulate conditions under which sets of arguments are jointly acceptable. We instantiate such frameworks with DAC-arguments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>DAC-induced Argumentation Frameworks</head><p>Let 𝒦 = ⟨ℱ, 𝒩 , 𝒞⟩ be a normative system. A DAC-induced argumentation framework 𝒜ℱ(𝒦) = ⟨Arg, Att⟩ is defined as follows: • 𝒜ℱ |∼ ∩rea sem 𝜙 iff there is an 𝑎 contained in every sem-extension that concludes 𝜙; • 𝒜ℱ |∼ ∪ sem 𝜙 iff there is a sem-extension ℰ for which there is an 𝑎 ∈ ℰ concluding 𝜙.</p><formula xml:id="formula_8">• Δ ⇒ Γ ∈ Arg iff Δ ⇒ Γ is DAC-</formula><p>|∼ ∩rea sem captures the shared arguments (shared reasons) by all sem-extensions. The resulting conclusions are called the free consequences of 𝒦, which are obligations from unproblematic norms compatible with any sem-extension. Credulous entailment |∼ ∪ sem captures the existence of reasons in favor of a conclusion for some sem-extension, expressing a defensible stance.</p><p>Example 2. The partial 𝒜ℱ in Figure <ref type="figure" target="#fig_1">2</ref> captures the scenario from Ex. 1. There is only one stable extension {𝑎, 𝑏, 𝑔, 𝑥 ′ } (the arrows from 𝑏, 𝑒, 𝑓, and 𝑔 to 𝑥 are implicit), which is also the grounded extension (cf. Prop. 3-2 below). We may, thus, conclude 𝒜ℱ |∼(¬𝑡) 𝑜 (where |∼ ∈ {|∼ ⋆ sem | ⋆ ∈ {∩rea, ∪} and sem ∈ {stable, grounded}}). As desired, since Billie does not go to help her neighbors, she ought not to tell them she is coming. Billie may now ask "Why am I obliged to not tell my neighbors, despite my seeming duty to tell them I am coming to help?" To this we turn next.</p><p>We recall [5, Theorem 2] that for the system adopted in this paper, DAC-induced 𝒜ℱs are sound and complete for the system out 3 of constrained Input/Output logic <ref type="bibr" target="#b2">[3]</ref>.</p><formula xml:id="formula_9">Proposition 2. Let 𝒦 = ⟨ℱ, 𝒩 , 𝒞⟩ and let maxfam(𝒦) = {𝒩 ′ ⊆ 𝒩 | ⊥ ̸ ∈ 𝐶𝑛(out 3 (𝒩 ′ , ℱ ↓ ) ∪ 𝒞 ↓</formula><p>) and for each 𝒩 ′ ⊂ 𝒩 ′′ , ⊥ ∈ 𝐶𝑛(out 3 (𝒩 ′′ , ℱ ↓ ) ∪ 𝒞 ↓ )} be the set of maximal consistent sets of norms over 𝒦. 
Let 𝒜ℱ be induced by DAC and 𝒦 with the set of stable extensions stable(𝒜ℱ): Our aim is to employ dialogue models for contrastive deontic explanations and for this we need some additional results. Since the grounded extension is unique <ref type="bibr" target="#b12">[13]</ref>, |∼ ∩rea grounded and |∼ ∪ grounded coincide and we simply write |∼ grounded . The proofs below do not reference specific DAC-rules (outside the base system <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b11">12]</ref>), and thus generalize to all DAC systems in <ref type="bibr" target="#b4">[5]</ref> (augmented with InaC). Proposition 3 tells us that for reasoning about the free consequences under the stable semantics, it suffices to reason with the grounded semantics. Proposition 4 shows that the preferred and stable semantics coincide for DAC. Consequently, these propositions allow us to apply well-developed dialogue techniques to DAC for skeptical (in terms of free consequences) and credulous reasoning under the grounded and preferred semantics, respectively.</p><formula xml:id="formula_10">1. If 𝒩 ′ ∈ maxfam(𝒦), then Arg(ℱ ∪ 𝒩 ′ ∪ 𝒞) ∈ stable(𝒜ℱ); 2. If 𝒜 ∈ stable(𝒜ℱ), then there is an 𝒩 ′ ∈ maxfam(𝒦) for which 𝒜 = Arg(ℱ ∪ 𝒩 ′ ∪ 𝒞). 𝑎 = [︂ (¬ℎ) 𝑓 , (¬ℎ, ¬𝑡) ⇒ (¬𝑡) 𝑜 ]︂ 𝑏 = [︂ (¬ℎ) 𝑐 ⇒ ¬(⊤, ℎ) ]︂ 𝑑 = [︂ (⊤, ℎ), (ℎ, 𝑡) ⇒ 𝑡 𝑜 ]︂ 𝑐 = [︂ (⊤, ℎ) ⇒ ℎ 𝑜 ]︂ 𝑒 = [︂ (¬ℎ) 𝑓 , (⊤, ℎ), (ℎ, 𝑡) ⇒ ¬(¬ℎ, ¬𝑡) ]︂ 𝑓 = [︂ (¬ℎ) 𝑓 , (⊤, ℎ), (¬ℎ, ¬𝑡) ⇒ ¬(ℎ, 𝑡) ]︂ 𝑔 = [︂ (¬ℎ) 𝑓 , (ℎ, 𝑡), (¬ℎ, ¬𝑡) ⇒ ¬(⊤, ℎ) ]︂ 𝑥 ′ 𝑥</formula><p>Proposition 3. Let 𝒜ℱ be a DAC-induced 𝒜ℱ for 𝒦 = ⟨ℱ, 𝒩 , 𝒞⟩ and let stb(𝒜ℱ) and grd(𝒜ℱ) be the set of stable extensions and the grounded extension of 𝒜ℱ, respectively:</p><p>1. 𝑎 ∈ ⋂︀ stb(𝒜ℱ) iff every defeater 𝑏 ∈ Arg of 𝑎 is 𝒞-inconsistent (i.e., it is defeated by an argument 𝑐 ∈ Arg(ℱ ∪ 𝒞)).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">grd(𝒜ℱ) =</head><p>⋂︀ stb(𝒜ℱ) = 𝒢 2 = Defended(Arg(ℱ ∪ 𝒞)) (and so |∼ grounded = |∼ ∩rea stable ). 1 Proof. Ad 1.</p><formula xml:id="formula_11">Let 𝑎 = Δ 𝑓 1 , Θ 1 , Γ 𝑐 1 ⇒ Σ ∈ Arg(𝒦). Left-to-Right. Consider a defeater 𝑏 = Δ 𝑓 2 , Θ 2 , Γ 𝑐 2 ⇒ ¬(𝜙, 𝜓) of 𝑎. By Proposition 1, Θ 2 ∪ {(𝜙, 𝜓)} is 𝒞-inconsistent in 𝒦. Since 𝑎 ∈</formula><p>⋂︀ stb(𝒜ℱ) and (𝜙, 𝜓) ∈ prem(𝑎), and by Proposition 3, Θ 2 is not contained in a consistent set of norms in 𝒦 and it is therefore inconsistent. By Proposition 1, there are</p><formula xml:id="formula_12">Δ 𝑓 3 ∪ Γ 𝑐 3 ⊆ ℱ ∪ 𝒞 such that Δ 𝑓 3 , Γ 𝑐 3 ⇒ ¬Θ 2 defeats 𝑏. Right-to-Left.</formula><p>It is easy to see that ⋂︀ stb(𝒜ℱ) contains every argument it defends. Suppose now that 𝑏 = Γ ⇒ Δ is 𝒞-inconsistent. By Proposition 1, there is a 𝑐 = Ω ⇒ ¬(Γ ∩ ℒ 𝑛 ) that defeats 𝑏 and for which Ω ∩ ℒ 𝑛 = ∅. Since 𝑐 has no defeaters, 𝑐 ∈ ⋂︀ stb(𝒜ℱ). So, ⋂︀ stb(𝒜ℱ) defends 𝑎 and therefore 𝑎 ∈ ⋂︀ stb(𝒜ℱ). Ad 2. Left-to-Right. Straightforward. We show Right-to-Left. Let 𝑎 ∈ ⋂︀ stb(𝒜ℱ). By Item 1, 𝑎 is defended by Arg(ℱ ∪ 𝒞). Clearly, Arg(ℱ ∪ 𝒞) ⊆ 𝒢 1 ⊆ grd(𝒜ℱ) since arguments in this set do not have defeaters. So, 𝑎 ∈ 𝒢 2 ⊆ grd(𝒜ℱ). 1 Recall that grd(𝒜ℱ) = ⋃︀ 𝑖≥0 𝒢𝑖 where 𝒢0 = ∅ and 𝒢𝑖+1 = Defended(𝒢𝑖). The proposition states the computationally interesting result that the fixed-point construction of the grounded extension terminates on the second iteration. Proposition 4. Let 𝒜ℱ be a DAC-induced 𝒜ℱ for 𝒦 = ⟨ℱ, 𝒩 , 𝒞⟩ and let 𝒜 ⊆ Arg: 𝒜 is a stable extension iff 𝒜 is a preferred extension (and so |∼ * preferred = |∼ * stable , for * ∈ {∪, ∩rea}).</p><p>Proof. Left-to-Right. Lemma 15 in <ref type="bibr" target="#b12">[13]</ref>. Right-to-Left. Suppose not; then there is an 𝑎 ∈ Arg(𝒦) ∖ 𝒜 but no 𝑏 ∈ 𝒜 for which (𝑏, 𝑎) ∈ Att. Since 𝒜 is preferred it is conflict-free.
By Proposition 2, 𝒩 ′ = {(𝜙, 𝜓) | (𝜙, 𝜓) ∈ Δ ∩ ℒ 𝑛 and Δ ⇒ Γ ∈ 𝒜} is 𝒞-consistent and so 𝒩 ′ ⊆ 𝒩 ′′ for some 𝒩 ′′ ∈ maxfam(𝒦) and there is a stable extension 𝒜 ′ = Arg(ℱ ∪ 𝒩 ′′ ∪ 𝒞). Clearly, Arg(ℱ ∪ 𝒩 ′ ∪ 𝒞) ⊆ Arg(ℱ ∪ 𝒩 ′′ ∪ 𝒞) and so, 𝒜 is not maximally admissible. Contradiction.</p></div>
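Propositions 3 and 4 are easy to check computationally on the framework of Figure 2. The sketch below is ours: it hard-codes the defeat relation of that figure (with 𝑥′ written as xp and the implicit defeats of 𝑥 included), computes the grounded extension by iterating Defended from the empty set, and enumerates stable extensions naively, which is fine for nine arguments.

```python
from itertools import chain, combinations

# Defeat relation of the DAC-induced AF in Figure 2; an edge (u, v)
# means u defeats v. An argument concluding ¬Δ defeats every argument
# whose premises include Δ.
args = {'a', 'b', 'c', 'd', 'e', 'f', 'g', 'x', 'xp'}
att = ({('b', v) for v in 'cdefx'}
       | {('e', v) for v in 'afgx'}
       | {('f', v) for v in 'degx'}
       | {('g', v) for v in 'cdefx'}
       | {('xp', 'x')})

def defended(S):
    """Arguments all of whose defeaters are defeated by S."""
    return {a for a in args
            if all(any((s, b) in att for s in S)
                   for b in args if (b, a) in att)}

def grounded():
    """Least fixed point of Defended, starting from the empty set [13]."""
    g = set()
    while (nxt := defended(g)) != g:
        g = nxt
    return g

def is_stable(S):
    """S is stable iff it is conflict-free and defeats all outsiders."""
    conflict_free = not any((u, v) in att for u in S for v in S)
    return conflict_free and all(any((u, v) in att for u in S)
                                 for v in args - S)

def stable_extensions():
    """Naive exponential enumeration over all subsets of args."""
    candidates = chain.from_iterable(combinations(sorted(args), r)
                                     for r in range(len(args) + 1))
    return [set(c) for c in candidates if is_stable(set(c))]
```

On this framework the grounded extension {a, b, g, xp} is also the unique stable (hence preferred) extension, and the fixed point is reached at the second iteration, in line with the footnote to Proposition 3.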
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Dialogues and Contrastive Explanations</head><p>We now provide dialogue models for contrastive deontic explanations. A contrastive explanatory dialogue starts with a command "𝜙 𝑜 !" issued by the explainer, immediately followed by the explainee asking a question of the form: "Why 𝜙 𝑜 , despite 𝜓 𝑜 ?" Due to the conflict sensitivity of norm systems <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b10">11]</ref>, we consider contrastive why-questions as the starting point of explanatory episodes <ref type="bibr" target="#b13">[14]</ref>. We refer in what follows to 𝜙 𝑜 as the claim and to 𝜓 𝑜 as the counter-claim. <ref type="foot" target="#foot_0">2</ref> We do not assume that 𝜙 𝑜 and 𝜓 𝑜 are derivable from the given normative system 𝒦, nor do we assume that there is a dialectical relation between 𝜙 𝑜 and 𝜓 𝑜 , referred to as the contrastive link. Both must become explicit (if they exist) through the dialogue itself. We say there exists a contrastive link when two arguments 𝑎 and 𝑏 exist concluding 𝜙 𝑜 and 𝜓 𝑜 , respectively, which are incompatible, meaning that there is no stable extension containing both. In such a case, an incompatibility argument can be provided using the premises in 𝑎 and 𝑏: Proposition 5. Let 𝒦 = ⟨ℱ, 𝒩 , 𝒞⟩ and 𝒜ℱ(𝒦) = ⟨Arg, Att⟩. For any two arguments 𝑎 = Δ ⇒ 𝜙 𝑜 , 𝑏 = Γ ⇒ 𝜓 𝑜 ∈ Arg: there is no stable extension 𝒮 with 𝑎, 𝑏 ∈ 𝒮 iff there is a DAC-derivable argument Δ, Γ, Ω ⇒ with Ω ⊆ 𝒞 (we call 𝑎 and 𝑏 𝒞-incompatible).</p><p>Proof. Left-to-Right. Let 𝒩 ′ = (Γ ∪ Δ) ∩ ℒ 𝑛 , and ℱ ′ = {𝜙 | 𝜙 𝑓 ∈ Γ ∪ Δ}. By Prop. 3, there is no ℳ ∈ maxfam(𝒦) with 𝒩 ′ ⊆ ℳ. So, there is an Ω ⊆ 𝒞 for which ⊥ ∈ 𝐶𝑛(out 3 (𝒩 ′ , ℱ ′ )∪Ω). By Prop. 1 and a Cut application, ⊢ DAC 𝒩 ′ , ℱ ′ , Ω ⇒ . Right-to-Left. Straightforward.</p><p>An explanatory dialogue addressing "Why 𝜙 𝑜 , despite 𝜓 𝑜 ?"
is successful whenever it contains c1 an argument 𝑎 for 𝜙 𝑜 and a demonstration that all (indirect) objections to 𝑎 can be met; c2 an argument 𝑏 for 𝜓 𝑜 such that 𝑎 and 𝑏 are 𝒞-incompatible (recall Prop. 5); c3 an argument 𝑐 defeating 𝑏 and a demonstration that all (indirect) objections to 𝑐 can be met; c4 a demonstration that the demonstrations in c1 and c3 are 𝒞-compatible.</p><p>Informally, c1 provides the 'illative explanation' of "𝜙 𝑜 !" by stating 𝑎 containing the facts and norms in view of which 𝜙 𝑜 holds. It also provides the 'dialectic explanation' of 𝜙 𝑜 by refuting all Example 3. Let sem ∈ {preferred,grounded}: Figure i) provides a successful saturated sem-CED for "Why ¬𝑡 𝑜 , despite 𝑡 𝑜 ?"; notice that ℰ comp is empty since ℰ R = {𝑎, 𝑏} is 𝒞-compatible. Figure <ref type="figure">ii</ref>) contains an unsuccessful saturated sem-CED for "Why 𝑡 𝑜 , despite ¬𝑡 𝑜 ?" where H refutes the claim (with 𝑏) and defends the counter-claim (with 𝑎 and 𝑏). The arguments in i) and ii) reference those in Fig. <ref type="figure" target="#fig_1">2</ref> of Ex. 2 (for space reasons, we adopted an example for which grounded equals stable).</p><p>An alternative successful CED for i) exists that includes R moving 𝑔 against 𝑒, followed by H moving 𝑓 , which is then defeated by R moving 𝑏. However, here we can use Proposition 3-1, giving us for sem = grounded the existence of a strategic shortcut by directly moving argument 𝑏 against 𝑒.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Challenges for Dialogical Deontic Explanations</head><p>This paper shows how to incorporate existing results in formal argumentation and refine them to yield contrastive explanatory dialogues (CEDs) in the context of defeasible normative reasoning, Input/Output logic in particular. This work is the first of its kind and, so, we end by highlighting some key challenges for deontic explanations (through formal argumentation).</p><p>Challenge 1: Conflict Types The contrastive claims offered by the explainee may give rise to various kinds of conflicts with the main claim. Two particularly interesting cases when dealing with (conditional) norms are specificity (you are not allowed to park, unless you are medical personnel) and contrary-to-duty (don't be late, but if you are, not more than 10 minutes). Good explanations should make transparent the types of conflict involved.</p><p>Challenge 2: Cognitive Adequacy A good explainer seeks to understand the explainee in order to tailor the given explanation to precisely target the gaps in the explainee's understanding. For this, the explainer may use queries and strategic argumentation, complemented by a theory of (the explainee's) mind. Moreover, the knowledge bases of the explainer and the explainee may be disjoint and incomplete. Tailored explanations must additionally keep track of commitments and shifts therein throughout a dialogue.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Challenge 3: Richer Handling of Contrastives</head><p>The explainee may offer contrastive claims that are, under thorough analysis, not really incompatible with the offered claim. A good explainer should catch such cases and provide an argument concerning the compatibility of the claims. For this, more proof-theoretic resources have to be developed. In such cases, the explainee should be able to withdraw or replace the contrast.</p><p>Challenge 4: Richer Deontic Vocabulary Often, normative codes are richer than the ones studied here, e.g., they may contain priority orderings over norms and permissive norms.</p><p>These come with challenges, for instance concerning reinstatement (e.g., permissions generally do not reinstate obligations). Dialogues ideally accommodate such complexity.</p><p>Challenge 5: Casuistry In many application contexts of ethical (e.g., in bioethics) and legal reasoning, we find case-based reasoning when reasoning towards obligations, rights, and permissions. Deontic explanations of such conclusions need a different conceptual base than the one provided here, posing their own specific challenges (e.g., balancing reasons).</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A Deontic Argumentation Calculus (DAC). The upper row represents initial sequent rules. Side-condition (𝑎) on DDet stipulates Γ ∩ ℒ 𝑛 ̸ = ∅; and (𝑏) on Cut requires that 𝜙 ∈ ℒ 𝑖𝑜 .</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: DAC-induced 𝒜ℱ of Ex. 1. Arrows denote defeats relative to the constraint 𝒞 = {(¬ℎ) 𝑐 }.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>R:</head><label></label><figDesc>¬𝑡 𝑜 ! H: why(¬𝑡 𝑜 )despite(𝑡 𝑜 )? R: argue(a) R: despite(𝑡 𝑜 )? 𝑏 H: ∅ R: 𝑡 𝑜 ! H: why(𝑡 𝑜 )despite(¬𝑡 𝑜 )? R: argue(𝑑) R: despite(¬𝑡 𝑜 )?</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="8,90.04,87.78,383.40,287.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>derivable and 𝒦-based. • 𝑎 defeats 𝑏, i.e., (𝑎, 𝑏) ∈ Att ⊆ Arg ×Arg iff conc(𝑎) = ¬Δ ∈ ℒ 𝑛 , and Δ ⊆ prem(𝑏). We write Arg(Σ) = {⊢ DAC 𝑎 | prem(𝑎) ⊆ Σ}. Argumentative Semantics and Entailment Let ⟨Arg, Att⟩ be an 𝒜ℱ and let 𝒮 ⊆ Arg: 𝒮 defeats an argument 𝑎 ∈ Arg if there is a 𝑏 ∈ 𝒮 that defeats 𝑎; and 𝒮 defends 𝑎 if 𝒮 defeats every argument that defeats 𝑎. Let Defended(𝒮) be the set of arguments defended by 𝒮.</figDesc><table /><note>We recall the following semantic definitions<ref type="bibr" target="#b12">[13]</ref>: 𝒮 is conflict-free if it does not defeat any of its own elements; 𝒮 is admissible if it is conflict-free and defends all 𝑏 ∈ 𝒮; 𝒮 is preferred if it is maximally admissible; 𝒮 is stable if it is conflict-free and defeats all 𝑏 ∈ Arg ∖ 𝒮; 𝒮 is grounded if 𝒮 = ⋃︀ 𝑖≥0 𝒢 𝑖 where 𝒢 0 = ∅ and 𝒢 𝑖+1 = Defended(𝒢 𝑖 ). Let sem ∈ {admissible,preferred,stable,grounded}, we define two entailment relations:</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_0">In the philosophical literature deontic explanations are relatively unexplored. Our account takes the questionoriented pragmatic approach to contrastive explanation (cf.<ref type="bibr" target="#b13">[14]</ref>), which naturally extends to dialogue models. It accords with<ref type="bibr" target="#b14">[15]</ref> who calls upon defeasible moral principles (here interpreted as norms) to substantiate explanations and with<ref type="bibr" target="#b15">[16]</ref> who takes defeasible norms to serve as justifications, namely, norms ground as to why the called-upon facts are explanatory. See also<ref type="bibr" target="#b16">[17]</ref> for the role of justification in the context of normative explanations.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgements. This work was partially funded by the "Logical Methods of Deontic Explanations" (LoDeX) project, Deutsche Forschungsgemeinschaft, Project number 511915728.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>possible objections the explainee may have against concluding 𝜙 𝑜 . c2 makes explicit the contrast between the argument for 𝜙 𝑜 and an argument for 𝜓 𝑜 , and c3 provides illative and dialectical explanations for why 𝜓 𝑜 can be successfully objected to (our terminology mirrors Johnson's well-known two-tier model of argument <ref type="bibr" target="#b17">[18]</ref>). Lastly, c4 ensures that the two sub-explanations in c1 and c3 form a 𝒞-compatible view. The intuitive idea of contrastive explanatory dialogues, following c1-c4, is provided in Figure <ref type="figure">3</ref>. Below, we make this formally precise.</p><p>Although explanatory dialogues are collaborative, we assume a burden of proof for the explainer with respect to c1 and c3, and for the explainee with respect to c2 and c4. For ease of reference, we call the explainee 'human' (H) and the explainer 'robot' (R). </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Explanatory Dialogues</head><p>P1 stipulates that (1) R starts the dialogue with a command to which, (2) H responds with a contrastive why-question. Then, (3) R must provide reasons for the command and, after that, (4) shifts the burden of proof to H requesting support for the contrastive claim. P2 stipulates rules that hold for both R and H: (i) after the start of the dialogue any player may continue making moves that target previous moves by stating arguments, where (ii) arguments moved against other arguments express undermining defeats, (iii) players may not move an argument twice against the same move, and (iv) they may not attack their own claims.</p></div>
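The shared P2 legality conditions (target a previous move; no repeated argument against the same move; no attacks on one's own claims) can be sketched as a move-checking routine. The encoding below is our own illustration: the `Move` record and its fields are hypothetical, and rule (ii), that moved arguments express undermining defeats, is abstracted away.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Move:
    idx: int        # position of the move in the dialogue sequence
    player: str     # "R" (explainer) or "H" (explainee)
    target: int     # index of the move this one replies to
    argument: str   # label of the argument being moved

def legal_p2(history, move):
    """Check the shared P2 rules for a candidate move against the history so far."""
    target = next((m for m in history if m.idx == move.target), None)
    if target is None:
        return False  # (i) a move must target a previous move
    if any(m.player == move.player and m.target == move.target
           and m.argument == move.argument for m in history):
        return False  # (iii) no moving the same argument twice against the same move
    if target.player == move.player:
        return False  # (iv) players may not attack their own claims
    return True

# A two-move opening: R's command, then H's contrastive why-question.
history = [Move(1, "R", 0, "command"), Move(2, "H", 1, "why-question")]
print(legal_p2(history, Move(3, "R", 2, "reasons")))  # True
```

A full protocol checker would add the player-specific rules P1 and P3-P5 on top of this shared core.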
<div xmlns="http://www.tei-c.org/ns/1.0"><head>P3 Explainer Rules and P4 Explainee Rules for each 𝑚 𝑖 , 𝑚 𝑗 , 𝑚 𝑘 ∈ ℰ</head><p>P3 states that R must provide exactly two moves against the contrastive why-question (one of which provides reasons, the other questioning the contrastive claim). P4 stipulates that the explainee H may (i) make at most two moves against R's questioning of the contrastive link, (ii) one of which is an argument providing reasons for the counter-claim, and one which shows the 𝒞-incompatibility of the arguments for the claim and the counter-claim 𝜓 𝑜 . Then, (iii) H may also move against the explainer's argument 𝑑 opposing the reasons for the counter-claim. In case 𝑑 is incompatible with the other arguments offered by R, R engages in incoherent reasoning. H may thus oppose by demonstrating the 𝒞-incompatibility of 𝑑 and other R arguments.</p><p>A CED has a tree structure, since each move has exactly one predecessor (except for the root). A branch of ℰ containing 𝑚 𝑖 is the maximal linear sequence branch(𝑚 𝑖 ) = ⟨𝑚 𝑗 1 , . . . , 𝑚 𝑖=𝑗 𝑘 , . . . , 𝑚 𝑗𝑛 ⟩ such that for each 𝑚 𝑗 𝑙 and 𝑚 𝑗 𝑙+1 , ta(𝑚 𝑗 𝑙+1 ) = 𝑗 𝑙 . We say 𝑚 𝑗𝑛 is a leaf. Each CED consists of four subdialogues (see Fig. <ref type="figure">3</ref>), which constitute four sub-explanations: a subdialogue ℰ claim (cf. c1) that engages with the argument 𝑎 given in favour of the claim 𝜙 𝑜 (generated from 𝑚 3 down); a subdialogue ℰ counter (cf. c3) that engages with the argument 𝑏 given in favour of the counter-claim 𝜓 𝑜 (generated from the move attacking the argument providing reasons for 𝜓 𝑜 ); and the subdialogues ℰ contrast (cf. c2) and ℰ comp (cf. c4), each containing at most one node with an argument that shows, respectively, the 𝒞-incompatibility of 𝑎 and 𝑏, and the joint 𝒞-incompatibility of R's explanations ℰ claim and ℰ counter . 
These four subdialogues determine when a given dialogue is (un/semi)successful.</p><p>Before defining success, we add P5 to the protocol to accommodate reasoning with preferred and grounded acceptance of arguments. For sem ∈ {preferred, grounded}, (i) R may move at most one counter-argument to each H argument. For preferred dialogues, (ii) H is not allowed to move the same argument twice on a branch in ℰ claim or ℰ counter . For grounded dialogues, (iii) R is not allowed to move the same argument twice on a branch. We note that (i)-(iii) follow the protocols for admissible (and, so, preferred) and grounded argumentation games <ref type="bibr" target="#b9">[10]</ref>.</p><p>P5 Preferred and Grounded Rules for each 𝑚 𝑖 , 𝑚 𝑗 , 𝑚 𝑘 ∈ ℰ, and sem ∈ {preferred, grounded}: i) if ta(𝑚 𝑖 ) = ta(𝑚 𝑘 ) = 𝑚 𝑗 with 𝑗 &gt; 3 and pl(𝑚 𝑖 ) = R, then 𝑚 𝑖 = 𝑚 𝑘 ; ii) if sem = preferred, pl(𝑚 𝑖 ) = H, and ta(𝑚 𝑖 ) = 𝑚 𝑗 , then there is no 𝑚 𝑘 ∈ branch(𝑚 𝑗 ) for which pl(𝑚 𝑘 ) = H, lo(𝑚 𝑖 ) = lo(𝑚 𝑘 ) and 𝑖 ̸ = 𝑘; iii) if sem = grounded, pl(𝑚 𝑖 ) = R, and ta(𝑚 𝑖 ) = 𝑚 𝑗 , then there is no 𝑚 𝑘 ∈ branch(𝑚 𝑗 ) for which pl(𝑚 𝑘 ) = R, lo(𝑚 𝑖 ) = lo(𝑚 𝑘 ) and 𝑖 ̸ = 𝑘.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Successful dialogues</head><p>Let 𝒜ℱ(𝒦) be DAC-induced. A CED ℰ satisfying P1-P5 is:</p><p>• successful if ℰ contrast ̸ = ∅, ℰ comp = ∅, and ℰ claim and ℰ counter both contain R-leaves only;</p><p>• semi-successful if ℰ contrast = ∅ = ℰ comp , and ℰ claim contains R-leaves only;</p><p>• unsuccessful if neither of the above holds.</p><p>Then, ℰ is sem-successful when it is saturated (i.e., all movable arguments from 𝒜ℱ(𝒦) are moved in ℰ; cf. <ref type="bibr" target="#b8">[9]</ref>) and ℰ is successful (similarly for semi- and unsuccessful).</p><p>In brief, a successful CED features (c1) an illative explanation that is supplemented by a dialectical explanation (ℰ claim contains only R-leaves), where (c2) the explainee is able to demonstrate the incompatibility of the contrastive claim (ℰ contrast ̸ = ∅), which (c3) the explainer successfully counters (ℰ counter contains only R-leaves). Furthermore, (c4) the position taken by R in ℰ claim and ℰ counter must be 𝒞-compatible. A semi-successful dialogue features (c1), but H is not able to demonstrate the adequacy of the contrastive link (ℰ contrast = ∅). A dialogue can be unsuccessful for various reasons, e.g., R cannot provide an illative or dialectical explanation, or R cannot argue against H's counter-claim.</p><p>Under saturation, it is easily checked that the sub-dialogues in ℰ claim and ℰ counter are, for sem ∈ {preferred, grounded}, instances of credulous preferred and grounded argumentation games <ref type="bibr" target="#b9">[10]</ref> (where for the latter credulous equals skeptical entailment). Hence, we obtain dialogue models that construct explanations for credulous I/O entailment (i.e., when sem = preferred) and for skeptical I/O entailment under shared reasons (i.e., when sem = grounded). Proposition 6. Let 𝒜ℱ(𝒦) = ⟨Arg, Att⟩ be DAC-induced, sem ∈ {preferred, grounded}, and ℰ * = ⟨𝑚 𝑖 , . . . 
, 𝑚 𝑛 ⟩ be 𝒜ℱ(𝒦)-based with lo(𝑚 𝑖 ) = argue(𝑎) and * ∈ {claim, counter}:</p><p>• if ℰ * is saturated and contains only R leaves then 𝑎 ∈ 𝒮 for some sem-extension;</p><p>• if 𝑎 ∈ 𝒮 for some sem-extension 𝒮, there is a saturated extension of ℰ * with only R leaves.</p><p>Proof. Straightforward modification of the proofs of Theorems 6.2 and 6.5 in <ref type="bibr" target="#b9">[10]</ref>.</p></div>			</div>
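The three-way success classification above reduces to a check on the four subdialogues. As an illustrative sketch (the boolean parameters are our abstraction; they stand in for inspecting the actual subdialogue trees for emptiness and R-leaves):

```python
def classify_ced(contrast_nonempty, comp_nonempty,
                 claim_r_leaves_only, counter_r_leaves_only):
    """Classify a CED from properties of its four subdialogues:
    E_contrast non-empty, E_comp non-empty, and whether E_claim / E_counter
    contain R-leaves only."""
    if (contrast_nonempty and not comp_nonempty
            and claim_r_leaves_only and counter_r_leaves_only):
        return "successful"
    if not contrast_nonempty and not comp_nonempty and claim_r_leaves_only:
        return "semi-successful"
    return "unsuccessful"

print(classify_ced(True, False, True, True))    # successful
print(classify_ced(False, False, True, False))  # semi-successful
```

Note the order of the checks mirrors the definition: semi-success only requires R-leaves in ℰ claim, and any remaining case is unsuccessful.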
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Explanation in artificial intelligence: Insights from the social sciences</title>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">267</biblScope>
			<biblScope unit="page" from="1" to="38" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Logic-based approaches to formal argumentation</title>
		<author>
			<persName><forename type="first">O</forename><surname>Arieli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Borg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Heyninck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Straßer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Formal Argumentation</title>
				<editor>
			<persName><forename type="first">D</forename><surname>Gabbay</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Giacomin</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><forename type="middle">R</forename><surname>Simari</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Thimm</surname></persName>
		</editor>
		<imprint>
			<publisher>College Publications</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1793" to="1898" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Constraints for Input/Output logics</title>
		<author>
			<persName><forename type="first">D</forename><surname>Makinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Van Der Torre</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Philosophical Logic</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="155" to="185" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Reasons as defaults</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Horty</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>Oxford University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Reasoning with and about norms in logical argumentation</title>
		<author>
			<persName><forename type="first">K</forename><surname>Van Berkel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Straßer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of COMMA 2022</title>
				<meeting>COMMA 2022</meeting>
		<imprint>
			<publisher>IOS press</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">353</biblScope>
			<biblScope unit="page" from="332" to="343" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Towards an argumentative unification of default reasoning</title>
		<author>
			<persName><forename type="first">K</forename><surname>Van Berkel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Straßer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of COMMA 2024</title>
				<meeting>COMMA 2024</meeting>
		<imprint>
			<publisher>IOS press</publisher>
			<date type="published" when="2024">2024. TBA</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Argumentative XAI: A survey</title>
		<author>
			<persName><forename type="first">K</forename><surname>Čyras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rago</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Albini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Baroni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Toni</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Modelling dialogues using argumentation</title>
		<author>
			<persName><forename type="first">L</forename><surname>Amgoud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Maudet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Parsons</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings Fourth International Conference on MultiAgent Systems</title>
				<meeting>Fourth International Conference on MultiAgent Systems</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="31" to="38" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Coherence and flexibility in dialogue games for argumentation</title>
		<author>
			<persName><forename type="first">H</forename><surname>Prakken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Logic and Computation</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="1009" to="1040" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Proof theories and algorithms for abstract argumentation frameworks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Modgil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Caminada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Argumentation in artificial intelligence</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="105" to="129" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Gabbay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Horty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Parent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Van Der Meyden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Van Der Torre</surname></persName>
		</author>
		<title level="m">Handbook of Deontic Logic and Normative Systems</title>
				<meeting><address><addrLine>United Kingdom</addrLine></address></meeting>
		<imprint>
			<publisher>College Publications</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Defeasible normative reasoning: A proof-theoretic integration of logical argumentation</title>
		<author>
			<persName><forename type="first">O</forename><surname>Arieli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Van Berkel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Straßer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of AAAI 2024</title>
				<meeting>AAAI 2024</meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="10450" to="10458" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Dung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">77</biblScope>
			<biblScope unit="page" from="321" to="357" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Contrastive explanation</title>
		<author>
			<persName><forename type="first">P</forename><surname>Lipton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Royal Inst. of Philosophy Suppl</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="247" to="266" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Scientific explanation and moral explanation</title>
		<author>
			<persName><forename type="first">U</forename><forename type="middle">D</forename><surname>Leibowitz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Noûs</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="472" to="503" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Explanations, predictions, and laws</title>
		<author>
			<persName><forename type="first">M</forename><surname>Scriven</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Minnesota Studies in the Philosophy of Science</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="170" to="230" />
			<date type="published" when="1962">1962</date>
			<publisher>University of Minnesota Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Normative explanation and justification</title>
		<author>
			<persName><forename type="first">P</forename><surname>Väyrynen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Noûs</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="3" to="22" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Manifest rationality: A pragmatic theory of argument</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H</forename><surname>Johnson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2000">2000</date>
			<publisher>Routledge</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
