Burden of persuasion in argumentation: A meta-argumentation approach

Giuseppe Pisano¹, Roberta Calegari¹, Andrea Omicini² and Giovanni Sartor¹
¹ Alma AI – Alma Mater Research Institute for Human-Centered Artificial Intelligence, Alma Mater Studiorum—Università di Bologna, Italy
² Dipartimento di Informatica – Scienza e Ingegneria (DISI), Alma Mater Studiorum—Università di Bologna, Italy

Abstract
This paper examines the view of the burden of persuasion as a meta-argument and elaborates the meta-argumentative aspects of a burden-of-persuasion semantics in argumentation. An argumentation framework composed of a meta level (dealing with the burden) and an object level (dealing with standard arguments) is proposed and discussed, and its equivalence with the burden-of-persuasion model in argumentation is proved. Finally, a computationally feasible implementation of the meta-argumentation approach is presented.

Keywords
burdens of persuasion, argumentation, meta-argumentation, reasoning over burdens

1. Introduction
Given that argumentation also involves putting forward arguments about arguments, in this paper we start from the view that arguments and dialogues are inherently meta-logical, as already acknowledged by many research works [1]. For instance, a statement that serves as a justification of an argument is a statement about an argument: the argument that the justification serves should itself be referred to in the justification. Accordingly, any proper formalisation of arguments and related abstractions should embrace this aspect.

A meta-level argumentation framework is instantiated by arguments that make statements about arguments, their interactions, and their evaluation at an object-level argumentation framework. Here we focus on the very concept of burden of persuasion and on the design of the meta model for dealing with burdens. Generally speaking, we can say that burdens of persuasion distribute dialectical responsibilities between the parties in a dialogue. In other words, when a party has a burden of persuasion of type 𝑏 relative to a claim 𝜑 and does not provide the kind of arguments or evidence required by 𝑏, then the party will lose on 𝜑. Losing on the burdened claim means that, for the purpose of the dialectic interaction at stake, it will be assumed that 𝜑 has not been established, not even as a relevant possibility.

Burdens of persuasion complement the analysis of dialectical frameworks provided by argumentation systems. In particular, they are important in adversarial contexts: they facilitate the process of reaching a single outcome in contexts of doubt and lack of information. This is obtained by ruling out (considering them as unacceptable) those arguments that fail to meet any applicable burden.
In this work we discuss the model of the burden of persuasion in structured argumentation [2, 3] under a meta-argumentative and meta-logic approach, which leads to (i) a clear separation of concerns in the model, (ii) a simpler and more efficient implementation, and (iii) a natural model extension for dealing also with reasoning over burden-of-persuasion concepts. Moving [2, 3] from a semantics standpoint to a meta-level approach enables flexibility in terms of opening the model to further extensions – which can also be captured at the meta level, i.e., in a fully interoperable way – and enables the natural exploitation of all the argumentative mechanisms [4], in a manner similar to the way in which argumentation systems can be expanded to include argumentation about priorities. Our approach relies on the works in [5, 6], introducing only the required abstractions at the meta level.

The proposed meta-argumentation framework for the burden of persuasion includes three ingredients: (i) object-level argumentation – to create arguments from defeasible and strict rules –, (ii) meta-level argumentation – to create arguments dealing with abstractions related to the burden concept using argument schemes (or meta-level rules) –, and (iii) bimodal graphs to define the interaction between the object level and the meta level—following the account in [5].

Accordingly, Section 2 introduces the meta-argumentation framework, defining the object-level language and concepts, the meta-level language and concepts, and bimodal graphs as the model for dealing with their connection. Section 3 formally defines the meta-argumentation framework for the burden of persuasion, introducing the related argument schemes, and discusses its equivalence with the model presented in [3]. Finally, Section 4 introduces a technological reification in Arg2P. Conclusions are drawn in Section 5.

2. Meta-argumentation framework
In this section we introduce the meta-argumentation framework. For the sake of simplicity, we model it by exploiting bimodal graphs, which are commonly used both to define meta-level concepts and to understand the interactions of object-level and meta-level arguments [6, 5]. Accordingly, Subsection 2.1 presents the object-level argumentation language exploited by our model, leveraging an ASPIC+-like argumentation framework [7]. Then, Subsection 2.2 introduces the main definitions of bimodal argumentation graphs. Finally, in Subsection 2.3, the meta-level argumentation language, based on the use of argument schemes [8], is introduced.

2.1. Structured argumentation for object-level argumentation
Let a literal be an atomic proposition or its negation.

Notation 1. For any literal 𝜑, its complement is denoted by 𝜑̄. That is, if 𝜑 is a proposition 𝑝, then 𝜑̄ = ¬𝑝, while if 𝜑 is ¬𝑝, then 𝜑̄ is 𝑝.

Let us also identify burdens of persuasion, i.e., those literals the proof of which requires a convincing argument. We assume that such literals are consistent (it cannot be the case that there is a burden of persuasion both on 𝜑 and 𝜑̄).

Definition 2.1 (Burdens of persuasion). Burdens of persuasion are represented by predicates of the form bp(𝜑), stating that the burden is allocated on the literal 𝜑.

Literals and bp predicates are brought into relation through defeasible rules.

Definition 2.2 (Defeasible rule). A defeasible rule 𝑟 has the form:
𝜌 : 𝜑1, . . . , 𝜑𝑛, ∼ 𝜑′1, . . . , ∼ 𝜑′𝑚 ⇒ 𝜓
with 0 ≤ 𝑛, 𝑚, and where
• 𝜌 is the unique identifier for 𝑟, denoted by N(𝑟);
• each 𝜑1, . . . , 𝜑𝑛, 𝜑′1, . . . , 𝜑′𝑚, 𝜓 is a literal or a bp predicate;
• 𝜑1, . . . , 𝜑𝑛, ∼ 𝜑′1, . . . , ∼ 𝜑′𝑚 are denoted by Antecedent(𝑟) and 𝜓 by Consequent(𝑟);
• ∼ 𝜑 denotes the weak negation (negation by failure) of 𝜑—i.e., 𝜑 is an exception that would block the application of the rule whose antecedent includes ∼ 𝜑.

The unique identifier of a rule can be used as a literal to specify that the named rule is applicable, and its negation correspondingly to specify that the rule is inapplicable [9]. A superiority relation ≻ is defined over rules: 𝑠 ≻ 𝑟 states that rule 𝑠 prevails over rule 𝑟.

Definition 2.3 (Superiority relation). A superiority relation ≻ over a set of rules Rules is an antireflexive and antisymmetric binary relation over Rules.

A defeasible theory consists of a set of rules and a superiority relation over the rules.

Definition 2.4 (Defeasible theory). A defeasible theory is a tuple ⟨Rules, ≻⟩ where Rules is a set of rules, and ≻ is a superiority relation over Rules.

Given a defeasible theory, by chaining rules from the theory, we can construct arguments [9, 10, 11].

Definition 2.5 (Argument). An argument 𝐴 constructed from a defeasible theory ⟨Rules, ≻⟩ is a finite construct of the form:
𝐴 : 𝐴1, . . . , 𝐴𝑛 ⇒𝑟 𝜑
with 0 ≤ 𝑛, where
• 𝐴 is the argument's unique identifier;
• 𝐴1, . . . , 𝐴𝑛 are arguments constructed from the defeasible theory ⟨Rules, ≻⟩;
• 𝜑 is the conclusion of the argument, denoted by Conc(𝐴);
• 𝑟 : Conc(𝐴1), . . . , Conc(𝐴𝑛) ⇒ 𝜑 is the top rule of 𝐴, denoted by TopRule(𝐴).

Notation 2. Given an argument 𝐴 : 𝐴1, . . . , 𝐴𝑛 ⇒𝑟 𝜑 as in Definition 2.5, Sub(𝐴) denotes the set of subarguments of 𝐴, i.e., Sub(𝐴) = Sub(𝐴1) ∪ . . . ∪ Sub(𝐴𝑛) ∪ {𝐴}. DirectSub(𝐴) denotes the direct subarguments of 𝐴, i.e., DirectSub(𝐴) = {𝐴1, . . . , 𝐴𝑛}.

Preferences over arguments are defined via a last-link ordering: an argument 𝐴 is preferred over another argument 𝐵 if the top rule of 𝐴 is stronger than the top rule of 𝐵.

Definition 2.6 (Preference relation). A preference relation ≻ is a binary relation over a set of arguments 𝒜: an argument 𝐴 is preferred to argument 𝐵, denoted by 𝐴 ≻ 𝐵, iff TopRule(𝐴) ≻ TopRule(𝐵).

Arguments are put in relation according to the attack relation.

Definition 2.7 (Attack). An argument 𝐴 attacks argument 𝐵 iff 𝐴 undercuts or rebuts 𝐵, where
• 𝐴 undercuts 𝐵 (on 𝐵′) iff Conc(𝐴) = ¬N(𝜌) for some 𝐵′ ∈ Sub(𝐵), where 𝜌 is TopRule(𝐵′);
• 𝐴 rebuts 𝐵 (on 𝐵′) iff
  – Conc(𝐴) = 𝜑̄ for some 𝐵′ ∈ Sub(𝐵) of the form 𝐵′′1, . . . , 𝐵′′𝑀 ⇒ 𝜑 and 𝐵′ ⊁ 𝐴, or
  – Conc(𝐴) = 𝜑 for some 𝐵′ ∈ Sub(𝐵) such that ∼ 𝜑 ∈ Antecedent(TopRule(𝐵′)).

In short, arguments can be attacked on the conclusion of a defeasible inference (rebutting attack), or on a defeasible inference step itself (undercutting attack).

Definition 2.8 (Argumentation graph). An argumentation graph is a tuple ⟨𝒜, ⇝⟩, where 𝒜 is the set of all arguments, and ⇝ is the attack relation over 𝒜.

Notation 3. Given an argumentation graph 𝐺 = ⟨𝒜, ⇝⟩, we write 𝒜𝐺 and ⇝𝐺 to denote the graph's arguments and attacks respectively.

Now, let us introduce the notion of {IN, OUT, UND}-labelling of an argumentation graph, where each argument in the graph is labelled IN, OUT, or UND, depending on whether it is accepted, rejected, or undecided, respectively.

Definition 2.9 (Labelling). Let 𝐺 be an argumentation graph. An {IN, OUT, UND}-labelling 𝐿 of 𝐺 is a total function 𝒜𝐺 → {IN, OUT, UND}. The set of all {IN, OUT, UND}-labellings of 𝐺 will be denoted as ℒ({IN, OUT, UND}, 𝐺).

A labelling-based semantics prescribes a set of labellings for any argumentation graph according to some criterion embedded in its definition.

Definition 2.10 (Labelling-based semantics). Let 𝐺 be an argumentation graph. A labelling-based semantics 𝑆 associates with 𝐺 a subset of ℒ({IN, OUT, UND}, 𝐺), denoted as 𝐿𝑆(𝐺).
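For readers who prefer an operational view, the following is a minimal sketch (in Python, outside the formal model) of how a grounded {IN, OUT, UND} labelling can be computed for an argumentation graph as in Definitions 2.8–2.10. The encoding of arguments as strings and attacks as pairs, as well as the function name, are illustrative assumptions and do not reflect the Arg2P implementation.

```python
# Minimal sketch: grounded {IN, OUT, UND} labelling of an argumentation
# graph given as a set of argument names and a set of (attacker, target)
# pairs. Illustrative only; names and encoding are assumptions.

def grounded_labelling(arguments, attacks):
    attackers = {a: set() for a in arguments}
    for (src, dst) in attacks:
        attackers[dst].add(src)

    labels = {a: "UND" for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if labels[a] != "UND":
                continue
            # IN: every attacker is already OUT
            if all(labels[b] == "OUT" for b in attackers[a]):
                labels[a] = "IN"
                changed = True
            # OUT: at least one attacker is already IN
            elif any(labels[b] == "IN" for b in attackers[a]):
                labels[a] = "OUT"
                changed = True
    return labels  # arguments never labelled above stay UND


if __name__ == "__main__":
    # A and B rebut each other, C attacks B: the grounded labelling gives
    # C = IN, B = OUT, A = IN.
    args = {"A", "B", "C"}
    atts = {("A", "B"), ("B", "A"), ("C", "B")}
    print(grounded_labelling(args, atts))
```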
2.2. Object and meta level connection: bimodal graphs
In this section we recall the main definitions of bimodal graphs as the model of interaction between the object and the meta level. Bimodal graphs allow capturing scenarios in which arguments are categorised in multiple levels—only two in our case, the object and the meta level. Accordingly, a bimodal graph is composed of two components: an argumentation graph for the meta level and an argumentation graph for the object level, along with a relation of support that originates from the meta level and targets attacks and arguments at the object level. Every object-level argument and every object-level attack is supported by at least one meta-level argument. Meta-level arguments can only attack meta-level arguments, and object-level arguments can only attack object-level arguments.

Definition 2.11 (Bimodal argumentation graph). A bimodal argumentation graph is a tuple ⟨𝒜𝑂, 𝒜𝑀, ℛ𝑂, ℛ𝑀, 𝒮𝐴, 𝒮𝑅⟩ where
1. 𝒜𝑂 is the set of object-level arguments;
2. 𝒜𝑀 is the set of meta-level arguments;
3. ℛ𝑂 ⊆ 𝒜𝑂 × 𝒜𝑂 represents the set of object-level attacks;
4. ℛ𝑀 ⊆ 𝒜𝑀 × 𝒜𝑀 represents the set of meta-level attacks;
5. 𝒮𝐴 ⊆ 𝒜𝑀 × 𝒜𝑂 represents the set of supports from meta-level arguments into object-level arguments;
6. 𝒮𝑅 ⊆ 𝒜𝑀 × ℛ𝑂 represents the set of supports from meta-level arguments into object-level attacks;
7. 𝒜𝑂 ∩ 𝒜𝑀 = ∅;
8. ∀𝐴 ∈ 𝒜𝑂 ∃ 𝐵 ∈ 𝒜𝑀 : (𝐵, 𝐴) ∈ 𝒮𝐴;
9. ∀𝑅 ∈ ℛ𝑂 ∃ 𝐵 ∈ 𝒜𝑀 : (𝐵, 𝑅) ∈ 𝒮𝑅.

The object-level argument graph is represented by the couple (𝒜𝑂, ℛ𝑂), while the meta-level argument graph is represented by the couple (𝒜𝑀, ℛ𝑀). The two distinct components are connected by the support relations represented by 𝒮𝐴 and 𝒮𝑅. These supports are the only structural interaction between the meta and the object level. Condition (8) in the above definition ensures that every object-level argument is supported by at least one meta-level argument, while condition (9) ensures that every object-level attack is supported by at least one meta-level argument.

Perspectives of the object-level graph can be defined as follows.

Definition 2.12 (Perspective). Let 𝐺 = ⟨𝒜𝑂, 𝒜𝑀, ℛ𝑂, ℛ𝑀, 𝒮𝐴, 𝒮𝑅⟩ be a bimodal argumentation graph and let 𝐿𝑆 be a labelling-based semantics. A tuple ⟨𝒜′𝑂, ℛ′𝑂⟩ is an 𝐿𝑆-perspective of 𝐺 if ∃ 𝑙 ∈ 𝐿𝑆(⟨𝒜𝑀, ℛ𝑀⟩) such that
1. 𝒜′𝑂 = { 𝐴 | ∃𝐵 ∈ 𝒜𝑀 s.t. 𝑙(𝐵) = IN, (𝐵, 𝐴) ∈ 𝒮𝐴 }
2. ℛ′𝑂 = { 𝑅 | ∃𝐵 ∈ 𝒜𝑀 s.t. 𝑙(𝐵) = IN, (𝐵, 𝑅) ∈ 𝒮𝑅 }

Consequently, an object-level argument may be present in one perspective and not in another, according to the results yielded by the meta-level argumentation graph.
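As an illustration of Definition 2.12, the sketch below extracts the object-level perspectives induced by a set of meta-level labellings. The data layout (labellings as dictionaries, supports as pairs) and the hypothetical argument names are assumptions made for the example; this is not the paper's implementation.

```python
# Sketch of Definition 2.12: an L_S-perspective keeps only the object-level
# arguments and attacks supported by some meta-level argument labelled IN.
# Data layout is illustrative.

def perspective(meta_labelling, support_args, support_attacks):
    """meta_labelling:   dict meta-arg -> "IN"/"OUT"/"UND"
       support_args:     set of (meta_arg, object_arg) pairs (S_A)
       support_attacks:  set of (meta_arg, (attacker, target)) pairs (S_R)"""
    in_meta = {m for m, lab in meta_labelling.items() if lab == "IN"}
    args = {a for (m, a) in support_args if m in in_meta}
    attacks = {r for (m, r) in support_attacks if m in in_meta}
    return args, attacks


def perspectives(meta_labellings, support_args, support_attacks):
    # One perspective for each labelling prescribed by the chosen semantics.
    return [perspective(l, support_args, support_attacks)
            for l in meta_labellings]


if __name__ == "__main__":
    # Hypothetical data: M1 supports object argument A, M3 supports B,
    # M2 supports the object-level attack (A, B).
    labellings = [{"M1": "IN", "M2": "IN", "M3": "IN"},
                  {"M1": "IN", "M2": "OUT", "M3": "IN"}]
    s_a = {("M1", "A"), ("M3", "B")}
    s_r = {("M2", ("A", "B"))}
    for p in perspectives(labellings, s_a, s_r):
        print(p)   # the second perspective drops the unsupported attack
```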
2.3. Argument schemes for meta-level argumentation
A fundamental aspect to consider when dealing with a multi-level argumentation graph is how the higher-level graphs can be built starting from the object-level ones. For this purpose, in this work – following the example in [6] – we leverage argument schemes [8]. In a few words, argumentation schemes are commonly used patterns of reasoning. They can be formalised in a rule-like form [12], where every argument scheme consists of a set of conditions and a conclusion: if the conditions are met, then the conclusion holds. Each scheme comes with a set of critical questions (CQ), identifying possible exceptions to the admissibility of arguments derived from the scheme.

Definition 2.13 (Meta-predicate). A meta-predicate 𝑃𝑀 is a symbol which represents a property of or a relation between object-level arguments. Let ℳ be the set of all 𝑃𝑀.

Definition 2.14 (Object-relation meta-predicate). An object-relation meta-predicate 𝑂𝑀 is a predicate stating the existence of a relation at the object level—e.g., attacks, preferences, conclusions. Let 𝒪 be the set of all 𝑂𝑀.

Moving from the above definitions, we can define an argument scheme as follows.

Definition 2.15 (Argument scheme). An argument scheme 𝑠 has the form:
𝑠 : 𝑃1, . . . , 𝑃𝑛, ∼ 𝑃′1, . . . , ∼ 𝑃′𝑚 ⇒ 𝑄
with 0 ≤ 𝑛, 𝑚, and where
• each 𝑃1, . . . , 𝑃𝑛, 𝑃′1, . . . , 𝑃′𝑚 ∈ ℳ ∪ 𝒪, while 𝑄 ∈ ℳ;
• ∼ 𝑃 denotes the weak negation (negation by failure) of 𝑃—i.e., 𝑃 is an exception that would block the application of the scheme whose antecedent includes ∼ 𝑃;
• we denote by 𝐶𝑄𝑠 the set of critical questions associated with scheme 𝑠.

Using argument schemes we can build meta-arguments.

Definition 2.16 (Meta-argument). A meta-argument 𝐴 constructed from a set of argument schemes 𝑆 and an object-level argumentation graph 𝐺 is a finite construct of the form:
𝐴 : 𝐴1, . . . , 𝐴𝑛 ⇒𝑠 𝑃
with 0 ≤ 𝑛, where
• 𝐴 is the argument's unique identifier;
• 𝑠 ∈ 𝑆 is the scheme used to build the argument;
• 𝐴1, . . . , 𝐴𝑛 are arguments constructed from 𝑆 and 𝐺;
• 𝑃 is the conclusion of the argument, denoted by Conc(𝐴);
• we denote by 𝐶𝑄(𝐴) the critical questions associated with scheme 𝑠.

The same notation introduced for standard arguments in Notation 2 also applies to meta-arguments. We can now define attacks over meta-arguments.

Definition 2.17 (Meta-attack). An argument 𝐴 attacks argument 𝐵 (on 𝐵′) iff
• Conc(𝐴) = 𝑃̄ for some 𝐵′ ∈ Sub(𝐵) of the form 𝐵′′1, . . . , 𝐵′′𝑀 ⇒ 𝑃, or
• Conc(𝐴) = 𝑃 for some 𝐵′ ∈ Sub(𝐵) such that ∼ 𝑃 ∈ Antecedent(TopRule(𝐵′)).

The same definitions of argumentation graph and labellings introduced for standard argumentation in Definitions 2.8, 2.9, and 2.10 also hold for meta-arguments and for the meta level.
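The following sketch shows how the attack conditions of Definition 2.17 could be checked over explicitly represented meta-arguments. The representation (conclusion strings, a simple "-" marker for complements, and a list of the weakly negated premises of the top scheme) is an illustrative assumption and does not reflect the Arg2P data structures.

```python
# Sketch of Definition 2.17: A attacks B iff A's conclusion is the complement
# of the conclusion of some subargument of B, or A concludes a predicate that
# appears weakly negated in the top scheme of some subargument of B.
# Representation is illustrative.

from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class MetaArg:
    name: str
    conclusion: str                      # e.g. "burdened(A)" or "-burdened(A)"
    subs: Tuple["MetaArg", ...] = ()     # direct subarguments
    weak_premises: Tuple[str, ...] = ()  # P such that ~P is in the top scheme


def complement(p: str) -> str:
    return p[1:] if p.startswith("-") else "-" + p


def all_subs(arg: MetaArg):
    yield arg
    for s in arg.subs:
        yield from all_subs(s)


def attacks(a: MetaArg, b: MetaArg) -> bool:
    for b1 in all_subs(b):
        if a.conclusion == complement(b1.conclusion):
            return True                   # contrary conclusion on a subargument
        if a.conclusion in b1.weak_premises:
            return True                   # attack on a weak-negation premise
    return False


if __name__ == "__main__":
    # Meta-arguments in the style of Example 1 (names are illustrative).
    a_s0 = MetaArg("A_S0", "burdened(A)")
    a_s1 = MetaArg("A_S1", "-burdened(A)",
                   weak_premises=("conclusion(B,bp(a))",))
    a_s2 = MetaArg("A_S2", "bp_met(A)", subs=(a_s0,))
    print(attacks(a_s1, a_s2))  # True: attacks A_S2 on its subargument A_S0
    print(attacks(a_s0, a_s1))  # True: contradicts A_S1's conclusion
```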
3. Burden of persuasion as meta-argumentation
Informally, we can say that when we talk about the notion of the burden of persuasion concerning an argument, we intuitively argue over that argument according to a meta-argumentative approach. Let us consider, for instance, an argument 𝐴: if we allocate the burden over it, we implicitly impose on 𝐴 the duty to prove its admissibility. Thus, moving the analysis up to the meta level of the argumentation process, it is like having two arguments, let them be 𝐹𝐵𝑃 and 𝑆𝐵𝑃, reflecting the burden of persuasion status. According to this perspective, 𝐹𝐵𝑃 states that "the burden is not satisfied if 𝐴 fails to prove its admissibility" – i.e., 𝐴 should be rejected or undecided – and, of course, 𝐹𝐵𝑃 is not compatible with 𝐴 being accepted. Alongside, 𝑆𝐵𝑃 states that "𝐴 is admissible since it satisfies its burden". 𝐹𝐵𝑃 and 𝑆𝐵𝑃 have contrasting conclusions and thus they attack each other.

Analysing the burden from this perspective makes it immediately clear that the notions the meta model should deal with are:
N.1 the notion of the burden itself, expressing the possibility for an argument to be allocated with a burden of persuasion (i.e., a burdened argument);
N.2 the possibility that this burden is satisfied (that is, a burden met) or not satisfied;
N.3 the possibility of making attacks involving burdened arguments ineffective.

The outline of the multi-part evaluation scheme for burdens of persuasion in argumentation is now visible and can be formally designed. In the following, we formally define these concepts by exploiting bimodal argument graphs as the technique for expressing the two main levels of the model – the meta level and the object level – and the relationships between the two. In particular, we are going to define each set of the bimodal argument graph tuple ⟨𝒜𝑂, 𝒜𝑀, ℛ𝑂, ℛ𝑀, 𝒮𝐴, 𝒮𝑅⟩. The sets 𝒜𝑂 and ℛ𝑂, representing respectively the object-level arguments and attacks, are built according to the argumentation framework discussed in Subsection 2.1. Hence, our analysis focuses on the meta-level graph ⟨𝒜𝑀, ℛ𝑀⟩ and on the support sets connecting the two levels (𝒮𝐴 and 𝒮𝑅).

3.1. Meta-level graph
We now proceed to detail all the argumentation schemes used to build arguments in the meta-level graph. Every scheme comes along with its critical questions.

Let us first introduce the basic argumentation scheme enabling the definition and representation of an argument with an allocation of the burden of persuasion (i.e., reifying N.1). We say that an object-level argument 𝐴 has the burden of persuasion on it if there exists an object-level argument 𝐵 such that Conc(𝐵) = bp(Conc(𝐴)). This notion is modelled through the following argument scheme:

conclusion(𝐴, 𝜑), conclusion(𝐵, bp(𝜑)) ⇒ burdened(𝐴)   (S0)
Are arguments 𝐴 and 𝐵 provable?   (CQS0)

where bp(𝜑) is a predicate stating that 𝜑 is a literal with the allocation of the burden, conclusion(𝐴, 𝜑) is a structural meta-predicate stating that Conc(𝐴) = 𝜑 holds, and burdened(𝐴) is a meta-predicate representing the allocation of the burden on 𝐴. Of course, an argument produced using this scheme holds only if both the arguments 𝐴 and 𝐵 on which the inference is based hold—critical question CQS0.

Analogously, we introduce the scheme S1, representing the absence of such an allocation:

conclusion(𝐴, 𝜑), ∼ conclusion(𝐵, bp(𝜑)) ⇒ ¬burdened(𝐴)   (S1)
Is argument 𝐴 provable? Is argument 𝐵 really unprovable?   (CQS1)

Then, as informally introduced at the beginning of this section, we have two schemes reflecting the possibility for a burdened argument to meet or not to meet the burden (N.2):

burdened(𝐴) ⇒ bp_met(𝐴)   (S2)
Is argument 𝐴 admissible?   (CQS2)

burdened(𝐴) ⇒ ¬bp_met(𝐴)   (S3)
Is argument 𝐴 refuted or undecidable?   (CQS3)

where bp_met is the meta-predicate stating that the burden has been met. It is important to notice that these two schemes reach opposite conclusions from the same grounds—i.e., the presence of the burden on argument 𝐴. The discriminating elements are the critical questions they are accompanied by. In the case of S2, the burden is satisfied only if there exists a burden of persuasion on argument 𝐴 and 𝐴 is admissible (CQS2). On the other side, the validity of S3 is linked to the missing admissibility of argument 𝐴. We will see in Section 3.3 how the meta-arguments and the associated questions concur to determine the model results.

Let us now consider attacks between arguments and their relation with the burden of persuasion allocation. When a burdened argument fails to meet the burden, the only thing affecting the argument's acceptability is the burden itself—i.e., attacks from other arguments do not influence the burdened argument's status, which depends only on its inability to satisfy the burden.
The same applies to attacks issued by an argument that fails to meet the burden: the failure implies the rejection of the argument and, as a direct consequence, its inability to effectively attack other arguments. In order to differentiate between effective and ineffective object-level attacks w.r.t. the concept of burden of persuasion (N.3), we define the following scheme:

attack(𝐵, 𝐴), ∼ (¬bp_met(𝐴)), ∼ (¬bp_met(𝐵)) ⇒ effectiveAttack(𝐵, 𝐴)   (S4)
Can we prove that arguments 𝐴 and 𝐵 are not failing to meet their burdens?   (CQS4)

where attack is a structural meta-predicate stating an attack relation at the object level, while effectiveAttack is a meta-predicate expressing that an attack should be taken into consideration according to the burden of persuasion allocation. In other words, if an object-level attack involves burdened arguments, and one of these fails to satisfy the burden, then the attack is considered not effective w.r.t. the allocation of the burden.

The discussed schemes can be used to create a meta-level graph containing all the information concerning the constraints related to the burden of persuasion, thus leading to a clear separation of concerns, as demonstrated in the following example.

Example 1 (Base example). Let us consider two object-level arguments 𝐴 and 𝐵, concluding the literals 𝑎 and bp(𝑎) respectively. Using the schemes in Subsection 3.1 we can build the following meta-level arguments:
• 𝐴𝑆0, representing the allocation of the burden on argument 𝐴;
• 𝐴𝑆1 and 𝐵𝑆1, standing for the absence of a burden on arguments 𝐴 and 𝐵 respectively. The scheme used to build those arguments exploits weak negation in order to cover those scenarios in which an argument concluding a bp literal exists at the object level, but is found not admissible;
• 𝐴𝑆2 and 𝐴𝑆3, sustaining that (i) 𝐴 was capable of meeting the burden on it, and (ii) 𝐴 was not capable of meeting its burden, respectively.

The meta-level graph (Figure 1) points out the relations actually implicit in the notion of burden of persuasion over an argument, where, intuitively, we argue over the consequences of 𝐴's possibly succeeding or failing to meet the burden. At the meta level, all the possible scenarios can be explored by applying different semantics over the meta-level graph. Considering, for instance, Dung's preferred semantics [13], we can obtain two distinct outcomes: either the burden is not satisfied, i.e., argument 𝐴𝑆3 is accepted and, consequently, 𝐴𝑆2 is rejected, or we succeed in proving 𝐴𝑆2, i.e., the burden is met and 𝐴𝑆3 is rejected (𝐴𝑆0 and 𝐴𝑆1 are accepted and rejected accordingly). Although the discussed example is really simple – only the basic schemes for reasoning on the burden are considered at the meta level – it clearly demonstrates the possibility of reasoning over burdens, i.e., of establishing whether or not there is a burden on a literal 𝜑 – argument 𝐵 in the example – and of evaluating the consequences of a burdened argument meeting or not meeting its burden.

Meta-level arguments:
𝐴𝑆0 :⇒ burdened(𝐴)
𝐴𝑆1 :⇒ ¬burdened(𝐴)
𝐴𝑆2 : 𝐴𝑆0 ⇒ bp_met(𝐴)
𝐴𝑆3 : 𝐴𝑆0 ⇒ ¬bp_met(𝐴)
𝐵𝑆1 :⇒ ¬burdened(𝐵)
Object-level arguments:
𝐴 :⇒ 𝑎
𝐵 :⇒ bp(𝑎)
Figure 1: Object and meta level graphs from Example 1
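To make the construction of Example 1 explicit, the sketch below instantiates schemes S0–S4 over the conclusions and attacks of an object-level graph; on the data of Example 1 it produces exactly the meta-arguments listed in Figure 1 (no S4 arguments are produced, since the example has no object-level attacks). The string encoding of conclusions and the tuple representation of meta-arguments are illustrative assumptions; attacks between the generated meta-arguments are not computed here.

```python
# Sketch: generating meta-arguments from schemes S0-S4 given the object-level
# conclusions and attacks. Encoding is illustrative.

def instantiate_schemes(conclusions, attacks):
    """conclusions: dict object-arg -> conclusion literal (e.g. {"A": "a"})
       attacks:     set of (attacker, target) pairs at the object level"""
    meta = []
    burdened = set()
    for a, phi in conclusions.items():
        # S0: some argument B concludes bp(phi)
        for b, psi in conclusions.items():
            if psi == "bp(" + phi + ")":
                meta.append((a + "_S0", "burdened(" + a + ")"))
                burdened.add(a)
        # S1: built in any case, via weak negation of conclusion(B, bp(phi))
        meta.append((a + "_S1", "-burdened(" + a + ")"))
    for a in burdened:
        meta.append((a + "_S2", "bp_met(" + a + ")"))     # burden met
        meta.append((a + "_S3", "-bp_met(" + a + ")"))    # burden not met
    for (b, a) in attacks:
        # S4: the attack is effective unless one side fails its burden
        meta.append((b + a + "_S4", "effectiveAttack(" + b + "," + a + ")"))
    return meta


if __name__ == "__main__":
    # Example 1: A concludes a, B concludes bp(a); no object-level attacks.
    for arg in instantiate_schemes({"A": "a", "B": "bp(a)"}, set()):
        print(arg)
```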
3.2. Object and meta level connection: supporting sets
Let us now define how the meta level and the object level interact. Indeed, it is not enough to reason on the consequences of the burden of persuasion allocation only with respect to the burdened argument: the result of the argument satisfying or not satisfying such a burden constraint should affect the entire object-level graph. According to the standard bimodal graph theory, defining how the object level and the meta level interact is the role of the argument support relation 𝒮𝐴 and of the attack support relation 𝒮𝑅 respectively. According to Definition 2.11 (Subsection 2.2), every node at level 𝑛 is connected to an argument at level 𝑛 + 1 by a support edge in 𝒮𝐴 or 𝒮𝑅, depending on whether the node is an argument or an attack.

Let us define the support set 𝒮𝐴 of meta-arguments supporting object-level arguments as:
𝒮𝐴 = {(𝐴𝑟𝑔1, 𝐴𝑟𝑔2) | 𝐴𝑟𝑔1 ∈ 𝒜𝑀, 𝐴𝑟𝑔2 ∈ 𝒜𝑂, (Conc(𝐴𝑟𝑔1) = bp_met(𝐴𝑟𝑔2) ∨ Conc(𝐴𝑟𝑔1) = ¬burdened(𝐴𝑟𝑔2))}
Intuitively, an argument 𝐴 at the object level is supported by arguments at the meta level claiming that the burden on 𝐴 is satisfied (S2) or that there is no burden allocated on it (S1).

The set 𝒮𝑅 of meta-arguments supporting object-level attacks is defined as:
𝒮𝑅 = {(𝐴𝑟𝑔1, (𝐵, 𝐴)) | 𝐴𝑟𝑔1 ∈ 𝒜𝑀, (𝐵, 𝐴) ∈ ℛ𝑂, Conc(𝐴𝑟𝑔1) = effectiveAttack(𝐵, 𝐴)}
In other words, an object-level attack is supported by arguments at the meta level claiming its effectiveness w.r.t. the burden of persuasion allocation (S4).
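The two set definitions above translate directly into code. The sketch below builds 𝒮𝐴 and 𝒮𝑅 from the conclusions of the meta-arguments, again under an illustrative string encoding (the argument names in the example are taken from Example 2 below).

```python
# Sketch of the support sets: S_A links a meta-argument concluding
# bp_met(A) or -burdened(A) to the object argument A; S_R links a
# meta-argument concluding effectiveAttack(B,A) to the attack (B, A).

def build_supports(meta_conclusions, object_args, object_attacks):
    """meta_conclusions: dict meta-arg -> conclusion string"""
    s_a, s_r = set(), set()
    for m, concl in meta_conclusions.items():
        for a in object_args:
            if concl in ("bp_met(" + a + ")", "-burdened(" + a + ")"):
                s_a.add((m, a))
        for (b, a) in object_attacks:
            if concl == "effectiveAttack(" + b + "," + a + ")":
                s_r.add((m, (b, a)))
    return s_a, s_r


if __name__ == "__main__":
    metas = {"B1_S1": "-burdened(B1)", "B1_S2": "bp_met(B1)",
             "C1B1_S4": "effectiveAttack(C1,B1)"}
    print(build_supports(metas, {"B1", "C1"}, {("C1", "B1")}))
```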
3.3. Equivalence with burden of persuasion semantics
The defined meta-framework can be used to achieve the same results as the original burden of persuasion labelling semantics [3]. Let us first introduce the notion of CQ-consistency for a bimodal argumentation graph 𝐺.

Definition 3.1 (CQ-consistency). Let 𝐺 = ⟨𝒜𝑂, 𝒜𝑀, ℛ𝑂, ℛ𝑀, 𝒮𝐴, 𝒮𝑅⟩ be a bimodal argumentation graph, let 𝐿𝑆 be a labelling-based semantics, and let 𝑃 be the set of corresponding 𝐿𝑆-perspectives. A perspective 𝑝 ∈ 𝑃 is CQ-consistent if every IN argument 𝐴 in the corresponding meta-level labelling satisfies its critical questions (𝐶𝑄(𝐴)).

Using this new definition we can introduce the concept of BP-perspective.

Definition 3.2 (BP-perspective). Let 𝐺 = ⟨𝒜𝑂, 𝒜𝑀, ℛ𝑂, ℛ𝑀, 𝒮𝐴, 𝒮𝑅⟩ be a bimodal argumentation graph, and 𝑃 the set of its 𝐿stable-perspectives [13]. We say that 𝑝 ∈ 𝑃 is a BP-perspective of 𝐺 iff, w.r.t. the results given by the grounded evaluation of 𝑝, 𝑝 is CQ-consistent.

Proposition 3.1. The results yielded by the grounded evaluation of 𝐺's BP-perspectives are congruent with the evaluation of the object-level graph ⟨𝒜𝑂, ℛ𝑂⟩ under the grounded-bp semantics as presented in [3].

The proof can be found in Appendix A.

Example 2 (Antidiscrimination law example). Let us consider a case in which a woman claims to have been discriminated against in her career on the basis of her sex, as she was passed over by male colleagues when promotions became available (ev1), and brings evidence showing that in her company all managerial positions are held by men (ev3), even though the company's personnel includes many equally qualified women, who have worked for a long time in the company, with equal or better performance (ev2). Assume that this practice is deemed to indicate the existence of gender-based discrimination and that the employer fails to provide prevailing evidence that the woman was not discriminated against. It seems that it may be concluded that the woman was indeed discriminated against on the basis of her sex.

Consider, for instance, the following formalisation of the European nondiscrimination law:
𝑒1 :⇒ 𝑒𝑣1
𝑒2 :⇒ 𝑒𝑣2
𝑒3 :⇒ 𝑒𝑣3
𝑒𝑟1 : 𝑒𝑣1 ⇒ indiciaDiscrim
𝑒𝑟2 : 𝑒𝑣2 ⇒ ¬discrim
𝑒𝑟3 : 𝑒𝑣3 ⇒ discrim
𝑟1 : indiciaDiscrim ⇒ bp(¬discrim)

We can then build the following object-level arguments:
𝐴0 :⇒ 𝑒𝑣1
𝐴1 : 𝐴0 ⇒ indiciaDiscrim
𝐴2 : 𝐴1 ⇒ bp(¬discrim)
𝐵0 :⇒ 𝑒𝑣2
𝐵1 : 𝐵0 ⇒ ¬discrim
𝐶0 :⇒ 𝑒𝑣3
𝐶1 : 𝐶0 ⇒ discrim

and the following meta-level arguments:
𝐴0𝑆1 :⇒ ¬burdened(𝐴0)
𝐴1𝑆1 :⇒ ¬burdened(𝐴1)
𝐴2𝑆1 :⇒ ¬burdened(𝐴2)
𝐵0𝑆1 :⇒ ¬burdened(𝐵0)
𝐵1𝑆0 :⇒ burdened(𝐵1)
𝐵1𝑆1 :⇒ ¬burdened(𝐵1)
𝐵1𝑆2 : 𝐵1𝑆0 ⇒ bp_met(𝐵1)
𝐵1𝑆3 : 𝐵1𝑆0 ⇒ ¬bp_met(𝐵1)
𝐶0𝑆1 :⇒ ¬burdened(𝐶0)
𝐶1𝑆1 :⇒ ¬burdened(𝐶1)
𝐶1𝐵1𝑆4 :⇒ effectiveAttack(𝐶1, 𝐵1)
𝐵1𝐶1𝑆4 :⇒ effectiveAttack(𝐵1, 𝐶1)

The resulting graph is depicted in Figure 2. In this case, at the object level, since there are indicia of discrimination (𝐴1), we can infer the allocation of the burden on non-discrimination (𝐴2). Moreover, we can build both arguments for discrimination (𝐶1) and non-discrimination (𝐵1), leading to a situation of undecidability. At the meta level we can apply rule S1 to every argument at the object level (𝐴0𝑆1, 𝐴1𝑆1, 𝐴2𝑆1, 𝐵0𝑆1, 𝐵1𝑆1, 𝐶0𝑆1, 𝐶1𝑆1) – by which we attempt to establish the absence of the burden for each of them – and rule S4 to every attack (𝐶1𝐵1𝑆4, 𝐵1𝐶1𝑆4). By exploiting 𝐵1 and 𝐴2, we can also apply scheme S0, and consequently rules S2 and S3. In a few words, we are building the meta-argumentative structure given by the allocation of the burden of persuasion on argument 𝐵1.

We can now apply the stable labelling to the meta-level graph, thus obtaining three distinct results. For clarity, in the following we ignore the arguments that are admissible under every solution.
1. IN = {𝐵1𝑆1, 𝐶1𝐵1𝑆4, 𝐵1𝐶1𝑆4}, OUT = {𝐵1𝑆0, 𝐵1𝑆2, 𝐵1𝑆3}, UND = {}—i.e., 𝐵1 is not burdened;
2. IN = {𝐵1𝑆0, 𝐵1𝑆2, 𝐶1𝐵1𝑆4, 𝐵1𝐶1𝑆4}, OUT = {𝐵1𝑆1, 𝐵1𝑆3}, UND = {}—i.e., 𝐵1 is burdened and the burden is met;
3. IN = {𝐵1𝑆0, 𝐵1𝑆3}, OUT = {𝐵1𝑆1, 𝐵1𝑆2, 𝐶1𝐵1𝑆4, 𝐵1𝐶1𝑆4}, UND = {}—i.e., 𝐵1 is burdened and the burden is not met.

Then, the meta-level results can be reified into the object-level perspectives, taking into account the CQs we have to impose on the solutions and the results given by the perspective evaluation under the grounded semantics. Let us first consider solutions 1 and 2. They lead to the same perspective on the object-level graph—the graph remains unchanged w.r.t. the original graph. If we consider the critical questions attached to the IN arguments, both these solutions are not admissible. Indeed, according to solution 1 the burden is not allocated on argument 𝐵1, but this is in contrast with argument 𝐴2's conclusion (𝐴2 is IN under the grounded labelling)—i.e., 𝐶𝑄𝑆1 is not satisfied. Analogously, solution 2 concludes that 𝐵1 is allocated with the burden and succeeds in meeting it, but, at the same time, argument 𝐵1 is found undecidable at the object level (𝐵1 is UND under the grounded semantics)—i.e., 𝐶𝑄𝑆2 is not satisfied. The only acceptable result is the one given by solution 3. In this case, argument 𝐵1 is not capable of meeting the burden – 𝐵1𝑆3 is IN – and, consequently, it is rejected and deleted from the perspective. Indeed, 𝐶𝑄𝑆3 is satisfied. As a consequence, argument 𝐶1 is labelled IN. In other words, the argument for non-discrimination fails and the argument for discrimination is accepted.

Figure 2: Argumentation graph (object and meta level) for Example 2
4. Implementation in Arg2P
Despite the benefits of the meta-approach discussed in Section 3 – such as a clear separation of concerns, the encapsulation of argumentation abstractions, and naturalness in terms of human thinking – it appears very inefficient from a computational perspective. Indeed, the meta-level evaluation requires a stable semantics computation, with non-polynomial complexity [14]. For this reason, from a technological perspective, we reify the model presented in Section 3 into a more efficient resolution procedure.

Generally speaking, the stable semantics is exploited to explore the search space at the meta level. Then, in order to identify the final solution, the grounded assessment of the object level is taken into consideration—the acceptable scenario is selected according to the critical questions. The idea behind the technological refinement is exactly to leverage the information carried by those arguments to guide the search—i.e., to exploit the grounded assessment of the object level as an a priori constraint.

Following this idea, the computation algorithm becomes really simple. The two argumentation levels (object and meta) are collapsed into a single graph, following the idea in [15]. Then, the graph is modified dynamically, leveraging the information on the burdened arguments. In a sense, we have a multi-stage evaluation that at every stage leads to the modification of the graph itself. First of all, burdened arguments are evaluated. Then, the original graph is modified to include the constraint (the burden). The entire process is thus based on grounded semantics—polynomial complexity [14]. The algorithm requires 𝑚 + 1 evaluation stages to end and, consequently, the final complexity is polynomial (where 𝑚 is the number of burdened arguments).

Formally, given a constraint bp(𝑥), for every argument 𝐴 having 𝑥 as its conclusion a new argument 𝐵 can be introduced in the graph. This argument represents the possibility of 𝐴 failing to meet the burden—expressed by S3 in the meta-model. The interaction between 𝐴 and 𝐵 is decided according to 𝐴's ability to satisfy the burden under grounded semantics: (i) iff 𝐴 is IN, then the attack from 𝐴 to 𝐵 is introduced; (ii) vice versa, iff 𝐴 is OUT or UND, the attack from 𝐵 to 𝐴 is introduced. In other words, we are forcing the argument stating the burden (𝐵) to defeat the burdened argument when the latter does not meet the burden (condition ii) and vice versa (condition i). Basically, through the first evaluation of the graph, the knowledge required to settle the contrast between arguments generated from schemes S3 and S2 is obtained—i.e., S2 becomes superfluous, making it possible to avoid a stable semantics evaluation.

The algorithm has been implemented and tested in the Arg2P framework (http://arg2p.apice.unibo.it/) [16, 17]; Figure 3 shows the tool's evaluation of the example discussed in Example 2.

Figure 3: Arg2P evaluation of Example 2
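A possible reading of the procedure just described is sketched below: burdened arguments are considered one at a time, and after each grounded evaluation the graph is extended with the auxiliary argument (scheme S3) and a single attack whose direction depends on the current status of the burdened argument. This is an illustrative reconstruction of the algorithm described in this section, not the actual Arg2P code; the graph encoding and all names are assumptions.

```python
# Sketch of the staged evaluation described above. Each stage: (1) compute the
# grounded labelling, (2) for the next burdened argument A add an auxiliary
# argument ("A fails to meet its burden", scheme S3) and one attack whose
# direction depends on A's current status. Illustrative only.

def grounded_labelling(arguments, attacks):
    attackers = {a: set() for a in arguments}
    for (src, dst) in attacks:
        attackers[dst].add(src)
    labels = {a: "UND" for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if labels[a] != "UND":
                continue
            if all(labels[b] == "OUT" for b in attackers[a]):
                labels[a], changed = "IN", True
            elif any(labels[b] == "IN" for b in attackers[a]):
                labels[a], changed = "OUT", True
    return labels


def bp_evaluation(arguments, attacks, burdened):
    """burdened: list of arguments whose conclusion is under a bp constraint"""
    args, atts = set(arguments), set(attacks)
    for a in burdened:                      # one extra stage per burdened arg
        labels = grounded_labelling(args, atts)
        aux = "bp_" + a                     # auxiliary argument from scheme S3
        args.add(aux)
        if labels[a] == "IN":
            atts.add((a, aux))              # burden met: A defeats the aux arg
        else:                               # OUT or UND: burden not met
            atts.add((aux, a))              # the aux argument defeats A
    return grounded_labelling(args, atts)   # final, (m+1)-th evaluation


if __name__ == "__main__":
    # Example 2 at the object level: B1 (non-discrimination) and C1
    # (discrimination) rebut each other; the burden is on B1's conclusion.
    args = {"A0", "A1", "A2", "B0", "B1", "C0", "C1"}
    atts = {("B1", "C1"), ("C1", "B1")}
    print(bp_evaluation(args, atts, ["B1"]))   # B1 ends up OUT, C1 IN
```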
5. Conclusions
In this paper we have presented a meta-argumentation approach to the burden of persuasion in argumentation. Our approach relies on the works in [5, 6], introducing only the required abstractions at the meta level. In particular, [5] presents the first formalisation of meta-argumentation synthesising bimodal graphs, structured argumentation, and argument schemes in a unique framework. There, a formal definition of the meta-ASPIC framework is provided as a model for representing object arguments. Along the same line, [6] exploits bimodal graphs for dealing with the trust of argument sources. In [6], ASPIC+ is used at the object level instead of meta-ASPIC, together with a set of meta-predicates relating the object-level arguments to the schemes at the meta level, as in our approach. Both [5] and [6] use critical questions for managing attacks at the meta level. Our framework and its model are based on these works, with a clear definition of all the burden abstractions at the meta level.

The reification of the meta level at the object level allows the concept of burden of persuasion to be properly dealt with—i.e., arguments burdened with persuasion have to be rejected when there is uncertainty about them. As a consequence, those arguments become irrelevant to the argumentation framework including them: not only do they fail to be included in the set of accepted arguments, but they are also unable to affect the status of the arguments they attack. We show how this model easily deals with all the nuances of burdens, such as reasoning over the concept of the burden itself, thus leading to a full-fledged, interoperable framework open to further extensions. The feasibility of the model is demonstrated by its implementation in Arg2P.

The model can be expanded in various ways; an open issue that we plan to address in future research concerns how to deal with defeat cycles including burdened arguments. More generally, we plan to study the properties of our meta-framework and the connection of our framework with meta-ASPIC for argumentation. We also plan to inquire into the way in which our model fits into legal procedures and enables their rational reconstruction.

Acknowledgments
The work has been supported by the "CompuLaw" project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 833647).

References
[1] M. Wooldridge, P. McBurney, S. Parsons, On the meta-logic of arguments, in: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, Association for Computing Machinery, New York, NY, USA, 2005, pp. 560–567. doi:10.1145/1082473.1082558.
[2] R. Calegari, G. Sartor, Burden of persuasion in argumentation, in: F. Ricca, A. Russo, S. Greco, N. Leone, A. Artikis, G. Friedrich, P. Fodor, A. Kimmig, F. Lisi, M. Maratea, A. Mileo, F. Riguzzi (Eds.), Proceedings 36th International Conference on Logic Programming (Technical Communications), ICLP 2020, volume 325 of Electronic Proceedings in Theoretical Computer Science, Open Publishing Association, Rende (CS), Italy, 2020, pp. 151–163. doi:10.4204/EPTCS.325.21.
[3] R. Calegari, G. Sartor, A model for the burden of persuasion in argumentation, in: S. Villata, J. Harašta, P. Křemen (Eds.), Legal Knowledge and Information Systems. JURIX 2020: The Thirty-third Annual Conference, volume 334 of Frontiers in Artificial Intelligence and Applications, IOS Press, Brno, Czech Republic, 2020, pp. 13–22. doi:10.3233/FAIA200845.
[4] H. Prakken, C. Reed, D. N. Walton, Dialogues about the burden of proof, in: 10th International Conference on Artificial Intelligence and Law, ACM, Bologna, Italy, 2005, pp. 115–124. doi:10.1145/1165485.1165503.
[5] J. Müller, A. Hunter, P. Taylor, Meta-level argumentation with argument schemes, in: International Conference on Scalable Uncertainty Management, volume 8078 of Lecture Notes in Computer Science, Springer, Washington, DC, USA, 2013, pp. 92–105. doi:10.1007/978-3-642-40381-1.
[6] G. Ogunniye, A. Toniolo, N. Oren, Meta-argumentation frameworks for multi-party dialogues, in: International Conference on Principles and Practice of Multi-Agent Systems, volume 11224 of Lecture Notes in Computer Science, Springer, Tokyo, Japan, 2018, pp. 585–593. doi:10.1007/978-3-030-03098-8.
[7] H. Prakken, An abstract framework for argumentation with structured arguments, Argument and Computation 1 (2010) 93–124. doi:10.1080/19462160903564592.
[8] D. Walton, C. Reed, F. Macagno, Argumentation Schemes, Cambridge University Press, United Kingdom, 2008. doi:10.1017/CBO9780511802034.
[9] S. Modgil, H. Prakken, The ASPIC+ framework for structured argumentation: a tutorial, Argument & Computation 5 (2014) 31–62. doi:10.1080/19462166.2013.869766.
[10] M. Caminada, L. Amgoud, On the evaluation of argumentation formalisms, Artificial Intelligence 171 (2007) 286–310. doi:10.1016/j.artint.2007.02.003.
[11] G. Vreeswijk, Abstract argumentation systems, Artificial Intelligence 90 (1997) 225–279. doi:10.1016/S0004-3702(96)00041-0.
[12] H. Prakken, AI & law, logic and argument schemes, Argumentation 19 (2005) 303–320. doi:10.1007/s10503-005-4418-7.
[13] P. Baroni, M. Caminada, M. Giacomin, An introduction to argumentation semantics, The Knowledge Engineering Review 26 (2011) 365–410. doi:10.1017/S0269888911000166.
[14] M. Kröll, R. Pichler, S. Woltran, On the complexity of enumerating the extensions of abstract argumentation frameworks, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, ijcai.org, Melbourne, Australia, 2017, pp. 1145–1152. doi:10.24963/ijcai.2017/159.
[15] G. Boella, D. M. Gabbay, L. van der Torre, S. Villata, Meta-argumentation modelling I: Methodology and techniques, Studia Logica 93 (2009) 297. doi:10.1007/s11225-009-9213-2.
[16] G. Pisano, R. Calegari, A. Omicini, G. Sartor, Arg-tuProlog: A tuProlog-based argumentation framework, in: F. Calimeri, S. Perri, E. Zumpano (Eds.), CILC 2020 – Italian Conference on Computational Logic. Proceedings of the 35th Italian Conference on Computational Logic, volume 2719 of CEUR Workshop Proceedings, Sun SITE Central Europe, RWTH Aachen University, CEUR-WS, Aachen, Germany, 2020, pp. 51–66. URL: http://ceur-ws.org/Vol-2710/paper4.pdf.
[17] G. Pisano, R. Calegari, A. Omicini, G. Sartor, A mechanism for reasoning over defeasible preferences in Arg2P, in: S. Monica, F. Bergenti (Eds.), CILC 2021 – Italian Conference on Computational Logic. Proceedings of the 36th Italian Conference on Computational Logic, volume 3002 of CEUR Workshop Proceedings, CEUR-WS, Parma, Italy, 2021, pp. 16–30. URL: http://ceur-ws.org/Vol-3002/paper10.pdf.

A. Proofs
Proof A.1. The proof is straightforward. The burden of persuasion semantics acts like the grounded semantics, with the only difference that burdened arguments that would have been UND for the latter are possibly OUT or IN for the former. As a consequence, burdened arguments and the arguments connected to them through the attack relation can change their state. Let us consider an argumentation graph 𝐴𝐹 = ⟨𝒜, ⇝⟩ and let 𝐿𝐺 be the grounded labelling resulting from the evaluation of 𝐴𝐹 under the grounded semantics.
With respect to our framework, and in particular to the bimodal argumentation graph 𝐺 = ⟨𝒜𝑂, 𝒜𝑀, ℛ𝑂, ℛ𝑀, 𝒮𝐴, 𝒮𝑅⟩, we have, by construction, that every node at the object level, if not burdened, has an undisputed supporting argument at the meta level (S1 or S4). As a consequence, the meta level has no influence on non-burdened arguments, and – in the absence of burdened arguments – the evaluation of the object-level graph under the grounded semantics would be equal to 𝐿𝐺. Indeed, the meta level influences only the burdened arguments' state. Accordingly, the extent of this influence and its consequences on the object-level graph are considered in the following.

Let us consider a single argument 𝐴 ∈ 𝒜 allocated with the burden of persuasion—thus having the additional argument 𝐵 ∈ 𝒜 stating the burden on 𝐴 (as depicted in Figure 1). Computing the stable semantics on the meta-level graph will produce the following scenarios:
Stable.a the burden on 𝐴 cannot be proved;
Stable.b the burden on 𝐴 can be proved and the burden is met;
Stable.c the burden on 𝐴 can be proved and the burden is not met.

Accordingly, the stable evaluation of the meta-graph produces three different perspectives of the object level: (i) argument 𝐴 is supported—it is not burdened; (ii) argument 𝐴 is supported—it satisfies the burden; (iii) argument 𝐴 is not supported, and then it is excluded from the object-level graph—it does not meet the burden and is therefore refuted. In particular, we have that Stable.a induces (i), Stable.b leads to (ii), while Stable.c induces (iii).

Let 𝐿𝐵𝑃 be this new object-level labelling (obtained by the reification of the meta-level stable semantics at the object level), and let us compare 𝐿𝐵𝑃 with the initial object-level grounded labelling 𝐿𝐺. Then, the following cases can occur (E marks admissible solutions with labelling equivalence, while C marks solutions to be discarded).

• 𝐵 is OUT or UND in 𝐿𝐺.
E1 If (i), the burden is not allocated and cannot be proven: the meta level does not influence the object level, supporting all unburdened arguments. 𝐶𝑄𝑆1 is satisfied and 𝐿𝐵𝑃 is equivalent to 𝐿𝐺.
C1 If (ii) or (iii), in both cases 𝐶𝑄𝑆0 is not satisfied—the burden is proved at the meta level and not at the object level.
• 𝐵 is IN and 𝐴 is OUT in 𝐿𝐺.
C2 If (i), we have an inconsistency on 𝐶𝑄𝑆1—the burden is proved at the object level and not at the meta level.
C3 If (ii), we have an inconsistency on 𝐶𝑄𝑆2, since 𝐴 is considered admissible at the meta level (supported by the meta-argument) but 𝐴 is OUT at the object level.
E2 If (iii), 𝐴 is not supported, i.e., it is removed from the object-level graph. 𝐶𝑄𝑆0 and 𝐶𝑄𝑆3 are both satisfied. Then, under the grounded semantics, the removal of an OUT argument from a graph does not influence its evaluation, i.e., 𝐿𝐵𝑃 is equivalent to 𝐿𝐺 (see footnote 2).
• 𝐵 is IN and 𝐴 is IN in 𝐿𝐺.
C4 If (i), we have an inconsistency on 𝐶𝑄𝑆1—the burden is proved at the object level and not at the meta level.
E3 If (ii), then 𝐶𝑄𝑆0 and 𝐶𝑄𝑆2 are both satisfied and 𝐿𝐵𝑃 is equal to 𝐿𝐺.
C5 If (iii), we have an inconsistency because 𝐶𝑄𝑆3 is not satisfied.
• 𝐵 is IN and 𝐴 is UND in 𝐿𝐺.
C6 If (i), we have an inconsistency on 𝐶𝑄𝑆1—the burden is proved at the object level and not at the meta level.
C7 If (ii), we have an inconsistency since 𝐴 is considered admissible at the meta level (supported by the meta-argument) but 𝐴 is UND at the object level—𝐶𝑄𝑆2 is not satisfied.
E4 If (iii), 𝐴 is not supported, i.e., it is removed from the object level, i.e., it can be labelled as OUT in 𝐿𝐵𝑃 (see footnote 2).
𝐶𝑄𝑆0 and 𝐶𝑄𝑆3 are satisfied.

As made evident by the proof, the reification of the meta level upon the object level generates multiple solutions; yet, only one solution for each case can be considered admissible w.r.t. the critical questions. Moreover, the only admissible perspective coincides with the one generated from the bp-labelling in [3]—the burdened argument is labelled OUT in case of indecision (E4). Of course, the proof can be generalised to configurations taking into account any number of burdened arguments—where, of course, the number of combinations grows exponentially with the number of burdened arguments.

² It can trivially be proved by considering that – in the grounded semantics – an OUT argument does not affect the other arguments' state, i.e., it is irrelevant and can be removed; of course, the dual proposition also holds, i.e., if the 𝐿𝐵𝑃 built in the meta-framework does not consider an argument, it can be labelled as OUT in the grounded bp-labelling.