<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Mechanism for Reasoning over Defeasible Preferences in Arg2P</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Giuseppe</forename><surname>Pisano</surname></persName>
							<email>g.pisano@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="laboratory">ALMA-AI Interdepartmental Center of Human Centered AI</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roberta</forename><surname>Calegari</surname></persName>
							<email>roberta.calegari@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="laboratory">ALMA-AI Interdepartmental Center of Human Centered AI</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrea</forename><surname>Omicini</surname></persName>
							<email>andrea.omicini@unibo.it</email>
							<affiliation key="aff1">
								<orgName type="department">Dipartimento di Informatica - Scienza e Ingegneria (DISI)</orgName>
								<orgName type="institution" key="instit1">Alma Mater Studiorum</orgName>
								<orgName type="institution" key="instit2">Università di Bologna</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giovanni</forename><surname>Sartor</surname></persName>
							<email>giovanni.sartor@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="laboratory">ALMA-AI Interdepartmental Center of Human Centered AI</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">A Mechanism for Reasoning over Defeasible Preferences in Arg2P</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">1778CA9217BB6B238070709B19A9D765</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T20:40+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Argumentation</term>
					<term>Defeasible preferences</term>
					<term>Arg2P</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper introduces argumentation over defeasible preferences in Arg2P, an argumentation framework based on logic programming. A computational mechanism is first implemented in Arg2P according to Dung's defeasible preference model, then generalised to enable arbitrary preference relations over arguments.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>This work focuses on argumentation with conditional preferences within the Arg2P (short for Arg-tuProlog) framework <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b1">2]</ref>. The topic has already been tackled from a theoretical perspective by various works - e.g., <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b5">6]</ref> - but, to the best of our knowledge, a complete working implementation of this functionality does not yet exist <ref type="bibr" target="#b0">[1]</ref>. Dung et al. <ref type="bibr" target="#b5">[6]</ref> cope with the problem of defeasible preferences while maintaining compatibility with the standard approach to the semantics of abstract argumentation <ref type="bibr" target="#b4">[5]</ref> - thus providing the basis for our implementation. Accordingly, in this paper we first discuss an efficient implementation of the model presented in <ref type="bibr" target="#b5">[6]</ref>, along with some examples.</p><p>We then propose an improvement of the computational mechanism for dealing with defeasible preferences, in order to support a broader set of argument ordering relations - such as the weakest/last-link and elitist/democratic orderings introduced in ASPIC+. The choice of ordering is central to the construction of an argumentation framework, and it heavily depends on the application area: supporting more orderings therefore greatly broadens the applicability of the tool. Since the original model <ref type="bibr" target="#b5">[6]</ref> only supports a smaller set of orderings and does not deal with aggregated preferences, the final goal of our generalisation is to overcome this limitation.</p><p>The paper is structured as follows. 
Section 2 overviews background notions - i.e., the formal argumentation framework, the conditional preferences model, and the Arg2P language. Section 3 introduces the Arg2P implementation of the argumentation framework of <ref type="bibr" target="#b5">[6]</ref> along with some examples. Section 4 discusses the improvement of the previously implemented mechanism, enabling the use of the argument orderings introduced in ASPIC+ <ref type="bibr" target="#b8">[9]</ref>. Section 5 concludes the work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Preliminary notions</head><p>In the following we introduce the structured argumentation model adopted in Arg2P (Subsection 2.1) - an ASPIC+-like argumentation system <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b8">9]</ref>. Preferences are introduced in Subsection 2.2 according to the model presented in <ref type="bibr" target="#b5">[6]</ref>. Finally, the Arg2P language is discussed in Subsection 2.3.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Structured Argumentation Framework (AF)</head><p>In this section we introduce the argumentation language, arguments, and the attack relations between them. As usual, a literal is an atomic proposition or its negation.</p><p>Notation 1 For any literal ϕ, its complement is denoted by φ. That is, if ϕ is a proposition p, then φ = ¬p, while if ϕ is ¬p, then φ is p.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Literals are brought into relation through rules.</head><p>Definition 1 (Rules). A strict rule r has the form: ρ : ϕ 1 , ..., ϕ n → ψ while a defeasible rule has the form: ρ : ϕ 1 , ..., ϕ n ⇒ ψ in both cases with 0 ≤ n, and where ρ is the unique identifier for r, denoted by N(r); each ϕ 1 , . . . , ϕ n , ψ is a literal; the set {ϕ 1 , . . . , ϕ n } is denoted by Antecedent(r) and ψ by Consequent(r).</p><p>A strict rule is an expression indicating that if ϕ 1 , . . . , ϕ n hold, then ψ holds without exception. Strict rules are used for representing strict (sound) knowledge and are denoted by StrictRules. Defeasible rules - denoted by DefRules - are rules that can be defeated by contrary evidence. Pragmatically, a defeasible rule is used to represent defeasible knowledge, i.e., tentative information that may be used as long as nothing can be posed against it. For simplicity, we define axioms and non-axiom premises via strict and defeasible rules with an empty Antecedent. A theory consists of a set of rules.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 2 (Theory).</head><p>A defeasible theory is a tuple ⟨Rules⟩ where Rules ⊆ StrictRules ∪ DefRules.</p><p>Arguments are built from strict and defeasible rules. Given the rules of a defeasible theory, we can construct arguments by chaining them, as specified in the following definition, cf. <ref type="bibr" target="#b8">[9]</ref>. Definition 3 (Argument). An argument A constructed from a defeasible theory ⟨Rules⟩ is a finite construct of the form: A : A 1 , . . . , A n → r ϕ or A : A 1 , . . . , A n ⇒ r ϕ with 0 ≤ n, where r is the top rule of A, denoted by TopRule(A); - A is the argument's unique identifier; - A 1 , . . . , A n are A's direct subarguments, denoted by DirectSub(A); - Sub(A) denotes the entire set of subarguments of A, i.e., Sub(A) = Sub(A 1 ) ∪ . . . ∪ Sub(A n ) ∪ {A}; - ϕ is the conclusion of the argument, denoted by Conc(A); - DefRules(A) is the set of defeasible rules exploited to build argument A; - LastDefRules(A) is the set of the last defeasible rules exploited to build argument A, where</p><formula xml:id="formula_0">LastDefRules(A) = {TopRule(A)} if TopRule(A) is defeasible; ∪_{A_n ∈ DirectSub(A)} LastDefRules(A_n) otherwise</formula><p>Arguments can be in conflict, according to two kinds of attack: rebutting and undercutting, defined as in <ref type="bibr" target="#b8">[9]</ref>. We assume the postulates of consistency and closure for argument-based systems, ensuring that strict rules cannot entail contradictory conclusions - i.e., we assume that 1) the set of strict rules is closed under transposition, and 2) the strict closure of the empty set is directly consistent <ref type="bibr" target="#b3">[4]</ref>. An argumentation graph can then be defined exploiting arguments and attacks.</p><p>Definition 5 (Argumentation framework and graph). 
An argumentation framework, also known as an abstract argumentation framework, constructed from a defeasible theory T is a tuple ⟨A, ⇝⟩, where A is the set of all arguments constructed from T , and ⇝ is the attack relation over A. The corresponding argumentation graph is the directed graph whose nodes are the arguments and whose arcs are interpreted as attacks.</p><p>The expression structured argumentation framework in the following specifies that the internal construction of the arguments is provided in the framework (in particular, as rules and their chaining).</p><p>Notation 2 Given an argumentation graph G = ⟨A, ⇝⟩, we write A G and ⇝ G to denote the graph's arguments and attacks respectively. We also write dbut ⇝ and dcut ⇝ to denote the sets of direct rebuts and direct undercuts.</p>
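To fix intuitions, the constructs of Definitions 1-3 can be sketched in Python. This is a hypothetical representation for illustration only, not Arg2P's internal one; all class and field names are our own.

```python
from dataclasses import dataclass

# Sketch of Definitions 1-3 (hypothetical names, not Arg2P's API).
@dataclass(frozen=True)
class Rule:
    name: str          # rho, the unique identifier N(r)
    antecedent: tuple  # literals phi_1 .. phi_n
    consequent: str    # literal psi
    strict: bool       # True for ->, False for =>

@dataclass(frozen=True)
class Argument:
    name: str
    top_rule: Rule
    direct_sub: tuple = ()  # DirectSub(A)

    def conc(self):
        return self.top_rule.consequent

    def sub(self):
        # Sub(A) = Sub(A1) ∪ ... ∪ Sub(An) ∪ {A}
        s = {self}
        for a in self.direct_sub:
            s |= a.sub()
        return s

    def last_def_rules(self):
        # LastDefRules(A): the top rule if defeasible,
        # otherwise the union over the direct subarguments.
        if not self.top_rule.strict:
            return {self.top_rule}
        out = set()
        for a in self.direct_sub:
            out |= a.last_def_rules()
        return out

# Two rules from the Sherlock Holmes example of Section 3 (partial chaining).
d1 = Rule("d1", (), "inno(p1)", strict=False)
r2 = Rule("r2", ("inno(p2)", "inno(s)"), "-inno(p1)", strict=True)
A0 = Argument("A0", d1)
A1 = Argument("A1", r2, (A0,))
print(sorted(r.name for r in A1.last_def_rules()))  # ['d1']: the recursion climbs past the strict top rule
```

Note how LastDefRules skips strict top rules and collects the defeasible rules closest to the conclusion, as required by the ordering relations discussed later.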
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Preference-Based Argumentation Framework</head><p>Dung et al. <ref type="bibr" target="#b5">[6]</ref> introduce a conservative generalisation of structured argumentation frameworks aimed at dealing with arguments expressing preferences over rules. Based on that, in the following we define a preference-based argumentation framework. First, we define preference arguments and the corresponding set.</p><p>Definition 6 (Preference Argument). A preference argument constructed from a defeasible theory T is an argument A such that Conc(A) has the form N (r1) ≺ N (r2) where r1 and r2 are in DefRules.</p><p>Notation 3 (Preference Arguments Set) We denote the set of preference arguments in an argumentation framework with A P .</p><p>Given the notion of preference arguments, we introduce the notion of Preference-Based AF (PAF) so as to reason over such arguments <ref type="bibr" target="#b5">[6]</ref>. In short, a PAF is a transformation of a standard AF that also accounts for preference arguments. This means defining how preference arguments interact with other arguments - i.e., how the admissibility of one argument A ∈ A P impacts other arguments and, more precisely, how the attack relation involving these new arguments is defined.</p><p>The key idea in <ref type="bibr" target="#b5">[6]</ref> is to transform the AF direct attacks into arguments that can then be attacked and challenged by preference arguments. Conditional preferences thus attack these new arguments, challenging their effect. Converting attacks into arguments requires rebuilding the attack set, since new arguments have to be considered - namely, (a) attacks coming from attacks transformed into arguments, and (b) attacks involving preference arguments.</p><p>Following this idea, in a PAF the attack relation is built taking into account a) and b). 
With respect to a), existing AF direct attacks - i.e., dbut, dcut - are transformed into arguments and added to the argument set A. All the attacks belonging to the ⇝ set are transformed consistently.</p><p>Notation 4 Let us denote with datt the set of all the attacks resulting from the "attacks into arguments" transformation. Note that attacks in datt go from old attacks transformed into arguments to either an original argument already existing in the AF or to an old attack also transformed into an argument.</p><p>Example 1. For instance, let us consider an argumentation framework AF with three arguments - A, B and B 1 where Sub(B 1 ) = {B} - and two attacks: one direct rebut from A to B, and one rebut attack from A to B 1 . The transformation (point a)) only impacts the first attack (being a direct attack). A new argument AT T representing the attack from A to B is thus added to the new argumentation framework (PAF), together with the corresponding attack from AT T to B. As concerns the attack from A to B 1 , it is not necessary to convert it into an argument - not being a direct attack - but it is enough to change its source from A to AT T . The resulting argumentation framework PAF is thus composed of four arguments - namely A, B, B 1 , AT T - and two attacks - one from AT T to B and one from AT T to B 1 (see Fig. <ref type="figure" target="#fig_1">1</ref>). With respect to b), new attacks are introduced, capturing the contrast between a preference argument and an attack argument - i.e., reifying the notion of a preference. The idea is that the preference for argument A over argument B is an attack against the view that B can successfully attack A: prevailed-over arguments cannot attack the prevailing ones. 
It follows that if a preference argument according to which A prevails over B is admissible, then the attack from B to A is rejected from the extension containing the preference argument.</p><p>Notation 5 Let us denote with patt the set of all attacks from a single preference argument to an attack argument.</p><p>Example 2. Let us consider the PAF from Example 1. Consider then a fourth argument P claiming the superiority of argument B over argument A - this is a simplification, since such arguments actually express preferences over defeasible rules. With respect to b), this scenario would result in a new attack from P to AT T , where AT T is the argument representing the attack from A to B (Fig. <ref type="figure" target="#fig_2">2</ref>). Indeed, since B is preferred to A, the latter cannot successfully attack the former. </p><formula xml:id="formula_1">-⇝ T = datt ∪ patt where: • datt ⊆ (dbut ⇝ ∪ dcut ⇝ ) × A T such that ((A, B), X) ∈ datt iff * X ∈ A and B ∈ Sub(X), or * X = (C, D) ∈ (dbut ⇝ ∪ dcut ⇝ ) and B ∈ Sub(C) • patt ⊆ A × dbut ⇝ such that (X, (A, B)) ∈ patt iff ∃d ∈ LastDefRules(A) such that d &lt; TopRule(B) ∈ Conc(X).</formula><p>The key point in the original work <ref type="bibr" target="#b5">[6]</ref> is the analysis of the relation used to identify the attacks in patt. This analysis is outside the scope of this paper, since our focus is on the computational properties of the AF to P AF transformation; for further details we refer to the original work. Note that our definition of patt includes a specific use of preferences to determine (in)effective attacks, i.e., a rebutting attack fails if directed against a preferred rule in the attacked argument.</p></div>
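The AF-to-PAF construction above can be sketched as follows. This is a simplified model under stated assumptions (arguments are plain strings, attacks are (source, target) pairs, and the patt relation is given explicitly as a mapping); the function name and signature are our own, not Arg2P's.

```python
# Sketch of the AF -> PAF transformation: direct attacks are reified into
# "attack arguments", non-direct attacks are re-sourced, and preference
# arguments attack the reified attacks (the patt relation).

def to_paf(arguments, direct_attacks, other_attacks, pref_attackers):
    """direct_attacks: attacks to be reified (dbut/dcut).
    other_attacks: attacks whose source is redirected when it was reified.
    pref_attackers: {attack: [preference arguments attacking it]} (patt)."""
    att_args = {att: f"ATT{i}" for i, att in enumerate(direct_attacks)}
    new_arguments = list(arguments) + list(att_args.values())
    new_attacks = []
    for (a, b), name in att_args.items():
        new_attacks.append((name, b))  # the reified attack hits its old target
    for (a, b) in other_attacks:
        # a non-direct attack keeps its target but is re-sourced to the
        # attack-argument reifying its source's direct attack (simplified:
        # first match on the source)
        src = next((n for (x, _), n in att_args.items() if x == a), a)
        new_attacks.append((src, b))
    for att, prefs in pref_attackers.items():  # patt: preference vs attack
        for p in prefs:
            new_attacks.append((p, att_args[att]))
    return new_arguments, new_attacks

# Examples 1 and 2: A direct-rebuts B; A also rebuts B1 (non-direct);
# P claims B's superiority over A.
args, atts = to_paf(["A", "B", "B1", "P"],
                    direct_attacks=[("A", "B")],
                    other_attacks=[("A", "B1")],
                    pref_attackers={("A", "B"): ["P"]})
print(args)  # ['A', 'B', 'B1', 'P', 'ATT0']
print(atts)  # [('ATT0', 'B'), ('ATT0', 'B1'), ('P', 'ATT0')]
```

The output reproduces the PAF of Fig. 2: the reified attack ATT0 now carries both attacks on B and B1, and P challenges ATT0.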
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Arg2P syntax</head><p>Arg2P <ref type="bibr" target="#b9">[10]</ref> is a modular framework for structured argumentation based on logic programming <ref type="bibr" target="#b2">[3]</ref>. In this section we briefly introduce the Arg2P language as needed for understanding the examples discussed in the paper. For a more detailed introduction to the tool please refer to <ref type="bibr" target="#b9">[10]</ref>. In Arg2P, rules can be expressed by exploiting the operators =⇒ = {− &gt; , =&gt;} - where − &gt; stands for strict implication and =&gt; for the defeasible one - with the notation: label : premise 1 , . . . , premise n =⇒ conclusion. where label is the identifier (unique name) of a rule and premise 1 , . . . , premise n are the premises that entail the conclusion.</p><p>Accordingly, axioms/defeasible axioms take the form: label : [] =⇒ conclusion. Premises and conclusions can take any form accepted in Prolog: i.e., all Prolog terms - e.g. atoms, variables, lists, compound terms - are allowed. The binary attack relation between arguments - which underpins rebutting attacks - can be expressed via term negation. Strong negation - negation as definite falsity - is encoded by prepending the − operator to a term (i.e. −term). Undercut attacks can be expressed exploiting the notation: label 2 : premise 1 , . . . , premise n =⇒ undercut(label 1 ) where label 1 is the identifier of a defeasible rule in the theory. Preferences over rules can be expressed with the notation sup(r1, r2) where r1 and r2 are the rules' unique identifiers. The Arg2P framework implements a graph-based mode providing as output the entire argumentation graph according to the specified semantics: the evaluation can be requested via the buildLabelSets predicate.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Defeasible preferences mechanism in Arg2P</head><p>The model presented in Subsection 2.2 is well-suited to be implemented in Arg2P: since it is self-contained, it can be easily modularised and integrated with the existing model. The transformation from AF to PAF fully generalises the mechanisms and semantics used for standard argumentation frameworks, thus enabling the reuse of standard tools already implemented in the framework. Conceptually, we can handle the PAF model as a function F that, given an argumentation framework A F , returns a new argumentation framework B F . How we evaluate and handle framework B F w.r.t. A F is left unchanged. Accordingly, the implementation of the model in Arg2P only requires the creation of a new optional module responsible for the framework's transformation. Users can then determine how to handle preferences by combining the most suitable modules for the evaluation. The remainder of this section shows how function F is defined and implemented, according to the model in Subsection 2.2. First of all, attacks are split into two sets: attacks not influential for the purpose of the transformation - i.e., attacks not affected by preferences, which can be translated as-is into the final framework (ValidAttacks) - and attacks that should be considered invalid and thus transformed (InvalidAttacks) (line 3, Listing 1.1). After that, the transformation of the framework begins. First, direct attacks are converted into arguments, also adding the newly derived attacks - point a) of the transformation (see Subsection 2.2) (line 6, Listing 1.1). Then, conforming to transformation point b) (see Subsection 2.2), attacks from preference arguments to the newly created attack arguments are built, in accordance with the preference attack relation (line 8, Listing 1.1).</p></div>
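The initial attack-splitting step can be sketched as a small helper. This is a paraphrase of line 3 of Listing 1.1 under a simplifying assumption: the set of attacks targeted by some preference argument (the targets of patt) is given directly; the function name is hypothetical.

```python
# Sketch of the attack-filtering step: only attacks referenced by some
# preference attack need to be reified; the rest pass through unchanged.

def split_attacks(direct_attacks, preference_targets):
    """preference_targets: the set of attacks that some preference
    argument attacks. Returns (ValidAttacks, InvalidAttacks)."""
    invalid = [a for a in direct_attacks if a in preference_targets]
    valid = [a for a in direct_attacks if a not in preference_targets]
    return valid, invalid

valid, invalid = split_attacks(
    direct_attacks=[("A14", "A2"), ("A13", "A2"), ("A3", "A9")],
    preference_targets={("A14", "A2"), ("A13", "A2")})
print(valid)    # [('A3', 'A9')] -- translated as-is into the final framework
print(invalid)  # [('A14', 'A2'), ('A13', 'A2')] -- reified into attack arguments
```

Keeping ValidAttacks untouched is what keeps the transformed graph small and close to the original, as discussed next.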
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Transformation algorithm</head><p>Note that w.r.t. the original algorithm, the filtering of the attacks (line 3 in Listing 1.1) is introduced in order to enhance the comprehensibility of the final graph and improve the computational performance. The set (dbut ⇝ ∪ dcut ⇝ ) is filtered to exclude from the transformation those attacks not referenced by an element in patt; the transformation is then applied only to the InvalidAttacks subset, while the remaining ValidAttacks are transposed as-is into the final graph. From a computational perspective, the graph is kept as small as possible and as faithful as possible to the original, thus avoiding useless transformations.</p><p>Proposition 1. The transformation algorithm in Listing 1.1 is sound w.r.t. the model in Subsection 2.2 by <ref type="bibr" target="#b5">[6]</ref>.</p><p>Proof. The transformation algorithm is almost completely faithful to the original model, thus ensuring the soundness of its results w.r.t. the original. The only difference is in the filtering part, where we avoid transforming those attacks not influenced by any preference. It is easy to prove that the introduced optimisation does not impact the general soundness of the transformation algorithm. It is already proved in <ref type="bibr" target="#b5">[6]</ref> that a PAF, in the absence of preference arguments, delivers the same results as a standard AF. As a consequence, if an attack does not interact with a preference argument, it is irrelevant whether it has been transformed or not. Thus, the filtering of the irrelevant attacks does not impact the final outcome of the evaluation w.r.t. the unfiltered graph.</p><p>Example 3. Let us consider the Sherlock Holmes example proposed in <ref type="bibr" target="#b5">[6]</ref>. A murder investigation involving three persons - namely p1, p2 and s - is considered. 
It is known for sure that the murderer acted alone, s was physically unable to kill the victim, and p1 benefited from the murder.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Listing 1.2. Example 3 rules in Arg2P</head><p>r1 : inno(p1), inno(s) -&gt; -inno(p2). r2 : inno(p2), inno(s) -&gt; -inno(p1). r3 : inno(p1), inno(p2) -&gt; -inno(s). r4 : beneficiary(p1) -&gt; have_motive(p1). r5 : beneficiary(p2) -&gt; have_motive(p2). pi1 : have_motive(p1), -have_motive(p2) -&gt; sup(d2, d1). pi2 : -have_motive(p1), have_motive(p2) -&gt; sup(d1, d2).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>d : [] =&gt; inno(s). d1 : [] =&gt; inno(p1). d2 : [] =&gt; inno(p2). d3 : [] =&gt; -have_motive(p1). d4 : [] =&gt; -have_motive(p2). f1 : [] -&gt; inno(s). f2 : [] -&gt; beneficiary(p1).</head><p>Listing 1.2 shows the Arg2P encoding of the case. Strict rules r1, r2, and r3 encode the fact that only one of the suspects can be the murderer - i.e., if two of them are innocent, then the third is guilty. Strict rules r4 and r5 derive the motivation to murder from the possible benefits. Five unconditional defeasible rules work as assumptions: d, d1, and d2 encode the principle that persons are innocent unless proven guilty; d3 and d4 assume that p1 and p2 both have no motive to kill. There are also two facts, f1 and f2, asserting the knowledge of s's innocence (f1) and p1's profit from the murder (f2). Finally, two additional strict rules - pi1 and pi2 - conclude a preference over rules. In particular, if p1 had a motive to kill while p2 had not, then the assumption of p2's innocence is stronger than p1's (pi1). Conversely, if p1 had no motive to kill while p2 had one, the preference goes to p1's innocence (pi2).</p><p>Fig. <ref type="figure">3</ref> shows the evaluation results under the grounded semantics in a standard argumentation framework - i.e., with no defeasible preferences handling - while Fig. <ref type="figure" target="#fig_0">4</ref> shows the results with the transformation module enabled. As expected, without any preference on the rules the graph remains undecided - it is not possible to decide on the arguments concluding inno(p1) (inno(p2)) and ¬inno(p1) (¬inno(p2)). Conversely, after enabling defeasible reasoning on preferences, it is possible to decide for p2's innocence, thus condemning p1. The latter result is caused by argument A15 (Fig. <ref type="figure" target="#fig_0">4</ref>). 
The claimed preference of d2 over d1 impacts only two direct rebuts: the one from A14 to A2 and the one from A13 to A2. Consequently, these are the only attacks transformed into arguments (A7 and A8) - thanks to the filtering step of the transformation algorithm. The attacks coming from these two arguments are the same: they attack argument A2 and all its derived arguments (A10, A11 and A14). If we evaluate the graph, the preference argument (A15) is undisputed. This leads to the rejection of the two attack arguments A7 and A8 and to the admissibility of argument A2 on p2's innocence. The rest of the graph is computed accordingly.</p><p>Example 4. We now consider an example of preference reasoning from <ref type="bibr" target="#b10">[11]</ref> showing a case of multiple nested preferences. Listing 1.3 shows the Arg2P theory. Two unconditional rules are stated - r0 and r1 - concluding the two conflicting literals a and ¬a. Then a defeasible rule (r1) claims b if a is proved, and three unconditional rules conclude a preference: r4 concluding the priority of r0 over r3, r5 the priority of r3 over r0, and r6 the priority of r5 over r4.</p><p>Taking no preference into account, the evaluation under the grounded semantics is undecided: the arguments for a and ¬a rebut each other, and consequently the one concluding b is undecided as well. Intuitively, r5 states the strength of the ¬a claim and r6 enforces this priority against the one in r4. Accordingly, we expect the argument for ¬a to be accepted, thus rejecting the opponent arguments. Fig. <ref type="figure" target="#fig_5">5</ref> shows the PAF built on the theory and its evaluation under the grounded semantics. Argument A1 for ¬a is accepted - thus causing the rejection of arguments A0 (a) and A8 (b). The accepted preferences are two: the one of r5 over r4, which is undisputed, and the one of r3 over r0. As a consequence, the attack from argument A2 to A3 (argument A7) and the one from A0 to A1 (argument A5) are both rejected.</p></div>
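The grounded evaluations used in these examples follow the usual fixpoint construction. A generic sketch (not Arg2P's buildLabelSets; the labelling loop below is a standard textbook formulation), applied to the PAF of Example 2:

```python
# Sketch of grounded-semantics labelling: repeatedly label an argument IN
# if all its attackers are OUT, OUT if some attacker is IN, until a
# fixpoint is reached; everything else stays UNDECIDED.

def grounded(arguments, attacks):
    attackers = {a: {s for (s, t) in attacks if t == a} for a in arguments}
    label = {a: "UND" for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if label[a] != "UND":
                continue
            if all(label[b] == "OUT" for b in attackers[a]):
                label[a] = "IN"; changed = True
            elif any(label[b] == "IN" for b in attackers[a]):
                label[a] = "OUT"; changed = True
    return label

# The PAF of Example 2: P is unattacked, so it defeats ATT, reinstating B.
lab = grounded(["A", "B", "B1", "P", "ATT"],
               [("ATT", "B"), ("ATT", "B1"), ("P", "ATT")])
print(lab)  # {'A': 'IN', 'B': 'IN', 'B1': 'IN', 'P': 'IN', 'ATT': 'OUT'}
```

The undisputed preference argument P rejects the reified attack, and B and B1 are accepted, mirroring how A15 rejects A7 and A8 in Example 3.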
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Generalising the reasoning mechanism</head><p>The main limitation of the model described in Subsection 2.2 is the impossibility of dealing with attacks coming from a set of preferences, a.k.a. group preference attacks: preference attacks are only allowed from a single preference to an attack. The issue is well known in the literature and already discussed in papers dealing with defeasible preferences; see for instance <ref type="bibr" target="#b7">[8]</ref>, where an extended argumentation framework handling group attacks is proposed. The problem was also tackled in the work by Modgil <ref type="bibr" target="#b6">[7]</ref>, where the authors leverage a sort of super-argument to join preferences. This approach was then refined in the logical model proposed in <ref type="bibr" target="#b7">[8]</ref>.</p><p>We believe that the super-argument approach could be an optimal solution to integrate group preference attacks in the Arg2P framework. The extension of the computational mechanism under this perspective requires just a minor modification to the transformation algorithm (Listing 1.1), maintains all the computational properties of the standard implementation, and allows overcoming the limitation of the model in Subsection 2.2. Hence, following the insights from <ref type="bibr" target="#b6">[7]</ref>, we propose a generalisation of the already-presented reasoning mechanism coping with the group preference attack problem and thus enabling the use of different argument ordering relations. In a nutshell, we leverage artificial arguments joining sets of preferences. Attacks no longer come directly from preferences but are issued by these artificial arguments. In the following, we examine our approach in detail with some examples. We cannot here address the corresponding extension of the formal model of Subsection 2.2, which we leave to future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">The new algorithm</head><p>In the model discussed in Subsection 2.2, attacks ∈ patt can be expressed in the form (X, (A, B)), where X is a preference argument and (A, B) is an attack transformed into an argument. In the generalised mechanism, X is no longer a preference argument but rather an artificial argument constructed from the involved preference arguments. In order to cope with such an extension, we first introduce the grouped preference set A GP defined as: Definition 8 (Grouped Preference Set). Given a set of preference arguments A P in an argumentation framework, the set of all the possible subsets of A P , i.e., A GP = 2 A P , is the grouped preference set. Based on the grouped preference set, we define a further kind of argument complementing Definition 3.</p></div>
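The grouped preference set of Definition 8 is simply the powerset of A P. A sketch (in practice an implementation would enumerate only the subsets actually needed, since 2^|A P| grows quickly; the function name is our own):

```python
from itertools import chain, combinations

# Sketch of Definition 8: A_GP = 2^(A_P), the set of all subsets of the
# preference arguments; each subset can act as a joint preferences argument.

def grouped_preference_set(pref_args):
    s = list(pref_args)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

A_GP = grouped_preference_set(["P1", "P2"])
print(sorted(sorted(g) for g in A_GP))  # [[], ['P1'], ['P1', 'P2'], ['P2']]
```

The joint preferences argument of Definition 9 then corresponds to one element of this set, with its members as direct subarguments.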
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 9 (Joint preferences argument). Every argument set</head><p>A ∈ A GP is called a joint preferences argument. The direct subarguments of a joint preferences argument A are all the preference arguments X ∈ A .</p><p>We also write A ≻ X B to say that an argument A is preferred to an argument B according to the preferences in a joint preferences argument X ∈ A GP . Then, we modify Definition 7 as follows:</p><p>Definition 10 (Generic Preference-Based AF). Given an argumentation framework T = ⟨A, ⇝⟩, a generic preference-based argumentation framework (GPAF) of T is an abstract argumentation framework F K = ⟨A T , ⇝ T ⟩ where</p><formula xml:id="formula_2">-A T = A ∪ dbut ⇝ ∪ dcut ⇝ ∪ A GP , and -⇝ T = datt ∪ patt where: • datt ⊆ (dbut ⇝ ∪ dcut ⇝ ) × A T such that ((A, B), X) ∈ datt iff * X ∈ A and B ∈ Sub(X), or * X = (C, D) ∈ (dbut ⇝ ∪ dcut ⇝ ) and B ∈ Sub(C) • patt ⊆ A GP × dbut ⇝ such that (X, (A, B)) ∈ patt iff B ≻ X A.</formula><p>Intuitively, we add to the arguments all the existing preference aggregations in A GP . Then, attacks in patt are not determined by single arguments (elements of A P ) but by joint preferences arguments (elements of A GP ). If we consider once again Example 2, the corresponding GPAF would contain a new argument AP with P as direct subargument, and the preference attack would come from AP and not from P as in the standard PAF (Fig. <ref type="figure" target="#fig_6">6</ref>).  Proof. Let us consider a PAF. For each A ∈ patt such that A = (X, (B, C)), a new argument {X} ∈ A GP is built, and the patt set is modified accordingly - i.e., (patt − A) ∪ {({X}, (B, C))}. We have to prove that the modified graph delivers the same results as the original one. Since {X}, like any argument in A GP , cannot be directly rebutted/undercut (being artificially added to the argumentation framework), the status of {X} in the GPAF is the same as that of X in the PAF, since X is the only direct subargument of {X}. 
It follows that the PAF evaluation remains unaltered w.r.t. the initial transformation. W.r.t. the algorithm presented in Subsection 3.1, the new transformation involves an improved filtering strategy - taking into account preferences' combinations - and an extended construction of preference attacks. Indeed, the construction of the preference attacks should also return the artificial arguments - containing also the attacks derived from their sub-arguments. Fig. <ref type="figure" target="#fig_7">7</ref> shows the evaluation of Example 3 with the generalised algorithm and the original PAF's preference attack relation. As expected, the result is analogous to Fig. <ref type="figure" target="#fig_0">4</ref>; the only difference is the origin of the preference attack. While in the original model A15 directly attacks arguments A7 and A8, in the generalised mechanism (Fig. <ref type="figure" target="#fig_7">7</ref>) the artificial argument A10 is the source of the attack. From a technical perspective, the improved reasoning mechanism enables different argument orderings while evaluating the argumentation graph. The evaluation under different orderings may no longer adhere to the assumptions in <ref type="bibr" target="#b5">[6]</ref>. However, some application scenarios can tolerate the loosening of such assumptions, therefore exploiting other types of ordering. For instance, let us consider the orderings introduced in <ref type="bibr" target="#b8">[9]</ref>. The authors argue that - abstracting from the specific properties - the choice of an argument ordering is deeply connected to the application scenario. For example, the weakest link principle - according to which all the defeasible steps in an argument's construction should be considered in the ordering - could be more suited to empirical reasoning scenarios.</p><p>Let us consider the application of the weakest-link democratic ordering <ref type="bibr" target="#b8">[9]</ref> to Example 3. 
In a nutshell, in the weakest-link democratic ordering, an argument A is preferred to an argument B if, for every defeasible rule r used to build B, there is a preference of the form r 1 ≻ r, with r 1 a rule of A. Fig. <ref type="figure" target="#fig_8">8</ref> shows the evaluation results. Under this ordering, the preference of d2 over d1 is not enough to conclude p2's innocence. In fact, while this preference suffices to conclude the superiority of argument A2 over A12, the same does not hold for argument A13, since the latter is also based on the defeasible rule d. The example demonstrates the flexibility of the mechanisms presented in this section.</p><p>Example 5. Let us now consider an example exploiting joint preference arguments that contain a combination of preferences (Listing 1.4). The theory contains two unconditional rules, r0 and r2, concluding the two literals a and ¬b. Then a defeasible rule (r1) claims b if a, and two unconditional rules conclude a preference: r3 concludes the priority of r2 over r1, and r4 concludes the priority of r2 over r0. The GPAF evaluation with the weakest-link democratic ordering and grounded semantics is: accepted for ¬b (A1) and rejected for b (A5) (Fig. <ref type="figure">9</ref>). A6 is the joint preference argument that combines the preferences expressed by r4 (A2) and r3 (A3).</p></div>
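As a minimal sketch, the weakest-link democratic comparison described above can be phrased as a simple set check: A is preferred to B iff every defeasible rule used to build B is beaten by some rule of A under the declared rule preferences. Function and variable names here are ours for illustration, not Arg2P's API.

```python
# Sketch of the weakest-link democratic ordering described above.
# The set-based representation is illustrative, not Arg2P's internals.

def weakest_link_democratic(def_rules_a, def_rules_b, rule_prefs):
    """Return True iff argument A is preferred to argument B.

    def_rules_a / def_rules_b: defeasible rules used to build each argument.
    rule_prefs: set of (stronger, weaker) rule-preference pairs.
    """
    # A is preferred to B iff every defeasible rule r of B is beaten
    # by at least one rule r1 of A (i.e. r1 > r is a declared preference).
    return all(
        any((r1, r) in rule_prefs for r1 in def_rules_a)
        for r in def_rules_b
    )

# Example 3 intuition: d2 > d1 makes A2 beat A12 (built from d1 alone),
# but not A13, which also relies on the defeasible rule d.
prefs = {("d2", "d1")}
print(weakest_link_democratic({"d2"}, {"d1"}, prefs))       # True
print(weakest_link_democratic({"d2"}, {"d1", "d"}, prefs))  # False
```

This mirrors why the preference of d2 over d1 decides A2 vs. A12 but not A2 vs. A13 in the example above.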
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusions</head><p>In this paper we show how the conditional preferences mechanism from <ref type="bibr" target="#b5">[6]</ref> can be implemented in Arg2P, then improved in order to cope with arbitrary preference relations over arguments.</p><p>The work can be expanded in various ways, starting from a full theoretical foundation of the model. From a pure computational perspective, the work needs further analysis w.r.t. the themes of (i) computational complexity and (ii) termination-i.e., prove if the algorithms always guarantee a solution or graph configurations leading to undecidable state exist. While have no final result on complexity (i), it should be polynomial w.r.t. the number of input attacks for both the algorithms. Algorithms are based on two main operations: the transformation of the attacks in arguments -linear w.r.t. the number of attacks -and the recognition of the new pref-attacks -requiring in the worst case the examination of all the preference arguments for every attack. So, in the worst case, the complexity should be O(2N * P A), where N is the number of the input attacks and P A is the number of preference arguments. Termination (ii) is deeply connected with formal foundation of the model, hence it will be tackled after its completion. Nonetheless, initial informal analysis suggests a transformed graph should be always be obtainable by the proposed algorithms. However, the soundness of solutions provided by the transformed argumentation graph must be proved, in particular w.r.t. corner configurations. In fact, PAF properties hold only when the defeasible theory respects some constraints <ref type="bibr" target="#b5">[6]</ref>-a.k.a. as preference stratification, or p-stratification. 
Basically, a partial ordering over the rules concluding a preference is required, to avoid cycles in the inference chain, i.e., a preference should not influence the rules it is derived from.</p><p>We also intend to extend this work towards Arg2P's query-based mode <ref type="bibr" target="#b9">[10]</ref>. Intuitively, it would be enough to also evaluate the preference arguments that contribute to the patt set while evaluating an attacking argument. Further and deeper investigation is required in order to prove its feasibility.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Definition 4 (</head><label>4</label><figDesc>Attack). An argument A attacks an argument B at B ′ ∈ Sub(B) iff A undercuts or rebuts B (at B ′ ), where: (i) A undercuts B (at B ′ ) iff Conc(A) = ¬N(r) and B ′ 's top rule r is defeasible; (ii) A rebuts B (at B ′ ) iff Conc(A) = ¬ϕ, Conc(B ′ ) = ϕ and B ′ 's top rule r is defeasible. We say that an argument A directly rebuts (dbut) or directly undercuts (dcut) B if A rebuts or undercuts B at B itself.</figDesc></figure>
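The p-stratification constraint mentioned above amounts to requiring that the influence relation among preference-concluding rules be acyclic, so that no preference (transitively) affects the rules it is derived from. A hedged sketch of such a check follows; the graph encoding is ours, not Arg2P's internal representation.

```python
# Illustrative acyclicity check for the p-stratification constraint:
# a preference-concluding rule must never (transitively) influence itself.

def is_p_stratified(influences):
    """influences: dict mapping each rule to the rules its preference
    conclusion affects. True iff the influence graph is acyclic,
    i.e. a partial ordering over the rules exists."""
    WHITE, GREY, BLACK = 0, 1, 2   # unvisited / on stack / done
    state = {}

    def acyclic_from(rule):
        state[rule] = GREY
        for nxt in influences.get(rule, ()):
            s = state.get(nxt, WHITE)
            if s == GREY:
                return False        # back edge: rule influences itself
            if s == WHITE and not acyclic_from(nxt):
                return False
        state[rule] = BLACK
        return True

    return all(state.get(r, WHITE) != WHITE or acyclic_from(r)
               for r in influences)

# Example 5 shape: r3 and r4 influence r1 and r0 only -> stratified.
print(is_p_stratified({"r3": ["r1"], "r4": ["r0"]}))  # True
# Two preferences influencing each other -> cycle, not stratified.
print(is_p_stratified({"a": ["b"], "b": ["a"]}))      # False
```

A standard depth-first search with a three-colour marking suffices, since only the existence of a cycle matters, not a full ordering.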
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Standard argumentation graph (left) and transformed preference graph attack relation w.r.t. a) (right).</figDesc><graphic coords="5,186.64,115.84,242.08,68.27" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Example 2 transformed preference graph attack relation w.r.t. a).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>Listing 1.1. Transformation algorithm 1 ( PafArguments , PafAttacks ) B u i l d P a f A r g u m e n t a t i o n G r a p h ( Arguments , Attacks ): 2 % Filters attacks that are in no circumstances impacted by preference arguments 3 ( ValidAttacks , Inva lidAtt acks ) = f i l t e r S u p R e l a t e d A t t a c k s ( Attacks ) 4 % Transf ormati on point a ) converts the preference -related attacks into arguments , 5 % also transposing the connected attacks , datt set 6 ( NewArguments , NewAttacks ) = conver tAttac ks ( I nvalid Attack s ) 7 % Transf ormati on point b ) build the patt set 8 PrefAttacks = b u i l d P r e f A t t a c k s ( Arguments , NewArguments ) 9 % Returns the new arguments and attacks sets 10 PafArguments = append ( Arguments , NewArguments ) 11 PafAttacks = append ( ValidAttacks , NewAttacks , PrefAttacks ) 12 return ( PafArguments , PafAttacks ) Listing 1.1 3 shows the pseudo-code of the algorithm used in transforming AF to PAF-i.e., the encoding of the function F .</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 3 .Fig. 4 .</head><label>34</label><figDesc>Fig. 3. Arguments from Listing 1.2 with conditional preference disabled.</figDesc><graphic coords="9,134.77,168.72,345.82,93.67" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 5 .</head><label>5</label><figDesc>Fig. 5. Arguments from Listing 1.3 with conditional preference enabled.</figDesc><graphic coords="10,276.36,120.45,186.75,61.71" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Fig. 6 .</head><label>6</label><figDesc>Fig. 6. PAF (left) and transformed GPAF (right).</figDesc><graphic coords="12,169.35,115.84,276.67,62.46" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Fig. 7 .</head><label>7</label><figDesc>Fig. 7. Evaluation of Example 3 with the extended mechanism and the original PAF's preference attack relation.</figDesc><graphic coords="12,134.77,279.40,345.82,93.58" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Fig. 8 .</head><label>8</label><figDesc>Fig. 8. Example 3 evaluation with the modified algorithm and the weakest democrat argument ordering.</figDesc><graphic coords="13,134.77,174.70,345.83,93.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Listing 1 . 4 .Fig. 9 .</head><label>149</label><figDesc>Fig. 9. Arguments from Listing 1.4 with conditional preferences enabled.</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">Sources and technical details are available on the official project page http://arg2p.apice.unibo.it</note>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>have been supported by the H2020 ERC Project "CompuLaw" (G.A. 833647). A. Omicini has been supported by the H2020 Project "StairwAI" (G.A. 101017142).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Defeasible systems in legal reasoning: A comparative assessment</title>
		<author>
			<persName><forename type="first">R</forename><surname>Calegari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Contissa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lagioia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Omicini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sartor</surname></persName>
		</author>
		<idno type="DOI">10.3233/FAIA190320</idno>
		<ptr target="https://doi.org/10.3233/FAIA190320" />
	</analytic>
	<monogr>
		<title level="m">Legal Knowledge and Information Systems. JURIX 2019: The Thirty-second Annual Conference, Frontiers in Artificial Intelligence and Applications</title>
				<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2019-12-13">11-13 Dec 2019</date>
			<biblScope unit="volume">322</biblScope>
			<biblScope unit="page" from="169" to="174" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Arg-tuProlog: a modular logic argumentation tool for PIL</title>
		<author>
			<persName><forename type="first">R</forename><surname>Calegari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Contissa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Pisano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sartor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sartor</surname></persName>
		</author>
		<idno type="DOI">10.3233/FAIA200880</idno>
		<ptr target="https://doi.org/10.3233/FAIA200880" />
	</analytic>
	<monogr>
		<title level="m">Legal Knowledge and Information Systems. JURIX 2020: The Thirty-third Annual Conference. Frontiers in Artificial Intelligence and Applications</title>
				<imprint>
			<date type="published" when="2020-12">Dec 2020</date>
			<biblScope unit="volume">334</biblScope>
			<biblScope unit="page" from="9" to="11" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Explainable and ethical AI: A perspective on argumentation and logic programming</title>
		<author>
			<persName><forename type="first">R</forename><surname>Calegari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Omicini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sartor</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-73065-9_2</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-73065-9_2" />
	</analytic>
	<monogr>
		<title level="m">AIxIA 2020 -Advances in Artificial Intelligence</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<publisher>Springer Nature</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">12414</biblScope>
			<biblScope unit="page" from="19" to="36" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">On the evaluation of argumentation formalisms</title>
		<author>
			<persName><forename type="first">M</forename><surname>Caminada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Amgoud</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.artint.2007.02.003</idno>
		<ptr target="https://doi.org/10.1016/j.artint.2007.02.003" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">171</biblScope>
			<biblScope unit="issue">5-6</biblScope>
			<biblScope unit="page" from="286" to="310" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Dung</surname></persName>
		</author>
		<idno type="DOI">10.1016/0004-3702</idno>
		<idno>(94)00041-X</idno>
		<ptr target="https://doi.org/10.1016/0004-3702" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">77</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="321" to="358" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">On structured argumentation with conditional preferences</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Dung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Thang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">C</forename><surname>Son</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v33i01.33012792</idno>
		<ptr target="https://doi.org/10.1609/aaai.v33i01.33012792" />
	</analytic>
	<monogr>
		<title level="m">The Thirty-Third AAAI Conference on Artificial Intelligence</title>
				<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2019">2019. 2019</date>
			<biblScope unit="page" from="2792" to="2800" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Reasoning about preferences in argumentation frameworks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Modgil</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.artint.2009.02.001</idno>
		<ptr target="https://doi.org/10.1016/j.artint.2009.02.001" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">173</biblScope>
			<biblScope unit="issue">9-10</biblScope>
			<biblScope unit="page" from="901" to="934" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Reasoning about preferences in structured extended argumentation frameworks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Modgil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Prakken</surname></persName>
		</author>
		<idno type="DOI">10.3233/978-1-60750-619-5-347</idno>
		<ptr target="https://doi.org/10.3233/978-1-60750-619-5-347" />
	</analytic>
	<monogr>
		<title level="m">Computational Models of Argument: Proceedings of COMMA 2010</title>
				<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">216</biblScope>
			<biblScope unit="page" from="347" to="358" />
		</imprint>
	</monogr>
	<note>Frontiers in Artificial Intelligence and Applications</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The ASPIC + framework for structured argumentation: a tutorial</title>
		<author>
			<persName><forename type="first">S</forename><surname>Modgil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Prakken</surname></persName>
		</author>
		<idno type="DOI">10.1080/19462166.2013.869766</idno>
		<ptr target="https://doi.org/10.1080/19462166.2013.869766" />
	</analytic>
	<monogr>
		<title level="j">Argument Computation</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="31" to="62" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Arg-tuProlog: A tuProlog-based argumentation framework</title>
		<author>
			<persName><forename type="first">G</forename><surname>Pisano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Calegari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Omicini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sartor</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-2710/paper4.pdf" />
	</analytic>
	<monogr>
		<title level="m">CILC 2020 -Italian Conference on Computational Logic</title>
		<title level="s">CEUR-WS</title>
		<editor>
			<persName><forename type="first">F</forename><surname>Calimeri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Perri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Zumpano</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2020-10">Oct 2020</date>
			<biblScope unit="volume">2719</biblScope>
			<biblScope unit="page" from="13" to="15" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Argument-based extended logic programming with defeasible priorities</title>
		<author>
			<persName><forename type="first">H</forename><surname>Prakken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sartor</surname></persName>
		</author>
		<idno type="DOI">10.1080/11663081.1997.10510900</idno>
		<ptr target="https://doi.org/10.1080/11663081.1997.10510900" />
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Non-Classical Logics</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="25" to="75" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
