<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1609/AAAI.V32I1.11512</article-id>
      <title-group>
        <article-title>Weighted Assumption Based Argumentation to reason about ethical principles and actions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Paolo Baldi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabio Aurelio D'Asaro</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abeer Dyoub</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca A. Lisi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Centro Interdipartimentale di Logica e Applicazioni (CILA), University of Bari “Aldo Moro”</institution>
          ,
          <addr-line>Via E. Orabona 4, Bari, 70125</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dept. of Human Studies, University of Salento</institution>
          ,
          <addr-line>Lecce</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Dept. of Informatics, University of Bari “Aldo Moro”</institution>
          ,
          <addr-line>Via E. Orabona 4, Bari, 70125</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>1</volume>
      <fpage>25</fpage>
      <lpage>27</lpage>
      <abstract>
        <p>We augment Assumption Based Argumentation (ABA for short) with weighted argumentation. In a nutshell, we assign weights to arguments and then derive the weight of attacks between ABA arguments. We illustrate our proposal through running examples in the field of ethical reasoning, and present an implementation based on Answer Set Programming.</p>
      </abstract>
      <kwd-group>
        <kwd>Formal Argumentation</kwd>
        <kwd>Assumption Based Argumentation</kwd>
        <kwd>Ethical Reasoning</kwd>
        <kwd>Fuzzy Logic</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Formal argumentation frameworks model reasoning with conflicting claims, based on the crucial notion
of attacks among arguments. These attacks are rendered as directed edges connecting nodes in a graph,
while the arguments themselves are represented simply as nodes of the graph, see e.g. the seminal
paper by Dung [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Subsequent approaches, belonging to the field of structured argumentation, see e.g.
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], provide a more fine-grained representation of the argument nodes, equipping them with a logical
structure, and using such structure for deriving from logical principles the occurrence of attacks among
arguments.
      </p>
      <p>Assumption-based argumentation (see, e.g., [2, Chapter 7]) is a prominent approach to structured
argumentation. It represents arguments as logical derivations, built on the basis of two ingredients:
rules, which are considered to be non-defeasible, and assumptions, which are taken instead to be the
defeasible part of the argument, and possibly the target of attacks.</p>
      <p>In this work, we introduce Weighted Assumption Based Argumentation frameworks (wABAs), which
enrich ABA with weighted arguments. These weights on arguments determine in turn weights on
the attacks among arguments, on the model of weighted abstract argumentation, see, e.g., [3, Chapter
6]. This has several modeling advantages. On the one hand, the introduction of weights allows us to
import ideas from fuzzy logic. On the other hand, since weighted arguments translate into weighted
attacks, we may then allow for certain forms of incoherence in the semantics, making use of standard
techniques in weighted abstract argumentation.</p>
      <p>We propose an implementation of wABA using a translation into Answer Set Programming (ASP) by
means of clingo answer set grounder and solver, and demonstrate the formalism and its
implementation on a scenario involving AI ethics.</p>
      <p>Our motivation for the introduction and implementation of wABA originates from the general
aim of addressing computational reasoning with ethical principles in AI, and more specifically in the
domain of symbiotic human-AI interactions. Deontological ethics, with its rule-based approach, seems at
first the most well-suited for computational implementation in our setting. However, eliciting precise rules
from general principles is far from obvious, and ethical rules can easily result in inconsistent outcomes.</p>
      <p>We argue that, due to its peculiar features, wABA offers adequately expressive computational
machinery both for the representation of ethical reasoning and for guidance on conflict resolution.</p>
      <p>Concerning representation, our approach distinguishes, within wABA arguments, among (i) the
general ethical principles, which play the role of the (defeasible) assumptions in the framework, (ii) the
factual elements, which may be used as premises of arguments in addition to the assumptions, and (iii)
the prescriptions of courses of actions, which appear as conclusions of arguments.</p>
      <p>
        Arguments in wABA are thus well-suited to render prima-facie duties [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], i.e., duties based on ethical
principles, that may lead to conflicting prescriptions and may be overridden on the basis of the application
context.
      </p>
      <p>
        Conflicts between ethically motivated prescriptions are naturally derived in wABA as attacks among
the arguments. Following the extension-based semantics of formal argumentation, wABA then allows
one to represent various possible solutions to the ethical conflicts, i.e., different horns of ethical dilemmas,
as different extensions. In the spirit of symbiotic AI [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we take it to be a crucial feature of our approach
that the AI agent does not substitute itself for the human agent, but disentangles, rather than solves, the
ethical conflicts.
      </p>
      <p>
        At the same time, wABA also offers guidance for the resolution of conflicts, due to the use of weights.
Our design choice here is to avoid any a priori weighting of ethical principles, allowing instead the
assignment of weights only to formulas standing for factual aspects, i.e., for the assessment of the context
of application. In this respect, we extend and incorporate the fuzzy rule-based system developed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
within our framework. Only secondarily, on the basis of suitable computations, weights are carried over
to arguments, then to attacks, and finally to the extensions of the argumentation framework. The user
is thus ultimately confronted with weighted extensions, i.e., weighted solutions to ethical dilemmas,
and can thus evaluate how strong a violation of certain ethical assumptions she is willing to accept.
      </p>
      <p>The rest of the paper is structured as follows. In Section 2 we provide some background on abstract
and assumption-based argumentation. In Section 3 we introduce wABA and in Section 4 we discuss
our ASP implementation in clingo. Section 5 discusses the application of the framework to ethical
reasoning and Section 6 the related work on computational approaches to ethical reasoning. Section
7 concludes the paper by wrapping up our contribution while hinting at future developments of the
present work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>In this section, we briefly recall the fundamental concepts of abstract argumentation frameworks (AAFs),
assumption-based argumentation (ABA), and weighted abstract argumentation frameworks (wAAFs).
These frameworks provide the machinery upon which our proposed approach is constructed.</p>
      <sec id="sec-2-1">
        <title>2.1. Abstract Argumentation Frameworks</title>
        <p>
          We begin with Dung’s abstract argumentation frameworks [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], which provide an abstract
characterization of argumentation.
        </p>
        <p>Definition 2.1 (Abstract Argumentation Framework). An abstract argumentation framework (AAF) is a
pair (Arg, Att) where:
• Arg is a finite set of arguments;
• Att ⊆ Arg × Arg is a binary relation representing attacks between arguments.</p>
        <p>Given an AAF (Arg, Att), for any a, b ∈ Arg, the notation (a, b) ∈ Att indicates that a attacks b. We
say that a set of arguments S ⊆ Arg defends an argument a ∈ Arg if for each argument b ∈ Arg that
attacks a, there exists an argument c ∈ S such that c attacks b. A subset S ⊆ Arg is:</p>
        <sec id="sec-2-1-8">
          <title>Extension-Based Semantics</title>
          <p>• conflict-free if there are no a, b ∈ S such that (a, b) ∈ Att;
• admissible if it is conflict-free and defends all its elements;
• preferred if it is a maximal (w.r.t. set inclusion) admissible set;
• grounded if it is the least (w.r.t. set inclusion) admissible set;
• stable if it is conflict-free and attacks every argument not in the set.</p>
          <p>In addition to the semantics based on admissibility, alternative semantics have been introduced to
capture different intuitions, particularly when admissibility leads to overly restrictive or unintuitive
outcomes. These include the following:
• Naive extensions are the maximal (w.r.t. set inclusion) conflict-free subsets of Arg. Unlike
admissible extensions, naive extensions do not require defense against attacks, focusing instead
on maximizing conflict-freeness alone.
• Semi-stable extensions are admissible sets whose range—i.e., the union of the set and the arguments
it attacks—is maximal (w.r.t. set inclusion) among admissible sets. These extensions aim to
approximate stable extensions when the latter do not exist.
• Stage extensions are conflict-free sets whose range is maximal among all conflict-free sets. Like
semi-stable semantics, they emphasize coverage (via attack) of the argument space, but do not
require admissibility.</p>
          <p>These non-admissibility-based semantics are particularly relevant in weighted or resource-bounded
settings, where defense may be impractical or where broader coverage is desirable despite some lack of
coherence. In such cases, naive and stage extensions may yield more informative or robust outcomes
than traditional admissible-based semantics.</p>
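          <p>The extension semantics above can be checked by brute force on small frameworks. The following Python sketch (ours, for illustration only; the paper's implementation uses clingo) enumerates stable, admissible, and naive extensions of a toy AAF:</p>

```python
from itertools import combinations

def extensions(args, att, semantics="stable"):
    """Brute-force enumeration of extensions of a small AAF (illustration only)."""
    def conflict_free(s):
        return not any((a, b) in att for a in s for b in s)
    def attacks(s, x):
        # the set s attacks argument x
        return any((a, x) in att for a in s)
    def defends(s, x):
        # s defends x if s attacks every attacker of x
        return all(attacks(s, b) for (b, y) in att if y == x)
    subsets = [frozenset(c) for r in range(len(args) + 1)
               for c in combinations(sorted(args), r)]
    cf = [s for s in subsets if conflict_free(s)]
    if semantics == "stable":
        return [s for s in cf if all(attacks(s, x) for x in args - s)]
    if semantics == "admissible":
        return [s for s in cf if all(defends(s, x) for x in s)]
    if semantics == "naive":  # maximal conflict-free sets
        return [s for s in cf
                if not any(s != t and s.issubset(t) for t in cf)]
    raise ValueError(semantics)

# Toy AAF: a and b attack each other, b attacks c
args = {"a", "b", "c"}
att = {("a", "b"), ("b", "a"), ("b", "c")}
print(extensions(args, att, "stable"))  # the two stable extensions: {a, c} and {b}
```

          <p>Semi-stable and stage semantics could be added analogously by comparing ranges (a set together with the arguments it attacks).</p>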
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Assumption-Based Argumentation</title>
        <p>Assumption-Based Argumentation (ABA; see, e.g., [7] for a primer) provides a structured argumentation
framework grounded in a deductive system, based on rules and defeasible assumptions.</p>
        <p>Definition 2.2 (ABA Framework). An assumption-based argumentation (ABA) framework is a tuple
(ℒ, ℛ, 𝒜, ¯) where:
• ℒ is a formal language;
• ℛ is a set of inference rules of the form φ ← φ1, . . . , φn with φ, φi ∈ ℒ;
• 𝒜 ⊆ ℒ is a non-empty set of assumptions;
• ¯ : 𝒜 → ℒ is a total mapping that assigns to each assumption its contrary.</p>
        <p>An ABA is said to be flat if and only if assumptions only appear in the body of rules.</p>
        <p>Henceforth, for any rule r ∈ ℛ of the form φ ← φ1, . . . , φn we let body(r) = {φ1, . . . , φn} and
head(r) = φ. Arguments in ABA are deductions, denoted by Φ ⊢ φ, where Φ is a set of assumptions
and φ is any formula in ℒ, obtained from Φ by applying one or more rules. Given an ABA argument A
(i.e. a deduction), we denote by rules(A) the set of rules supporting it. Attacks are defined via contraries.
Specifically, we will have that an argument A attacks an argument B, if and only if A is a deduction of the
form Φ′ ⊢ ā, and B is a deduction of the form Φ, a ⊢ φ, where a ∈ 𝒜, φ ∈ ℒ, ā is the contrary of a, and
∅ ⊆ Φ, Φ′ ⊆ 𝒜. Any ABA framework thus naturally defines a corresponding abstract argumentation
framework associated with it, from which extensions can be extracted.</p>
        <p>Example 2.3. As a refresher, consider the flat ABA framework consisting of atoms a, b, ca, and cb,
where a and b are assumptions, contraries are given by ā = ca and b̄ = cb, and rules are ca ← a, ca ← b,
cb ← a. Recall that in ABA an argument is a deduction. Examples of arguments in this ABA are a ⊢ a,
a ⊢ ca, and b ⊢ ca. Attacks are then derived by considering contraries: for example, b ⊢ ca attacks
a ⊢ ca as the derived atom ca is the contrary of the assumption a. The full framework is shown in
Figure 2, which was produced with the PyArg library [8]. This graph can be treated as a standard AAF.
For instance, the only stable extension of this framework is {b ⊢ b, b ⊢ ca}.</p>
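        <p>The construction of ABA arguments and attacks can be sketched programmatically. The following Python fragment (ours, not the paper's implementation) instantiates a small flat ABA mirroring Example 2.3; the atom and rule names are our rendering:</p>

```python
from itertools import combinations

# A small flat ABA mirroring Example 2.3 (atom and rule names are our rendering):
assumptions = {"a", "b"}
contrary = {"a": "ca", "b": "cb"}
rules = [("ca", ["a"]), ("ca", ["b"]), ("cb", ["a"])]  # (head, body) pairs

def deductions():
    """All deductions Phi |- phi with minimal support (brute force)."""
    found = set()
    for r in range(len(assumptions) + 1):
        for phi in combinations(sorted(assumptions), r):
            derived, changed = set(phi), True
            while changed:  # forward-chain the rules from the chosen assumptions
                changed = False
                for head, body in rules:
                    if head not in derived and set(body).issubset(derived):
                        derived.add(head)
                        changed = True
            found |= {(frozenset(phi), c) for c in derived}
    # keep only supports with no strictly smaller support for the same conclusion
    return {(s, c) for (s, c) in found
            if not any(t != s and t.issubset(s) for (t, d) in found if d == c)}

def attack_pairs(args):
    """A attacks B iff A's conclusion is the contrary of an assumption in B's support."""
    return {(A, B) for A in args for B in args
            if any(contrary[x] == A[1] for x in B[0])}

args = deductions()
atts = attack_pairs(args)
print(len(args), len(atts))
```

        <p>On this instance the code yields five arguments (a ⊢ a, b ⊢ b, a ⊢ ca, b ⊢ ca, a ⊢ cb); treating the result as an AAF and applying, e.g., stable semantics then reproduces the extension-based analysis.</p>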
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Weighted Abstract Argumentation</title>
        <p>Weighted extensions of abstract argumentation frameworks [9] assign numerical weights to arguments
or attacks, enabling reasoning with preferences, strengths, or costs.</p>
        <p>Definition 2.4 (Weighted AAF). A weighted abstract argumentation framework (wAAF) is a triple
(Arg, Att, w) where:
• (Arg, Att) is an abstract argumentation framework;
• w : Att → ℝ+ is a function assigning a positive real-valued weight to each attack.</p>
        <p>An inconsistency budget β is typically used to specify a degree of inconsistency one is willing to
tolerate in a given scenario. In other words, attacks that weigh up to β may be discarded from the
framework. Semantics are then defined with respect to the inconsistency budget β and a standard
semantics σ of AAF, see, e.g., [10]. One usually refers to these semantics as β-σ extensions, e.g., 3-stable,
2-admissible, 5-grounded, etc. Note that whenever β = 0 we obtain the standard argumentation
semantics σ, e.g., 0-stable is the standard AAF stable semantics, 0-grounded is the standard AAF
grounded semantics, etc.</p>
        <p>Example 2.5. Consider the weighted abstract argumentation framework (Arg, Att, w) where
Arg = {a, b, c}, Att = {(a, b), (b, c), (c, a)}, w((a, b)) = 2, w((b, c)) = 1, w((c, a)) = 4.</p>
        <p>This framework forms a directed cycle where each argument attacks one other. Under standard stable
semantics (i.e., with β = 0 and σ = stable), there exists no extension, as each argument is attacked by
another, and no conflict-free set can defend against all incoming attacks.</p>
        <p>Now consider the framework under a budgeted stable semantics with inconsistency budget β = 3.
In this case, we may discard attacks whose cumulative weight does not exceed 3. For instance, we
may choose to discard the attack (c, a), which has weight 4, but this alone would exceed the budget.
However, if we discard (a, b) and (b, c), whose combined weight is 3, we remain within the budget.</p>
        <p>After discarding (a, b) and (b, c), the only remaining attack is (c, a). The set {b, c} is now conflict-free
with respect to the reduced attack relation and attacks a, making it a valid 3-stable extension.</p>
        <p>This example illustrates how the introduction of a budget β permits certain extensions that are
disallowed under classical semantics, thereby enabling reasoning in the presence of bounded inconsistency
or uncertainty.</p>
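        <p>Budgeted semantics can likewise be checked by brute force on small instances. The sketch below (ours; it uses additive aggregation of discarded weights, as in Example 2.5) enumerates the β-stable extensions of the weighted cycle:</p>

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def beta_stable(args, att, w, beta):
    """beta-stable: discard attacks whose total weight stays within beta, then test stability."""
    results = set()
    for dropped in powerset(att):
        total = sum(w[e] for e in dropped)
        if max(total, beta) != beta:  # i.e., total exceeds the budget
            continue
        rest = att - set(dropped)
        for r in range(len(args) + 1):
            for cand in combinations(sorted(args), r):
                s = set(cand)
                cf = not any((a, b) in rest for a in s for b in s)
                if cf and all(any((a, x) in rest for a in s) for x in args - s):
                    results.add(frozenset(s))
    return results

# Example 2.5: a three-cycle with weighted attacks
args = {"a", "b", "c"}
att = {("a", "b"), ("b", "c"), ("c", "a")}
w = {("a", "b"): 2, ("b", "c"): 1, ("c", "a"): 4}
print(beta_stable(args, att, w, 0))  # empty: the cycle has no 0-stable extension
print(beta_stable(args, att, w, 3))  # contains {b, c}, as in the text
```

        <p>Note that with β = 3 the budget also licenses discarding (a, b) alone, so {a, b} is a further 3-stable extension besides {b, c}.</p>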
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Weighted Assumption Based Argumentation</title>
      <p>In this Section we develop our proposed framework for Weighted Assumption Based Argumentation
(wABA). The basic idea is to extend ABA by assigning a cost to certain atoms, and use these costs to
compute the weight of attacks.</p>
      <p>Just like in plain ABA, arguments are defined as deductions. Attacks are still defined in terms of
contraries as in ABA, i.e. any argument of the form Φ, a ⊢ φ may only be attacked by arguments of the
form Φ′ ⊢ ā.</p>
      <p>Definition 3.1 (Weighted Assumption-Based Argumentation). A wABA framework is a tuple
(ℒ, ℛ, 𝒜, ¯, w, S)
where (ℒ, ℛ, 𝒜, ¯) is an ABA framework, D ⊆ ℝ0+ ∪ {∞} (where ℝ0+ is the set of nonnegative real
numbers) and S = (D, ⊕, ⊗, e⊕, e⊗) is a semiring1. The function w : ℒ → D is a weight such that
w(a) = e⊗ for each a ∈ 𝒜.</p>
      <p>The weights are then extended to attacks (A, B) among arguments A and B by letting2:
w((A, B)) = ⨂_{r ∈ rules(A)} ⨂_{φ ∈ body(r)} w(φ).</p>
      <p>As in weighted abstract argumentation, one can then choose to discard attacks that do not exceed an
inconsistency budget β ∈ D, using the operation ⊕ of the semiring, as follows.
1This means that (D, ⊕, e⊕) is a commutative monoid, (D, ⊗, e⊗) is a monoid, distributivity holds w.r.t. both operators, and
any element is annihilated by e⊕.
2In other words, the weight of the attack is the weight of the attacking derivation, which is in turn computed by suitably
aggregating the weights of the atoms occurring in the body of the rules supporting the derivation.</p>
      <p>Definition 3.2 (β-σ extensions). Let (ℒ, ℛ, 𝒜, ¯, w, S) be a wABA framework over the semiring
S = (D, ⊕, ⊗, e⊕, e⊗), σ be a set of extensions over the abstract argumentation framework (Arg, Att)
associated with (ℒ, ℛ, 𝒜, ¯), and β ∈ D. We say that a subset Arg′ ⊆ Arg is a β-σ extension for
(ℒ, ℛ, 𝒜, ¯, w, S) if there is a subset Att′ ⊆ Att such that
⨁_{(A, B) ∈ Att′} w((A, B)) ≤ β
and Arg′ is a σ extension for (Arg, Att ∖ Att′).</p>
      <p>We say that a wABA is flat if (ℒ, ℛ, 𝒜, ¯) is a flat ABA. In the following, we assume that all wABAs
are flat, unless stated otherwise.</p>
      <p>We illustrate the notions we have introduced with the following example.</p>
      <p>Example 3.3. Consider the wABA consisting of atoms {a, b, c, d, ca, cb}, where a and b are assumptions,
c and d are contextual information, and contraries are defined via ā = ca and b̄ = cb. Weights of
contextual atoms are defined as w(c) = 3 and w(d) = 5. Rules are:
ca ← b, c
cb ← a, d
d ← c
c ← ⊤</p>
      <p>The resulting framework may be depicted as in Figure 3. Note that if one has an appropriate inconsistency
budget then some pairs of attacks may be removed from the framework, e.g., the mutual attacks between
b ⊢ ca and a ⊢ cb, for rules(b ⊢ ca) = {c ← ⊤, ca ← b, c} and rules(a ⊢ cb) = {c ← ⊤, d ← c,
cb ← a, d}. The resulting extensions can be calculated according to the standard AAF definition.</p>
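      <p>The weight computation of Definition 3.1 can be made concrete with a short Python sketch (ours, not the paper's implementation). It uses the min–max semiring adopted later in Section 4.1, so ⊗ = min and assumptions carry the unit weight e⊗ = ∞; atom and rule names follow our rendering of Example 3.3:</p>

```python
# Weight of an attack = weight of the attacking deduction, aggregated with
# the semiring multiplication; here the min-max semiring, so aggregation is
# min and assumptions carry the unit weight (infinity).
INF = float("inf")

# contextual atoms c, d are weighted; assumptions a, b get the unit weight
weight = {"c": 3, "d": 5, "a": INF, "b": INF}

def deduction_weight(supporting_rules):
    """min over all atoms occurring in the bodies of the supporting rules."""
    atoms = [x for _, body in supporting_rules for x in body]
    return min((weight[x] for x in atoms), default=INF)

# rules(b |- ca) = {c ← ⊤, ca ← b, c}   (our rendering of Example 3.3)
rules_b_ca = [("c", []), ("ca", ["b", "c"])]
# rules(a |- cb) = {c ← ⊤, d ← c, cb ← a, d}
rules_a_cb = [("c", []), ("d", ["c"]), ("cb", ["a", "d"])]

print(deduction_weight(rules_b_ca), deduction_weight(rules_a_cb))  # both weigh 3
```

      <p>Both mutual attacks thus weigh 3, so an inconsistency budget of at least 3 allows one of them to be discarded.</p>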
    </sec>
    <sec id="sec-4">
      <title>4. Implementation</title>
      <p>Our implementation of (Weighted) Assumption-Based Argumentation is based on Answer Set
Programming (ASP) using clingo [11]. The code is freely available at https://github.com/dasaro/ABA-variants/
tree/main/WABA.</p>
      <sec id="sec-4-1">
        <title>4.1. Theoretical assumptions</title>
        <p>The current encoding supports conflict-free, naive, and stable semantics.</p>
        <p>Our implementation adopts the min–max semiring
S = (ℕ ∪ {∞}, max, min, 0, ∞)
as our default structure for wABA. This choice is driven primarily by the inherently fuzzy nature of the
inputs, and by the clingo implementation, which provides native support for natural numbers but not for
real values. We start by detailing the translation procedure which translates a wABA framework into
an answer set program, and which is largely based on ASPforABA [12].</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Translation</title>
        <p>Let (ℒ, ℛ, , , , ) be a flat wABA. For compactness and readability, in the following we use the
standard clingo abbreviation “;” which unpacks an expression, e.g., of the form r(a1,b1; . . . ;
an,bn) into a set of clauses r(a1,b1), . . . , r(an,bn).</p>
        <p>The set of assumptions 𝒜 = {a1, . . . , an} gets translated to:
assumption(a1; ...; an).</p>
        <p>Each assumption is assigned its contrary via the ¯ operator, e.g., ā1 = c1, . . . , ān = cn. This gets
translated to:
contrary(a1,c1; ...; an,cn).</p>
        <p>Each rule of the form h ← b1, . . . , bm, where h ∈ ℒ ∖ 𝒜 is the head, and b1, . . . , bm ∈ ℒ are the body
of the rule, is translated to:
head(id,h). body(id,b1; ...; id,bm).
where id is a unique identifier assigned to the rule, so that different rules get assigned different
identifiers.</p>
        <p>Finally, weights for atoms in ℒ, e.g., w(d1) = w1, . . . , w(dl) = wl, translate to:
weight(d1,w1; ...; dl,wl).</p>
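        <p>The translation above is mechanical and can be scripted. The following Python sketch (ours; the input dictionary format and the rule identifiers r1, r2, ... are our conventions, not part of the paper's tooling) emits the corresponding clingo facts:</p>

```python
def translate(assumptions, contrary, rules, weights):
    """Emit the clingo facts of Section 4.2 from a small wABA specification."""
    lines = []
    lines.append("assumption({}).".format(";".join(sorted(assumptions))))
    lines.append("contrary({}).".format(
        ";".join("{},{}".format(a, c) for a, c in sorted(contrary.items()))))
    for rid, (head, body) in enumerate(rules, start=1):
        lines.append("head(r{},{}).".format(rid, head))
        if body:  # facts (empty bodies) produce no body/2 atoms
            lines.append("body({}).".format(
                ";".join("r{},{}".format(rid, b) for b in body)))
    if weights:
        lines.append("weight({}).".format(
            ";".join("{},{}".format(x, w) for x, w in sorted(weights.items()))))
    return "\n".join(lines)

print(translate({"a", "b"}, {"a": "ca", "b": "cb"},
                [("ca", ["b", "c"]), ("c", [])], {"c": 3}))
```

        <p>For the shown input this produces the facts assumption(a;b). contrary(a,ca;b,cb). head(r1,ca). body(r1,b;r1,c). head(r2,c). weight(c,3)., which can be passed to clingo together with the semantics file.</p>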
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Semantics</title>
        <p>The semantics file core.lp is fixed for all semantics and is described in what follows. We start by
declaring a global inconsistency budget β:
budget(beta).
This can be set from the command line by using clingo's --const beta=N built-in flag. Note that if
one wants to enumerate all the extensions, regardless of their budget, --const beta=#sup may be
used.</p>
        <p>Each assumption can be either in or out of a candidate extension:
in(X) :- assumption(X), not out(X).
out(X) :- assumption(X), not in(X).</p>
        <p>Support propagates from selected assumptions through the rules in a bottom-up fashion:
supported(X) :- assumption(X), in(X).
supported(X) :- head(R,X), triggered_by_in(R).
triggered_by_in(R) :- head(R,_), supported(X) : body(R,X).</p>
        <p>We then assign supported atoms a weight according to the rules they were produced from, using the
⊗ = min operator (and note that assumptions are assigned weight e⊗ = ∞, which in clingo is rendered
as the constant #sup):
supported_with_weight(X,#sup) :- assumption(X), in(X).
supported_with_weight(X,W) :- supported(X), weight(X,W).
supported_with_weight(X,W) :- supported(X), head(R,X),
W = #min{ V, B : body(R,B), supported_with_weight(B,V) }.</p>
        <p>Attacks arise from contraries:
attacks_with_weight(X,Y,W) :- supported(X), supported_with_weight(X,W),
assumption(Y), contrary(Y,X).</p>
        <p>As usual in wAAFs, we may choose to discard any subset of those attacks, paying their full weight;
the total discarded weight must not exceed the budget w.r.t. semiring operation ⊕ = max:
{ discarded_attack(X,Y,W) : attacks_with_weight(X,Y,W) }.
extension_cost(C) :- C = #max{ W, X, Y : discarded_attack(X,Y,W) }.
:- extension_cost(C), C &gt; B, budget(B).</p>
        <p>Since we do not want discarded attacks to be effective, we also introduce the notion of a successful
attack:
attacks_successfully_with_weight(X,Y,W) :- attacks_with_weight(X,Y,W), not discarded_attack(X,Y,W).</p>
        <p>Once we have figured out how arguments are built from the assumptions, including their weights,
contraries and discarded attacks according to the budget, we are left with the task of selecting an
appropriate semantics to choose valid extensions from.</p>
        <p>First, we introduce useful shorthands:
defeated(X) :- attacks_successfully_with_weight(_,X,_).
not_defended(X) :- attacks_successfully_with_weight(Y,X,_), not defeated(Y).
which state that an atom is defeated if it receives a successful attack, and that it is not defended if it
has an undefeated attacker. These shorthands will help simplify the form of semantics below.</p>
        <p>Then, we define four popular semantics:
Conflict-Free Conflict freeness ensures that, once discarded attacks are removed in the way outlined
above, two assumptions in the same extension do not attack each other:
:- in(X), defeated(X).</p>
        <p>Admissible semantics Admissibility is a widely adopted property of argumentation frameworks,
and many other semantics (such as the stable semantics defined below) produce subsets of admissible
sets. It is implemented by making sure the extension is conflict-free and all its elements are defended:
:- in(X), defeated(X). % conflict-freeness
:- in(X), not_defended(X).</p>
        <p>Stable Semantics In the stable semantics, every (conflict-free) assumption outside the extension
must be defeated by a non-discarded attack:
:- in(X), defeated(X). % conflict-freeness
:- out(X), not defeated(X).</p>
        <p>Naive Semantics The naive semantics is a maximal conflict-free set w.r.t. set inclusion, which is
not necessarily admissible:
:- in(X), defeated(X). % conflict freeness
#heuristic in(X) : assumption(X). [1,true]
which ensures the maximal set of assumptions is “in” regardless of the extension being admissible or
not.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. A Weighted ABA Approach to Ethical Reasoning</title>
      <p>
        In this section, we apply the proposed clingo implementation of wABA to ethical reasoning involving
conflicting principles and contextual medical information. We begin by revisiting the approach proposed
in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which we expand into a more comprehensive (W)ABA-based framework for ethical decision-making. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] focuses on the following motivating scenario:
Example 5.1 (Patient Dilemma). A care robot approaches a patient to administer medication, which
would very likely address some health issues. The patient, however, refuses to take it. Should the robot
attempt to persuade the patient, or should it respect the patient’s decision?
      </p>
      <p>The dilemma involves a potential conflict between autonomy, which prioritizes the patient’s decision,
and beneficence, which emphasizes promoting the patient’s well-being. Depending on the clinical and
institutional context, other principles such as non-maleficence (e.g., medication may have harmful side
effects) and justice (e.g., resource constraints across patients) may also be relevant.</p>
      <p>
        Figure 4 shows the architecture of the fuzzy logic based system for ethical risk assessment (ERA)
proposed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Dyoub &amp; Lisi used the ERA system to assess the possible ethical risk of causing physical
harm to the patient in the above-mentioned Patient Dilemma. For evaluating the physical harm risk
in this case, we can consider different parameters as inputs to the ERA system, such as the severity of
the health condition of the patient, the mental/psychological condition of the patient, physiological
indicators of well-being, etc. These inputs are rated on some scale (e.g., between 0 and 10). The crisp
values are then fuzzified into fuzzy sets with linguistic variables. For example, the fuzzy set label for
severity could be one of {very_low, low, medium, high, very_high}. Then, the fuzzy inference engine
uses the fuzzy rules from the fuzzy rule base to calculate the risk level. For instance, a patient in a
severe health condition (90%) with reduced mental capacity (80%) is evaluated as very_high risk (95%).
These values may be readily piped into our wABA framework.
      </p>
      <sec id="sec-5-1">
        <title>5.1. Plain ABA approach</title>
        <p>We first show how we can embed linguistic labels from the fuzzy system to reason about which ethical
principles are satisfied (resp. violated).</p>
        <p>Example 5.2 (Patient Dilemma in ABA). Let us assume that, on a scale from 0 to 10, a patient in a
highly severe physical condition (9/10), with high mental risk (8/10) and refusing to take her medications,
is evaluated as a high_risk individual by the underlying fuzzy system, meaning that not taking the
medications could lead to developing severe physical consequences (including death).</p>
        <p>We may formalize principles involved in this scenario through assumptions respect_autonomy and
act_beneficently. Other atoms in the language are risk(high), reflecting the fuzzy evaluation of risk,
action(give_meds) as the action of giving medicines, action(dont_give_meds) as the action of not giving
medicines to the patient, opinion(refuses_meds) as the contextual information that the patient refuses
the medications, ca as the contrary of respect_autonomy, and cb as the contrary of act_beneficently.
Experts define rules as follows:
action(dont_give_meds) ← opinion(refuses_meds), respect_autonomy
action(give_meds) ← risk(high), act_beneficently
ca ← action(give_meds), opinion(refuses_meds)
cb ← action(dont_give_meds), risk(high)
opinion(refuses_meds) ← ⊤
risk(high) ← ⊤</p>
        <p>whose interpretation is intuitive. We can calculate the stable extensions in our framework, namely
{act_beneficently}, from which the action give_meds follows, and {respect_autonomy}, which supports
not giving meds. Note that, in an alternative situation where the patient does not refuse to take the
meds, both ethical principles may be satisfied: indeed, in this scenario we get the unique extension
{act_beneficently, respect_autonomy}, from which one can derive that giving medications satisfies both
beneficence and autonomy, since the patient does not refuse to take the medications in the first place.</p>
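        <p>The two stable extensions of this example can be double-checked by brute force over assumption sets, mirroring the assumption-level encoding of Section 4 (the Python rendering of the rules is ours):</p>

```python
# Brute-force check of the stable extensions of Example 5.2 over assumption
# sets, in the style of the in/out encoding of Section 4 (our sketch).
from itertools import combinations

assumptions = {"respect_autonomy", "act_beneficently"}
contrary = {"respect_autonomy": "ca", "act_beneficently": "cb"}
rules = [  # (head, body); facts have empty bodies
    ("action(dont_give_meds)", ["opinion(refuses_meds)", "respect_autonomy"]),
    ("action(give_meds)", ["risk(high)", "act_beneficently"]),
    ("ca", ["action(give_meds)", "opinion(refuses_meds)"]),
    ("cb", ["action(dont_give_meds)", "risk(high)"]),
    ("opinion(refuses_meds)", []),
    ("risk(high)", []),
]

def closure(s):
    """Forward-chain the rules from the chosen assumption set."""
    derived, changed = set(s), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def stable_sets():
    out = []
    for r in range(len(assumptions) + 1):
        for cand in combinations(sorted(assumptions), r):
            s = set(cand)
            d = closure(s)
            conflict_free = all(contrary[x] not in d for x in s)
            covers_out = all(contrary[x] in d for x in assumptions - s)
            if conflict_free and covers_out:
                out.append(s)
    return out

print(stable_sets())  # the two stable extensions of Example 5.2
```

        <p>Removing the fact opinion(refuses_meds) ← ⊤ from the rule list yields the single extension containing both principles, matching the alternative situation discussed above.</p>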
        <p>This simple example can be further extended by considering more nuanced interactions between
ethical principles and their contraries. A fuller example, involving the other two standard principles of
bioethics (namely, non-maleficence and justice), is available on our GitHub repository, and the interested
reader can try it out.</p>
        <p>We now turn to discussing how such reasoning about ethical principles may be enriched with weights
that allow for inconsistencies.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. wABA approach</title>
        <p>In addition to assumptions and contraries, wABA introduces weighted information and an inconsistency
budget β. In this way, we can model more nuanced aspects of ethical decision-making.
Example 5.3 (Patient Dilemma in wABA). In order to show the characteristics of wABA, we further
elaborate on Example 5.2. Recall that the patient is in a severe physical condition (0.9) with reduced
mental capacity (0.8). In this case, we let the fuzzy component of our system defuzzify its output,
which provides us with a numeric value for the risk of physical harm. Let us assume the output
defuzzified value for the risk is 0.7. Furthermore, let us assume that the patient is very reluctant to
take medications, to which we assign (or the fuzzy system assigns) a weight of 0.9. This amounts to
saying that the contextual information is described by:</p>
        <p>w(risk) = 0.7, w(opinion(refuses_meds)) = 0.9.
It is worth noting here that we have dropped the linguistic variables appearing in the first example (i.e.,
risk(high)) in favor of explicit weights. We modify the rules to reflect this:
action(dont_give_meds) ← opinion(refuses_meds), respect_autonomy
action(give_meds) ← risk, act_beneficently
ca ← action(give_meds), opinion(refuses_meds)
cb ← action(dont_give_meds), risk
opinion(refuses_meds) ← ⊤
risk ← ⊤
where the weights of risk and refuses_meds automatically enter the computation by means of the wABA
semantics.</p>
        <p>In the implementation, we use the appropriate predicate weight/2 as follows:
head(ctx1, risk). weight(risk, 7).
head(ctx2, refuses_meds). weight(refuses_meds, 9).</p>
<p>Assumptions, rules and contraries are implemented in exactly the same way as in plain ABA (see
Example 5.2). Note that if we set the inconsistency budget to 0, we reconstruct exactly the same extensions as in Example 5.2,
i.e., the (postprocessed for readability) output looks as follows:
Answer: 1
in(respect_autonomy) supported_with_weight(action(dont_give_meds),9) supported_with_weight(cb,7)
extension_cost(0)
Answer: 2
in(act_beneficently) supported_with_weight(action(give_meds),7) supported_with_weight(ca,7)
extension_cost(0)
However, if we list all extensions, we get another inconsistent stable extension resulting from the
removal of the ethical inconsistency between respect_autonomy and act_beneficently:
Answer: 3
in(respect_autonomy) in(act_beneficently) supported_with_weight(action(dont_give_meds),9)
supported_with_weight(action(give_meds),7) supported_with_weight(ca,7) supported_with_weight(
cb,7) extension_cost(7)
This extension comes at an inconsistency cost of 7. However, we now have the additional information
that action(give_meds) weighs 7 in all extensions, while action(dont_give_meds) weighs 9.
Therefore, not giving meds is slightly recommended in this scenario, thus respecting autonomy. This
suggestion could of course be overridden by a human operator who wishes to prioritize beneficence
over autonomy.</p>
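<p>The effect of the inconsistency budget in this example can also be sketched with a brute-force enumerator. The sketch below is our own illustrative reconstruction, not the ASP implementation: the rule derivations are flattened into a single weighted conflict of weight 7 between the two principles (matching the reported extension_cost(7)), and a set counts as stable when its internal conflicts fit within the budget and it attacks every outside assumption.</p>

```python
from itertools import combinations

# Illustrative flattening of Example 5.3 (not the authors' encoding): the
# mutual inconsistency between the two principles carries weight 7.
assumptions = {"respect_autonomy", "act_beneficently"}
conflicts = {frozenset({"respect_autonomy", "act_beneficently"}): 7}

def cost(s):
    """Total weight of the conflicts falling entirely inside the set s."""
    return sum(w for pair, w in conflicts.items() if pair.issubset(s))

def is_stable(s, budget):
    """s is stable (up to the budget) if its internal conflicts fit the
    budget and every assumption outside s conflicts with some member of s."""
    if cost(s) > budget:
        return False
    return all(any(frozenset({a, b}) in conflicts for b in s)
               for a in assumptions - s)

def stable_extensions(budget):
    """Enumerate all subsets of the assumptions, keeping the stable ones."""
    subsets = [set(c) for r in range(len(assumptions) + 1)
               for c in combinations(sorted(assumptions), r)]
    return [(s, cost(s)) for s in subsets if is_stable(s, budget)]
```

<p>With budget 0 this yields the two consistent extensions of Answers 1 and 2; raising the budget to 7 additionally admits the joint extension of Answer 3, at cost 7.</p>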
<p>We have illustrated how wABA may resolve conflicts between ethical principles by deriving attacks
from contraries and evaluating which sets of assumptions can be tolerated together under the given
inconsistency budget. The weighted implementation allows discarding some attacks (e.g., between
autonomy and beneficence) if the total weight of the discarded attacks remains within the user-specified
limit.</p>
<p>Note that this is particularly useful here, as it also allows for reasoning about ethical
principles in highly inconsistent scenarios, where plain ABA would not produce extensions, as is
often the case in realistic settings where ethical principles lead to different courses of action. In
such cases wABA can help a human operator get a fuller picture, e.g., verify which principles
are satisfied in most extensions, normalizing their weight according to inconsistency levels, as well as
see which course of action is recommended and what it implies along the ethical dimension.</p>
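<p>As a sketch of the kind of summary a human operator might get, one could score each principle by the extensions accepting it, discounted by inconsistency cost. The discounting scheme below (linear in the cost, relative to the budget) is a hypothetical choice of ours, not part of wABA.</p>

```python
# Extensions of Example 5.3 as (accepted principles, inconsistency cost).
extensions = [
    ({"respect_autonomy"}, 0),
    ({"act_beneficently"}, 0),
    ({"respect_autonomy", "act_beneficently"}, 7),
]

def principle_scores(extensions, budget=7):
    """Score each principle across extensions, discounting an extension of
    cost c by the factor 1 - c / (budget + 1); the discount is an
    illustrative choice, not mandated by wABA. Scores are normalized to 1."""
    scores = {}
    for principles, c in extensions:
        discount = 1 - c / (budget + 1)
        for p in principles:
            scores[p] = scores.get(p, 0.0) + discount
    total = sum(scores.values())
    return {p: round(s / total, 3) for p, s in scores.items()}
```

<p>In this symmetric example both principles end up with equal normalized score 0.5; the asymmetry surfaces instead in the supported weights of the recommended actions (9 vs. 7).</p>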
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Related Work</title>
      <p>Resolving conflicts between ethical principles has long been a core challenge in both normative ethics
and applied biomedical ethics. Clinical decision-making frequently necessitates the careful balancing of
competing moral considerations, such as honoring patient autonomy, promoting well-being, preventing
harm, and ensuring fairness. The principlist approach proposed by Beauchamp and Childress [13]
remains a cornerstone of biomedical ethics. It articulates four central principles (autonomy, beneficence,
nonmaleficence, and justice) that guide ethical decision-making. In his influential work [14], Floridi
further argues that a fifth principle, namely explicability, is a required addition to those principles,
specifically for the needs of AI ethics. However, these ethical frameworks do not prescribe a unique resolution
when these principles come into conflict, prompting the need for formal methods that can assist in
principled judgment. Subsequent proposals, such as the mixed consequentialist-nonconsequentialist
hierarchy, advocate balancing principles within categories (e.g., nonmaleficence vs. beneficence)
before applying lexical priority to nonconsequentialist outcomes [15]. Similarly, [16] provides a general
workflow to derive concrete rules from general principles, taking into account the specificity of the
contexts, and the allowable exceptions to rules. This approach has been subsequently implemented in
Datalog [17]. These frameworks emphasize the role of case-based reasoning and iterative specification
to resolve dilemmas.</p>
      <p>Several computational approaches have been proposed to tackle this issue. Anderson and Anderson,
in their MedEthEx [18] and EthEl [19], operationalize Beauchamp and Childress’ principles through
machine learning, deriving decision rules from biomedical ethicists’ intuitions in training cases. In these
systems, the intensity of duty violations (e.g., autonomy vs. beneficence) is quantified by assigning it a
weight; then, for each possible action, the system computes the weighted sum of duty satisfaction. After
that, using inductive logic programming (ILP), their systems generate actionable rules for recurring
dilemmas. In [20] and [21], Rossi and her team formalize principles as constraint-based systems with
conflict resolution engines. They argue that preferences can model the relative importance of ethical
principles and help resolve conflicts by identifying most preferred outcomes, given context-sensitive
constraints. Their work leverages CP-nets and weighted constraints to evaluate ethical options, offering
a flexible and explainable approach to value-sensitive decision-making. In [22], Kleiman-Weiner et al.
suggest an abstract and recursive utility calculus to resolve conflicts among moral principles. Moral
theories (for the purposes of trading off different agents’ interests) can be formalized as values or
weights that an agent attaches to a set of abstract principles for how to factor any other agents’ utility
functions into their own utility-based decision-making and judgment. Drawing on machine learning
and computational social choice, [23] proposes an algorithm to learn a model of societal preferences, and,
when faced with a specific ethical dilemma at runtime, aggregates those preferences to identify a desirable
choice. Awad et al. [24] propose a framework for incorporating public opinion, as an essential tool, into
policy making in situations where ethical values are in conflict. Their framework advocates creating
vignettes representing abstract value choices, eliciting the public’s opinion on these choices, and using
machine learning, in particular ILP, to extract principles that can serve as succinct statements of the
policies implied by these choices and rules that can be embedded in algorithms to guide the behavior of
AI-based systems. To present the functionality of this proposal, the authors borrow vignettes from the
Moral Machine website [25].</p>
      <p>Dennis et al. [26] developed the ETHAN (a BDI (Belief-Desire-Intention) agent language) system
that deals with situations when civil air navigation regulations are in conflict. The system relates these
rules to four hierarchical ordered ethical principles (do not harm people, do not harm animals, do not
damage self, and do not damage property) and develops a course of action that generates the smallest
violation to those principles in case of conflict. In their prototype, ethical reasoning was integrated
into a BDI agent programming language via the agent plan selection mechanism. [27] implement a
BDI architecture within a multi-agent system (MAS), with a particular emphasis on handling norm
conflicts. In their framework, agents are capable of adopting and dynamically updating norms, and
they determine which norms to activate based on the current context, their desires, and their intentions.
Conflicts between norms are resolved by selecting the norm that best contributes to the fulfillment of
the agent’s goals and intentions, effectively embedding norm adherence within the agent’s motivational
structure. Similarly, Mermet and Simon [28] address norm conflicts by distinguishing between moral
and ethical rules, the latter being invoked when moral rules are in conflict. They verify whether their
system, GDT4MAS, is able to choose the correct ethical rule in conflict cases.</p>
      <p>Chorley et al. in [29] described an implementation of the approach to deliberation about a choice of
action based on presumptive argumentation and associated critical questions [30]. The authors use the
argument scheme proposed in [31] to generate presumptive arguments for and against actions, and
then subject these arguments to critical questioning. They have explored automation of argumentation
for practical reasoning by a single agent in a multi-agent context, where agents may have conflicting
values. Their approach was illustrated with a particular example based on an ethical dilemma.</p>
<p>[32] uses abstract argumentation for decision-making with multiple experts. The approach represents
the lack of fairness in the evaluation performed by an expert in the form of attacks in abstract
argumentation, which may thus lead to the dismissal of the evaluation. Finally, [33] is directly related
to our approach. It introduces a moral advisor based on logic-based normative systems integrated
with ASPIC-style structured argumentation, in order to handle moral dilemmas involving multiple
stakeholders bearing different interests and ethical views.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion and future work</title>
      <p>
We have introduced wABA, together with an ASP-based implementation. We applied our formalism to a
patient dilemma scenario, showing how it can smoothly integrate the weighted assessment of a medical
situation, the ethical principles involved, and possibly weighted solutions to the dilemma. We believe
that this framework may provide a useful module for implementing reasoning with ethical principles in
AI systems that interact with humans. Ongoing work is devoted to a detailed formal analysis of the
framework, an extension of the implementation, and its applications in the ethical domain. Concerning
the first aspect, we plan to analyze the computational properties of wABA and its implementation,
such as correctness w.r.t. the intended semantics and analysis of its complexity. We would also like to
further explore the relation between different choices of the semantics and of the semiring. We
plan to compare our framework to other approaches, based on structured argumentation frameworks,
in particular involving preferences, such as ASPIC+ [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and ABA+ [34].
      </p>
      <p>Concerning the implementation, we are incorporating further formal argumentation semantics and
operations for the aggregation of weights.</p>
      <p>Finally, for applications, we are currently investigating realistic scenarios, involving reasoning with
data and conflicting ethical principles, where we believe that our system may provide valuable support.
A particularly promising application in this sense would be the analysis of medical triage, with its
intricate legal and ethical regulations [35].</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work was partially supported by the project FAIR- Future AI Research (PE00000013), under the
NRRP MUR program funded by the NextGenerationEU.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
<p>The authors have not employed any Generative AI tools.</p>
      <p>[7] F. Toni, A tutorial on assumption-based argumentation, Argument &amp; Computation 5 (2014) 89–117. doi:10.1080/19462166.2013.869878.
[8] A. Borg, D. Odekerken, PyArg for solving and explaining argumentation in Python: Demonstration, in: F. Toni, S. Polberg, R. Booth, M. Caminada, H. Kido (Eds.), Computational Models of Argument - Proceedings of COMMA 2022, Cardiff, Wales, UK, 14-16 September 2022, volume 353 of Frontiers in Artificial Intelligence and Applications, IOS Press, 2022, pp. 349–350. doi:10.3233/FAIA220167.
[9] P. E. Dunne, A. Hunter, P. McBurney, S. Parsons, M. Wooldridge, Weighted argument systems: Basic definitions, algorithms, and complexity results, Artificial Intelligence 175 (2011) 457–486. doi:10.1016/j.artint.2010.09.005.
[10] S. Bistarelli, F. Santini, Weighted argumentation, FLAP 8 (2021) 1589–1622. URL: https://collegepublications.co.uk/ifcolog/?00048.
[11] M. Gebser, R. Kaminski, B. Kaufmann, T. Schaub, Multi-shot ASP solving with clingo, Theory and Practice of Logic Programming 19 (2019) 27–82. doi:10.1017/S1471068418000054.
[12] T. Lehtonen, J. Wallner, M. Järvisalo, ASPforABA: ASP-based algorithms for reasoning in ABA, 2023.
[13] T. L. Beauchamp, J. F. Childress, et al., Principles of biomedical ethics, eighth ed., Oxford University Press, USA, 2019.
[14] L. Floridi, The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities, Oxford University Press, 2023. doi:10.1093/oso/9780198883098.001.0001.
[15] R. M. Veatch, Resolving conflicts among principles: ranking, balancing, and specifying, Kennedy Institute of Ethics Journal 5 (1995) 199–218.
[16] B. Townsend, C. Paterson, T. T. Arvind, G. Nemirovsky, R. Calinescu, A. Cavalcanti, I. Habli, A. Thomas, From Pluralistic Normative Principles to Autonomous-Agent Rules, Minds and Machines 32 (2022) 683–715. doi:10.1007/s11023-022-09614-w.
[17] M. Mirani, F. Raimondi, N. Troquard, Towards Efficient Norm-Aware Robots’ Decision Making Using Datalog, in: 3rd Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming - Joint Workshop @ AIxIA 2024, Bolzano, Italy, volume 7713, 2024.
[18] M. Anderson, S. L. Anderson, C. Armen, MedEthEx: Toward a medical ethics advisor, in: Caring Machines: AI in Eldercare, Papers from the 2005 AAAI Fall Symposium, Arlington, Virginia, USA, November 4-6, 2005, volume FS-05-02 of AAAI Technical Report, AAAI Press, USA, 2005, pp. 9–16. URL: https://www.aaai.org/Library/Symposia/Fall/fs05-02.php.
[19] M. Anderson, S. L. Anderson, ETHEL: toward a principled ethical eldercare system, in: AI in Eldercare: New Solutions to Old Problems, Papers from the 2008 AAAI Fall Symposium, Arlington, Virginia, USA, November 7-9, 2008, volume FS-08-02 of AAAI Technical Report, AAAI, USA, 2008, pp. 4–11. URL: http://www.aaai.org/Library/Symposia/Fall/fs08-02.php.
[20] F. Rossi, Safety constraints and ethical principles in collective decision making systems, in: S. Hölldobler, M. Krötzsch, R. Peñaloza, S. Rudolph (Eds.), KI 2015: Advances in Artificial Intelligence - 38th Annual German Conference on AI, Dresden, Germany, September 21-25, 2015, Proceedings, volume 9324 of Lecture Notes in Computer Science, Springer, 2015, pp. 3–15. doi:10.1007/978-3-319-24489-1_1.
[21] A. Loreggia, N. Mattei, F. Rossi, K. B. Venable, Preferences and ethical principles in decision making, in: J. Furman, G. E. Marchant, H. Price, F. Rossi (Eds.), Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES 2018, New Orleans, LA, USA, February 02-03, 2018, ACM, 2018, p. 222. doi:10.1145/3278721.3278723.
[22] M. Kleiman-Weiner, R. Saxe, J. B. Tenenbaum, Learning a commonsense moral theory, Cognition 167 (2017) 107–123. doi:10.1016/j.cognition.2017.03.005.
[23] R. Noothigattu, S. N. S. Gaikwad, E. Awad, S. Dsouza, I. Rahwan, P. Ravikumar, A. D. Procaccia, A voting-based system for ethical decision making, in: S. A. McIlraith, K. Q. Weinberger (Eds.),</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Dung</surname>
          </string-name>
          ,
          <article-title>On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>77</volume>
          (
          <year>1995</year>
          )
          <fpage>321</fpage>
          -
          <lpage>357</lpage>
. doi:10.1016/0004-3702(94)00041-X.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
<given-names>F. H.</given-names>
            <surname>van Eemeren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Verheij</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. F.</given-names>
            <surname>Gordon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Baroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Caminada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Brewka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ellmauthaler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Strass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Wallner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Woltran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schulz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Besnard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hunter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Macagno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Walton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Reed</surname>
          </string-name>
          , Handbook of Formal Argumentation, College Publications,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gabbay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Giacomin</surname>
          </string-name>
          , G. Simari, Handbook of Formal Argumentation, Volume
          <volume>2</volume>
          , v. 2,
College Publications,
          <year>2021</year>
          . URL: https://books.google.it/books?id=JUekzgEACAAJ.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>W. D.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <source>The Right and the Good</source>
          , Oxford University Press, Oxford, UK,
          <year>1930</year>
. doi:10.2307/2180065.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Carnevale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lombardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Lisi</surname>
          </string-name>
          ,
          <article-title>A human-centred approach to symbiotic AI: Questioning the ethical and conceptual foundation</article-title>
          ,
          <source>Intelligenza Artificiale</source>
          <volume>18</volume>
          (
          <year>2024</year>
          )
          <fpage>9</fpage>
          -
          <lpage>20</lpage>
. doi:10.3233/IA-240034.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Dyoub</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Lisi</surname>
          </string-name>
          ,
          <article-title>Towards Ethical Risk Assessment of Symbiotic AI Systems with Fuzzy Rules</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>3881</volume>
          (
          <year>2024</year>
          )
          <fpage>36</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>