<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshop on Advances in Argumentation in Artificial Intelligence, September</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Emanuele De Angelis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maurizio Proietti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca Toni</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IASI-CNR</institution>
          ,
          <addr-line>Rome</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Imperial College London</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>13</volume>
      <issue>2025</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Argumentative learning amounts to integrating argumentative reasoning into forms of machine learning from examples. Amongst several approaches, ABA Learning is a form of argumentative learning that, given a background knowledge, and positive and negative examples, derives an Assumption-Based Argumentation (ABA) framework. The learnt ABA frameworks can be deployed to make run-time inference about previously unseen examples, even after having seen very few positive and negative examples. This inference is determined by (non-)acceptance of examples in extensions of the ABA frameworks. However, it may be impossible to determine definite (non-)acceptance when the learnt ABA frameworks admit no or several extensions. In this paper, we explore how this behaviour can be managed by “agentifying” ABA learning. This agentification amounts to leveraging the use of rules in non-flat ABA frameworks, representing denial integrity constraints, towards definite conclusions. Specifically, agentified ABA Learning can identify actions in the external environment aimed at generating observations for expanding the original ABA frameworks so that they admit extensions and at choosing amongst the extensions of (expanded) ABA frameworks.</p>
      </abstract>
      <kwd-group>
        <kwd>Assumption-based Argumentation</kwd>
        <kwd>Agentic AI</kwd>
        <kwd>Symbolic Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Argumentative learning amounts to integrating argumentative reasoning into forms of machine learning
from examples [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. ABA Learning [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ] is a recent approach to argumentative learning, generating
Assumption-based Argumentation (ABA) frameworks [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>
        ] from (possibly very few) positive and
negative examples of learnable predicates, given an initial ABA framework serving as a background
knowledge. The ABA frameworks generated by ABA Learning can be deployed to make run-time
inference about previously unseen examples, by determining whether these examples are accepted or
not in extensions (such as stable extensions [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>
        ]) of the frameworks expanded with information
about the unseen examples. However, especially when the (positive and negative) examples seen
during training are very few, the learnt ABA frameworks may admit no or several extensions, thus
leading to the inability to determine definite (non-)acceptance of the new examples. For illustration
(see Section 3 for a formalisation with the help of the well-known Nixon diamond example [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]), the
learnt ABA framework may include (non-defeasible) rules that quakers are pacifists and republicans
are militarists, as well as background knowledge that pacifists tend to vote against war and militarists
tend to vote for war, but individuals cannot vote for both (here votes are assumptions, each being the
contrary of the other). Suppose then that, at inference time, we are interested in determining whether a
previously unseen individual nixon, who is both a quaker and a republican, is pacifist or militarist: the
learnt ABA framework admits no stable extension (as both voting assumptions need to be accepted, but
they are in conflict), and, if modified so that the learnt rules become defeasible, we get two different ABA
frameworks accepting conflicting claims about nixon. So no conclusion can be drawn about nixon.
      </p>
      <p>
        In this paper we explore how this behaviour can be managed by “agentifying” ABA learning. Like
in standard autonomous agent systems [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], ABA Learning agents are able to decide actions to be
performed in the external environment (possibly including humans, data repositories and/or other
agents) and draw observations from this environment as concerns the actions’ outcomes. The actions
consist in consulting an external source (human, agent or data repository) in the environment; the
observations amount to learning the outcome of the consultation. To continue the nixon illustration
(again, see Section 3 for details), an ABA Learning agent will decide to check how nixon voted in
the past and, upon observing that he voted for war, extend the learnt ABA framework to obtain a
single stable extension where nixon is militarist. The action results from the need to “satisfy” the
original background knowledge (that pacifists/militarists tend to vote against/for war but individuals
cannot vote for both). This behaviour is aided by the presence, in the ABA frameworks, of rules with
“actionable” assumptions (e.g. votes) in the head. Thus, our agentification of ABA Learning needs to
extend it beyond flat ABA frameworks (without assumptions in the head of rules) that have been the
focus of existing approaches to date [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ].
      </p>
      <p>This paper introduces the general vision of agentified ABA Learning (Section 4), after providing
the core background (Section 2) and formalising the earlier illustration (Section 3). It concludes by
discussing directions for future work (Section 5).</p>
      <p>
        Related work Our use of rules with assumptions in the head towards actions aimed at choosing
amongst extensions of (expanded) ABA frameworks is reminiscent of enforcement of integrity
constraints seen as agents’ goals, e.g. in the spirit of [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. Indeed, non-flat ABA rules can be equivalently
understood as denials, under stable extensions [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Works on argumentative agents exist (e.g. towards
persuasion [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]), including recent ones where agents are based on Large Language Models (e.g. for
explainability [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]). Unlike these works, we see agentification as a way to support a form of active
learning.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>
        An ABA framework [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>
        ] is a tuple ⟨ℒ, ℛ, 𝒜, ‾⟩ such that
• ⟨ℒ, ℛ⟩ is a deductive system, where ℒ is a language and ℛ is a set of (inference) rules of the form
s₀ ← s₁, . . . , sₘ (m ≥ 0, sᵢ ∈ ℒ, for 1 ≤ i ≤ m);
• 𝒜 ⊆ ℒ is a (non-empty) set of assumptions;1
• ‾ is a total mapping from 𝒜 into ℒ, where a‾ is the contrary of a, for a ∈ 𝒜.
      </p>
      <p>
        Given a rule s₀ ← s₁, . . . , sₘ, s₀ is the head and s₁, . . . , sₘ is the body; if m = 0 then the body is said
to be empty (represented as s₀ ← ) and the rule is called a fact. Elements of ℒ can be any sentences,
but in this paper we focus on ABA frameworks where ℒ is a finite set of ground atoms. However, we
will use schemata for rules, assumptions and contraries, using variables, similarly to logic programs, to
represent compactly all instances over some underlying universe. We will also use equalities of the form
t₁ = t₂, where t₁, t₂ are ground terms, and we assume that, for all ground terms t, the fact t = t ← is in
ℛ. In particular, we will feel free to write a fact p(c) ← , with c a (tuple of) terms, as p(X) ← X = c, with
X a (tuple of) variables. Unlike other works [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ], in this paper ABA frameworks are not required
to be flat, and thus assumptions can be heads of rules. As customary, we leave ℒ implicit, and use
⟨ℛ, 𝒜, ‾⟩ to stand for ⟨ℒ, ℛ, 𝒜, ‾⟩.
      </p>
      <p>
        The semantics of an ABA framework is given in terms of sets of assumptions, called extensions (for a
formal definition see, e.g., [
        <xref ref-type="bibr" rid="ref5 ref6 ref8">5, 8, 6</xref>
        ]). A set of assumptions Δ ⊆ 𝒜 attacks an assumption a ∈ 𝒜 if there
is a finite deduction (i.e., an argument) from Δ′ ⊆ Δ to a‾ using rules in ℛ; a set of assumptions Δ₁ ⊆ 𝒜
attacks a set of assumptions Δ₂ ⊆ 𝒜 if Δ₁ attacks some a ∈ Δ₂. Then, a set of assumptions Δ ⊆ 𝒜 is
stable if Δ is conflict-free (i.e., Δ does not attack itself), closed (i.e., there is no a ∉ Δ such that there is
an argument, i.e., a deduction, from some Δ′ ⊆ Δ to a using rules in ℛ), and Δ attacks all a ∉ Δ. An
ABA framework is satisfiable if it admits at least one stable extension. A claim s ∈ ℒ is accepted in a
stable extension Δ of an ABA framework 𝐹 if there is an argument from Δ to s using rules of 𝐹.
1The non-emptiness requirement can always be satisfied [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Example 1. Consider the following non-flat ABA framework ⟨ℛ, 𝒜, ‾⟩, where:2
ℛ = { q(X) ← p(X), b(1) ← , q(2) ← , p(3) ← } ;
𝒜 = { a(X), b(X) };
a(X)‾ = p(X), b(X)‾ = q(X);
for X ∈ {1, 2, 3}. A stable extension is given by { a(1), b(1), a(2) }. Instead, for example, { a(1) } is not
stable as it is not closed, { a(3) } is not stable as it is not conflict-free, and { a(1), b(1) } is not stable as it
does not attack a(2).</p>
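For small ground frameworks such as the one in Example 1, stable extensions can be enumerated by brute force directly from the definitions above. The following is a minimal illustrative sketch (not the authors' implementation); atoms are encoded as (predicate, argument) pairs, and the predicate names mirror the example only for illustration:

```python
from itertools import combinations

def derive(rules, premises):
    # Forward chaining: all atoms derivable from the premises via the rules
    # (facts are rules with an empty body, so they fire from any premise set).
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

def is_stable(rules, assumptions, contrary, delta):
    claims = derive(rules, delta)
    closed = all(a in delta for a in assumptions if a in claims)
    conflict_free = all(contrary[a] not in claims for a in delta)
    attacks_rest = all(contrary[a] in claims for a in assumptions - delta)
    return closed and conflict_free and attacks_rest

def stable_extensions(rules, assumptions, contrary):
    # Brute force over all subsets of assumptions (fine for tiny examples).
    asms = sorted(assumptions)
    return [set(s) for n in range(len(asms) + 1)
            for s in combinations(asms, n)
            if is_stable(rules, assumptions, contrary, set(s))]

# A framework in the spirit of Example 1 (names are illustrative):
U = [1, 2, 3]
rules = ([(("q", i), [("p", i)]) for i in U]                   # q(X) <- p(X)
         + [(("b", 1), []), (("q", 2), []), (("p", 3), [])])   # facts
assumptions = {(p, i) for p in ("a", "b") for i in U}
contrary = {("a", i): ("p", i) for i in U} | {("b", i): ("q", i) for i in U}

print(stable_extensions(rules, assumptions, contrary))
```

Under this encoding the enumeration returns a single stable extension, {a(1), b(1), a(2)}, and confirms that {a(1)}, {a(3)} and {a(1), b(1)} fail closedness, conflict-freeness and attack of all excluded assumptions, respectively.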
      <p>
        ABA Learning [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ] is a method that, given background knowledge, in the form of a satisfiable
ABA framework 𝐹 = ⟨ℛ, 𝒜, ‾⟩, positive examples ℰ+ ⊆ ℒ, and negative examples ℰ− ⊆ ℒ, derives
an ABA framework 𝐹′ = ⟨ℛ′, 𝒜′, ‾′⟩, with ℛ ⊆ ℛ′, 𝒜 ⊆ 𝒜′, ‾ ⊆ ‾′, such that 1) 𝐹′ admits a stable
extension Δ, 2) all positive examples are accepted in Δ, and 3) no negative example is accepted in Δ.3
      </p>
      <p>
        ABA Learning makes use of transformation rules, including the following ones:4 (1) rote learning,
which, given a positive example p(c), introduces a new rule p(X) ← X = c; (2) folding, which, given
rules H ← B, E and K ← E, derives the new rule H ← B, K; and (3) assumption introduction, which,
given a rule H ← B, introduces an assumption α, with contrary α‾, and derives the new rule H ← B, α.
As mentioned above, we can freely introduce equalities, which can then occur in rule premises (e.g.,
in B, E). Thus, for instance, if ℛ contains two facts p(c) ← and q(c) ← , we can rewrite them into
p(X) ← X = c and q(X) ← X = c, and by folding, we can derive p(X) ← q(X). In previous
work [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ], we have presented various learning algorithms, in the case of flat ABA frameworks, based
on these transformation rules. We will see examples of their application in our non-flat setting in the
next section.
      </p>
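The three transformation rules are purely syntactic, and can be sketched as operations on (head, body) pairs. This is an illustrative encoding under stated assumptions, not the implementation of [2, 3, 4]; all atom and variable names below are hypothetical:

```python
# A rule is a (head, body) pair; atoms are (predicate, argument) tuples and
# equalities are encoded as strings such as "X = c".

def rote_learning(example, var="X"):
    # Given a ground positive example p(c), introduce the rule p(X) <- X = c.
    pred, const = example
    return ((pred, var), [f"{var} = {const}"])

def folding(rule1, rule2):
    # Given H <- B, E and K <- E, derive the new rule H <- B, K.
    (h, body1), (k, e) = rule1, rule2
    assert all(x in body1 for x in e), "second rule's body must occur in the first's"
    rest = [x for x in body1 if x not in e]
    return (h, rest + [k])

def assumption_introduction(rule, asm):
    # Given H <- B and a fresh assumption asm, derive the new rule H <- B, asm.
    h, body = rule
    return (h, body + [asm])

# Rote learning from the example p(c), then folding with the rewritten fact
# q(X) <- X = c, derives the generalised rule p(X) <- q(X):
r1 = rote_learning(("p", "c"))                  # p(X) <- X = c
r2 = folding(r1, rote_learning(("q", "c")))     # p(X) <- q(X)
print(r2)
```

Here `r2` is `(('p', 'X'), [('q', 'X')])`, i.e. the generalised rule p(X) ← q(X), matching the folding step described above.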
    </sec>
    <sec id="sec-3">
      <title>3. ABA Learning through Action and Observation</title>
      <p>
        We illustrate our idea of an ABA Learning agent that proactively interacts with the environment, by
means of an example – a variant of the well-known Nixon diamond problem proposed in the field
of non-monotonic reasoning [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. By considering non-flat ABA frameworks, we are able to represent
background knowledge in a richer way, also through certain types of denials that enforce a form of
integrity constraints. For instance, the following ABA framework ⟨ℛ, 𝒜, ‾⟩ represents that individuals
a, b are quakers, that individual c is a republican, and the general information that pacifists vote against
war, while militarists vote for war.
      </p>
      <p>ℛ = { quaker(a) ← , quaker(b) ← , republican(c) ← ,
voted(X, against_war) ← pacifist(X), voted(X, pro_war) ← militarist(X) }
𝒜 = { voted(X, pro_war), voted(X, against_war) }
voted(X, against_war)‾ = voted(X, pro_war)   voted(X, pro_war)‾ = voted(X, against_war)</p>
      <p>This ABA framework is non-flat because the heads of the last two rule schemata in ℛ are assumptions,
voted(X, against_war) and voted(X, pro_war), each the contrary of the other. Suppose now
that we want to learn an explicit definition of the concepts pacifist and militarist from the background
knowledge and the following positive and negative examples:</p>
      <p>
        ℰ+ = { pacifist(a), pacifist(b), militarist(c) }   ℰ− = { militarist(a), militarist(b), pacifist(c) }
This ABA Learning problem can be solved by applying the transformation rules presented in the
previous section, according to one of the ABA Learning algorithms5 presented in recent work [
        <xref ref-type="bibr" rid="ref16 ref3 ref4">3, 4, 16</xref>
        ].
2As in [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ], we give components of ABA frameworks as schemata, with variables in capital letters implicitly universally
quantified with scope the schemata in which they occur.
3Note that this definition, originally given for flat ABA frameworks only, naturally extends to our setting, by adopting the
notion of acceptance given earlier.
4Here we present only the instances of the rules that are sufficient to present the example in the next section. For more
extended versions, we refer to previous work [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ].
5The specific algorithm is not relevant in our development.
      </p>
      <p>By rote learning, we introduce a new (ABA) rule:6</p>
      <p>ℛ1. pacifist(X) ← X = a.</p>
      <p>Then, we rewrite the rule ‘quaker(a) ← ’ as ‘quaker(X) ← X = a’ and, by folding, we replace X = a
in ℛ1 by its consequence quaker(X). Hence, we get a generalised rule:</p>
      <p>ℛ2. pacifist(X) ← quaker(X).</p>
      <p>Similarly, we can learn the rule:</p>
      <p>ℛ3. militarist(X) ← republican(X).</p>
      <p>Thus, we have learnt an ABA framework, where quakers are pacifists and republicans are militarists.</p>
      <p>Suppose now that we get a new observation in the form of a new individual, e.g., nixon, who is both
a quaker and a republican. We can add the new information to the rules of the background knowledge,
thereby getting the new set of rules</p>
      <p>ℛ′ = ℛ ∪ { ℛ2, ℛ3, quaker(nixon) ← , republican(nixon) ← } .</p>
      <p>Unfortunately, the ABA framework with rules ℛ′ admits no stable extensions, as any closed
set of assumptions is not conflict-free. Indeed, on one hand, from quaker(nixon), by ℛ2, we get pacifist(nixon),
and therefore, by voted(X, against_war) ← pacifist(X), we conclude voted(nixon, against_war).
On the other hand, from republican(nixon), by ℛ3, we get militarist(nixon), and therefore, by
voted(X, pro_war) ← militarist(X), we conclude voted(nixon, pro_war), which is the contrary of
voted(nixon, against_war). Thus, we can conclude neither pacifist(nixon) nor militarist(nixon).</p>
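The failure can be checked mechanically by brute force over the assumptions about nixon. The sketch below is an illustrative encoding (ground atoms written as plain strings; predicate names follow the example, but the encoding itself is hypothetical), not the authors' implementation:

```python
from itertools import combinations

def derive(rules, premises):
    # Forward chaining: all atoms derivable from the premises via the rules.
    known, changed = set(premises), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

def is_stable(rules, assumptions, contrary, delta):
    claims = derive(rules, delta)
    return (all(a in delta for a in assumptions if a in claims)        # closed
            and all(contrary[a] not in claims for a in delta)          # conflict-free
            and all(contrary[a] in claims for a in assumptions - delta))

def stable_extensions(rules, assumptions, contrary):
    asms = sorted(assumptions)
    return [set(s) for n in range(len(asms) + 1)
            for s in combinations(asms, n)
            if is_stable(rules, assumptions, contrary, set(s))]

# The framework with rules R' restricted to the ground instances about nixon:
rules = [("quaker(nixon)", []), ("republican(nixon)", []),
         ("pacifist(nixon)", ["quaker(nixon)"]),            # R2
         ("militarist(nixon)", ["republican(nixon)"]),      # R3
         ("voted(nixon,against_war)", ["pacifist(nixon)"]),
         ("voted(nixon,pro_war)", ["militarist(nixon)"])]
assumptions = {"voted(nixon,against_war)", "voted(nixon,pro_war)"}
contrary = {"voted(nixon,against_war)": "voted(nixon,pro_war)",
            "voted(nixon,pro_war)": "voted(nixon,against_war)"}

print(stable_extensions(rules, assumptions, contrary))   # prints [] (no stable extension)
```

Both voting assumptions are derivable from the empty set (via the non-defeasible rules ℛ2 and ℛ3), so closedness forces both into any candidate extension, which then attacks itself; hence the enumeration finds no stable extension.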
      <p>We now show how to use ABA Learning, together with an act of learning from an external source,
to derive a new ABA framework 𝐹′′ that admits a stable extension, where either pacifist(nixon) or
militarist(nixon) is accepted. First of all, the rules learnt thus far are rendered defeasible by applying
the assumption introduction transformation and deriving the new rules:
ℛ4. pacifist(X) ← quaker(X), normal_quaker(X)
ℛ5. militarist(X) ← republican(X), normal_republican(X)
where normal_quaker(X) and normal_republican(X) are new assumptions with contraries
normal_quaker(X)‾ = abnormal_quaker(X) and
normal_republican(X)‾ = abnormal_republican(X),
respectively. Now, by rote learning, we may add one of the two rules:
ℛ6. abnormal_quaker(X) ← X = nixon
ℛ7. abnormal_republican(X) ← X = nixon
each of which disallows the deduction of conflicting claims about nixon. By folding, from ℛ6 and ℛ7,
respectively, we get:
ℛ8. abnormal_quaker(X) ← republican(X)
ℛ9. abnormal_republican(X) ← quaker(X)
The two derived ABA frameworks have rules, respectively:
ℛ ∪ { ℛ4, ℛ5, ℛ8, quaker(nixon) ← , republican(nixon) ← } .</p>
      <p>ℛ ∪ { ℛ4, ℛ5, ℛ9, quaker(nixon) ← , republican(nixon) ← } .</p>
      <p>They admit two distinct stable extensions: one accepting militarist(nixon) (and
hence voted(nixon, pro_war)) and the other accepting pacifist(nixon) (and hence
voted(nixon, against_war)). To decide which atom between voted(nixon, pro_war) and
voted(nixon, against_war) should be accepted, we assume that we can issue an action and
consult the past voting record for nixon. In general, we assume that (some of) the assumptions are
actionable, that is, they correspond to actions resulting in the acquisition (or rejection) of atoms as valid
facts of the background knowledge. Suppose that, in our example, the inquiry of nixon’s record established
voted(nixon, pro_war). Then, by rote learning, we add the rule:</p>
      <p>ℛ10. voted(X, Y) ← X = nixon, Y = pro_war.</p>
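The effect of the added rules can again be checked by brute force over the ground instances relevant to nixon. This is an illustrative sketch under stated assumptions (string-encoded atoms, hypothetical assumption names for the defeasibility assumptions), not the authors' implementation:

```python
from itertools import combinations

def derive(rules, premises):
    # Forward chaining: all atoms derivable from the premises via the rules.
    known, changed = set(premises), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

def is_stable(rules, assumptions, contrary, delta):
    claims = derive(rules, delta)
    return (all(a in delta for a in assumptions if a in claims)        # closed
            and all(contrary[a] not in claims for a in delta)          # conflict-free
            and all(contrary[a] in claims for a in assumptions - delta))

def stable_extensions(rules, assumptions, contrary):
    asms = sorted(assumptions)
    return [set(s) for n in range(len(asms) + 1)
            for s in combinations(asms, n)
            if is_stable(rules, assumptions, contrary, set(s))]

rules = [("quaker(nixon)", []), ("republican(nixon)", []),
         # defeasible learnt rules:
         ("pacifist(nixon)", ["quaker(nixon)", "normal_quaker(nixon)"]),
         ("militarist(nixon)", ["republican(nixon)", "normal_republican(nixon)"]),
         # the rule chosen by folding, defeating the quaker rule:
         ("abnormal_quaker(nixon)", ["republican(nixon)"]),
         # the observed voting record, added by rote learning, as a ground fact:
         ("voted(nixon,pro_war)", []),
         # background: pacifists/militarists vote against/for war:
         ("voted(nixon,against_war)", ["pacifist(nixon)"]),
         ("voted(nixon,pro_war)", ["militarist(nixon)"])]
assumptions = {"voted(nixon,against_war)", "voted(nixon,pro_war)",
               "normal_quaker(nixon)", "normal_republican(nixon)"}
contrary = {"voted(nixon,against_war)": "voted(nixon,pro_war)",
            "voted(nixon,pro_war)": "voted(nixon,against_war)",
            "normal_quaker(nixon)": "abnormal_quaker(nixon)",
            "normal_republican(nixon)": "abnormal_republican(nixon)"}

exts = stable_extensions(rules, assumptions, contrary)
print(exts)
print("militarist(nixon)" in derive(rules, exts[0]))   # True
```

Under this encoding a single stable extension is found, containing the pro-war voting assumption, and militarist(nixon) (but not pacifist(nixon)) is accepted in it.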
      <p>6We assign identifiers ℛ𝑖 to rules for ease of reference.</p>
      <p>This last step can be seen as the result of an active learning step. By doing so, we obtain an ABA
framework 𝐹′′ with ℛ′′ = ℛ ∪ { ℛ4, ℛ5, ℛ8, ℛ10, quaker(nixon) ← , republican(nixon) ← } , 𝒜′′ =
𝒜 ∪ { normal_quaker(X), normal_republican(X) }, and the contrary mapping extended to the
new assumptions as indicated above. 𝐹′′ has a single stable extension, which accepts the conclusion
militarist(nixon).</p>
    </sec>
    <sec id="sec-4">
      <title>4. Vision</title>
      <p>
        Figure 1 summarizes the approach to agentified ABA Learning that we propose in this paper. Conventional
ABA Learning, in its original formulation [
        <xref ref-type="bibr" rid="ref16 ref2 ref3 ref4">2, 3, 4, 16</xref>
        ], is depicted in the upper part of the figure: it takes
positive and negative examples and a background knowledge to generate an ABA framework (ABAF, for short)
from which the acceptability of claims, with respect to
a given semantics, can be predicted. This process
disregards the interaction with the external environment.
      </p>
      <p>Agentified ABA Learning aims to enhance this simple
schema, and enable ABA Learning to empower agent
interactions with the external environment. This may
possibly include other agents, humans as well as data
repositories, as depicted in the bottom part of Figure 1.
We advocate two main novelties: 1) the reliance on
non-flat ABAFs (whereas originally ABA Learning only considered flat ABAFs); and 2) the treatment
of some assumptions as actions to be executed in the environment. These actions are autonomously
identified to guarantee predictions for previously unseen cases and result in the addition of rules with
assumptions as heads to the learnt ABAF, by adapting the same ABA Learning process.</p>
      <p>Figure 1: Agentified ABA Learning reasons with the learnt ABAF to determine actions (assumptions)
to be performed in the environment and observes the outcomes of those actions to update its ABAF.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>We have proposed a novel vision for argumentative agents that can learn from examples while
autonomously deciding on actions to be executed in their environment to generate targeted expansions
of their knowledge. These argumentative agents are supported by an enhancement of ABA Learning,
leveraging non-flat ABA frameworks.</p>
      <p>
        Much future work is ahead of us. First, we need to formally define the enhanced ABA Learning
algorithm, catering in particular for non-flat ABA frameworks. Second, there is substantial work to be
done to realise our approach. To this end, we plan to use the recent understanding of non-flat ABA
frameworks as denial integrity constraints [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] to extend the existing ASP-based implementations of
conventional ABA Learning [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Third, we plan to explore the use of the enhanced ABA Learning
in different types of environment (with humans and/or data repositories and/or other agents), which
will require different approaches to render assumptions actionable. For example, if the environment
amounts to a human user, then actionability will amount to a conversation with the user that may
benefit from a Large Language Model. Lastly, it would be interesting to relate agentified ABA Learning
to argumentative forms of contestable AI [
        <xref ref-type="bibr" rid="ref17">17, 18</xref>
        ], given that the interactions with the environment can
be interpreted as contestations that need redressing via argumentative learning.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>We acknowledge support from the Royal Society, UK (IEC\R2\222045). Toni was partially funded by the ERC
(grant agreement No. 101020934) and by J.P. Morgan and the RAEng, UK, under the Research Chairs
Fellowships scheme (RCSRF2021\11\45). De Angelis and Proietti were supported by the MUR PRIN 2022
Project DOMAIN funded by the EU – NextGenerationEU (2022TSYYKJ, CUP B53D23013220006, PNRR,
M4.C2.1.1), by the PNRR MUR project PE0000013-FAIR (CUP B53C22003630006), and by the INdAM
GNCS Project Argomentazione Computazionale per apprendimento automatico e modellazione di sistemi
intelligenti (CUP E53C24001950001). De Angelis and Proietti are members of the INdAM-GNCS research
group.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
      <p>[18] V. Dignum, L. Michael, J. C. Nieves, M. Slavkovik, J. Suarez, A. Theodorou, Contesting black-box
AI decisions, in: AAMAS 2025, 2025, pp. 2854-2858.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] A. Rago, K. Cyras, J. Mumford, O. Cocarascu, Argumentation and machine learning, CoRR abs/2410.23724 (2024). doi:10.48550/ARXIV.2410.23724.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] M. Proietti, F. Toni, Learning assumption-based argumentation frameworks, in: ILP 2022, LNCS 13779, Springer, 2024, pp. 100-116. doi:10.1007/978-3-031-55630-2_8.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] E. De Angelis, M. Proietti, F. Toni, Learning brave assumption-based argumentation frameworks via ASP, in: ECAI 2024, volume 392 of FAIA, IOS Press, 2024, pp. 3445-3452. doi:10.3233/FAIA240896.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] E. De Angelis, M. Proietti, F. Toni, Greedy ABA learning for case-based reasoning, in: AAMAS 2025, 2025, pp. 556-564.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] A. Bondarenko, P. M. Dung, R. A. Kowalski, F. Toni, An abstract, argumentation-theoretic approach to default reasoning, Artif. Intell. 93 (1997) 63-101. doi:10.1016/S0004-3702(97)00015-5.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] P. Dung, R. Kowalski, F. Toni, Assumption-based argumentation, in: Argumentation in AI, Springer, 2009, pp. 199-218. doi:10.1007/978-0-387-98197-0_10.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] F. Toni, A tutorial on assumption-based argumentation, Argument &amp; Computation 5 (2014) 89-117. doi:10.1080/19462166.2013.869878.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] K. Cyras, X. Fan, C. Schulz, F. Toni, Assumption-based argumentation: Disputes, explanations, preferences, FLAP 4 (2017).</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] R. Reiter, G. Criscuolo, On interacting defaults, in: P. J. Hayes (Ed.), Proceedings of IJCAI '81, William Kaufmann, 1981, pp. 270-276.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] M. J. Wooldridge, Introduction to multiagent systems, Wiley, 2002.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] R. A. Kowalski, K. Satoh, Obligation as optimal goal satisfaction, J. Philos. Log. 47 (2018) 579-609. doi:10.1007/S10992-017-9440-3.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sadri</surname>
          </string-name>
          ,
          <article-title>Towards a unified agent architecture that combines rationality with reactivity</article-title>
          , in:
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zaniolo</surname>
          </string-name>
          (Eds.),
          <source>Logic in Databases, International Workshop LID'96, San Miniato, Italy, July 1-2</source>
          ,
          <year>1996</year>
          , Proceedings, volume
          <volume>1154</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>1996</year>
          , pp.
          <fpage>137</fpage>
          -
          <lpage>149</lpage>
          . doi:10.1007/BFB0031739.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rapberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ulbricht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          ,
          <article-title>On the correspondence of non-flat assumption-based argumentation and logic programming with negation as failure in the head</article-title>
          ,
          <source>in: NMR</source>
          <year>2024</year>
          , volume
          <volume>3835</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>112</fpage>
          -
          <lpage>121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rosenfeld</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kraus</surname>
          </string-name>
          ,
          <article-title>Strategical argumentative agent for human persuasion</article-title>
          ,
          <source>in: ECAI-PAIS</source>
          <year>2016</year>
          , volume
          <volume>285</volume>
          <source>of FAIA</source>
          , IOS Press,
          <year>2016</year>
          , pp.
          <fpage>320</fpage>
          -
          <lpage>328</lpage>
          . doi:10.3233/978-1-61499-672-9-320.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Argmed-agents: Explainable clinical decision reasoning with LLM discussion via argumentation schemes</article-title>
          ,
          <source>in: BIBM</source>
          <year>2024</year>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>5486</fpage>
          -
          <lpage>5493</lpage>
          . doi:10.1109/BIBM62325.2024.10822109.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>E.</given-names>
            <surname>De Angelis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Proietti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          ,
          <article-title>ABA learning via ASP</article-title>
          ,
          <source>in: ICLP</source>
          <year>2023</year>
          , volume
          <volume>385</volume>
          <source>of EPTCS</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi:10.4204/EPTCS.385.1.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>F.</given-names>
            <surname>Leofante</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ayoobi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dejl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Freedman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gorur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Paulino-Passos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rago</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rapberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Toni</surname>
          </string-name>
          ,
          <article-title>Contestable AI Needs Computational Argumentation</article-title>
          ,
          <source>in: KR</source>
          <year>2024</year>
          ,
          <year>2024</year>
          , pp.
          <fpage>888</fpage>
          -
          <lpage>896</lpage>
          . doi:10.24963/kr.2024/83.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>