<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>On Counterfactual and Semifactual Explanations in Abstract Argumentation</article-title>
        <subtitle>(Discussion Paper)</subtitle>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gianvincenzo Alfano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergio Greco</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Parisi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Irina Trubitsyna</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Informatics, Modeling, Electronics and System Engineering (DIMES), University of Calabria</institution>
          ,
          <addr-line>Rende</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>Explainable Artificial Intelligence and Formal Argumentation have received significant attention in recent years. Argumentation frameworks are useful for representing knowledge and reasoning about it. Counterfactual and semifactual explanations are interpretability techniques that provide insights into the outcome of a model by generating alternative hypothetical instances. While there has been important work on counterfactual and semifactual explanations for Machine Learning (ML) models, less attention has been devoted to these kinds of problems in argumentation. In this paper, we discuss counterfactual and semifactual reasoning in abstract Argumentation Frameworks, as recently proposed in [1].</p>
      </abstract>
      <kwd-group>
        <kwd>Formal Argumentation Theory</kwd>
        <kwd>Explainable AI</kwd>
        <kwd>Counterfactual and Semifactual Reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>[Figure: the AF of Example 1, whose arguments are fish, meat, pasta, white, and red.]</p>
      <p>Argumentation semantics can also be defined in terms of labellings [8]. Intuitively, a σ-labelling for
an AF is a total function ℒ assigning to each argument the label in if its status is accepted, out if its
status is rejected, and und if its status is undecided under semantics σ. For instance, the σ-labellings
for the AF Λ of Example 1, with σ ∈ {st, pr, sst}, are as follows:
ℒ1 = {in(fish), out(meat), out(pasta), in(white), out(red)},
ℒ2 = {in(fish), out(meat), out(pasta), out(white), in(red)},
ℒ3 = {out(fish), in(meat), out(pasta), out(white), in(red)},
ℒ4 = {out(fish), out(meat), in(pasta), out(white), in(red)},
where ℒi corresponds to extension Ei, for i ∈ [1..4].</p>
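      <p>To make the labelling semantics concrete, the labellings above can be enumerated by brute force. The following Python sketch assumes an attack relation for Example 1 (the example itself is not reproduced in this chunk) chosen to be consistent with the four stable labellings listed above: the three dishes attack each other, the two wines attack each other, and meat and pasta attack white.</p>
      <preformat>
```python
from itertools import combinations

# AF of Example 1 (the attack relation below is an assumption,
# reconstructed to agree with the four labellings in the text).
ARGS = ["fish", "meat", "pasta", "white", "red"]
ATTACKS = {("fish", "meat"), ("meat", "fish"), ("fish", "pasta"),
           ("pasta", "fish"), ("meat", "pasta"), ("pasta", "meat"),
           ("white", "red"), ("red", "white"),
           ("meat", "white"), ("pasta", "white")}

def stable_labellings(args, attacks):
    """A set E is stable iff it is conflict-free and attacks every
    argument outside E; the induced labelling uses no und label."""
    result = []
    for r in range(len(args) + 1):
        for ext in combinations(args, r):
            e = set(ext)
            conflict_free = not any((a, b) in attacks for a in e for b in e)
            attacks_rest = all(any((a, b) in attacks for a in e)
                               for b in args if b not in e)
            if conflict_free and attacks_rest:
                result.append({a: "in" if a in e else "out" for a in args})
    return result

for lab in stable_labellings(ARGS, ATTACKS):
    print(lab)  # the four labellings L1..L4 above
```
      </preformat>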
      <p>Integrating explanations in argumentation-based reasoners is important for enhancing the argumentation
and persuasion capabilities of software agents [9, 10, 11, 12]. For these reasons, several researchers have
explored how to deal with explanations in formal argumentation. Counterfactual and semifactual
explanations are types of interpretability techniques that provide insights into the outcome of a model
by generating hypothetical instances, known as counterfactuals and semifactuals, respectively [13, 14].
On one hand, a counterfactual explanation reveals what should have been different in an instance to
obtain a different outcome [15]; minimum changes w.r.t. the given instance are usually considered [16].
On the other hand, a semifactual explanation provides a maximally-changed instance yielding the same
outcome as the instance considered [17].</p>
      <p>While there has been interesting work on counterfactual and semifactual explanations for ML models,
e.g. [18, 19, 20, 21, 22, 23], less attention has been devoted to these problems in argumentation.</p>
      <p>
        In this paper, we discuss counterfactual and semifactual reasoning in AFs [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Analogously to
counterfactual explanations in ML, which reveal what should have been minimally different in an instance
to obtain a different outcome, our counterfactuals tell what should have been minimally different in a
solution, i.e. a σ-labelling with a given acceptance status for a goal argument, to obtain an alternative
solution where the goal has a different status.
      </p>
      <p>Example 2. Continuing with Example 1, assume that the chef suggests the menu ℒ3 = {out(fish),
in(meat), out(pasta), out(white), in(red)} and the customer replies that (s)he likes everything
except meat (as (s)he is vegetarian). Therefore, the chef looks for the closest menus not containing meat,
which are ℒ2 = {in(fish), out(meat), out(pasta), out(white), in(red)} and ℒ4 = {out(fish),
out(meat), in(pasta), out(white), in(red)}. In this context, we say that ℒ2 and ℒ4 are
counterfactuals for ℒ3 w.r.t. the goal argument meat. □</p>
      <p>Given a σ-labelling ℒ of an AF Λ and a goal argument g, a counterfactual of ℒ w.r.t. g is a closest
σ-labelling ℒ′ of Λ that changes the acceptance status of g. Hence, counterfactuals explain how to
minimally change a solution to avoid a given acceptance status of a goal argument.</p>
      <p>In contrast, semifactuals give the maximal changes to the considered solution that keep the
status of a goal argument. That is, a semifactual of ℒ w.r.t. goal g is a farthest σ-labelling ℒ′ of Λ that
keeps the acceptance status of argument g.</p>
      <p>Example 3. Continuing with Example 1, suppose now that a customer has tasted menu ℒ3 =
{out(fish), in(meat), out(pasta), out(white), in(red)}, and asks to try completely new flavors
while still maintaining the previous choice of wine as (s)he liked it a lot. Here the chef is interested in
the farthest menus containing red wine. These menus are ℒ2 = {in(fish), out(meat), out(pasta),
out(white), in(red)} and ℒ4 = {out(fish), out(meat), in(pasta), out(white), in(red)}. We
say that the labellings ℒ2 and ℒ4 are semifactuals for the labelling ℒ3 w.r.t. the goal argument red. □</p>
    </sec>
    <sec id="sec-2">
      <title>2. Counterfactual and Semifactual Reasoning</title>
      <p>Intuitively, a counterfactual of a given σ-labelling w.r.t. a given goal argument g is a minimum-distance
labelling altering the acceptance status of g. More in detail, let ⟨A, R⟩ be an AF, σ ∈ {gr, co, st, pr, sst}
a semantics, g ∈ A a goal argument, and ℒ a σ-labelling for ⟨A, R⟩. Then, a labelling ℒ′ ∈ σ(⟨A, R⟩),
where σ(⟨A, R⟩) denotes the set of σ-labellings of ⟨A, R⟩, is a counterfactual of ℒ w.r.t. g if:
(i) ℒ(g) ≠ ℒ′(g), and
(ii) there exists no ℒ′′ ∈ σ(⟨A, R⟩) such that ℒ(g) ≠ ℒ′′(g) and δ(ℒ, ℒ′′) &lt; δ(ℒ, ℒ′),
where δ denotes the distance between labellings.</p>
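      <p>Under the distance used in the examples (the number of arguments on which two labellings disagree, i.e. the Hamming distance, an assumption made explicit here), counterfactuals admit a brute-force Python sketch over the set of all σ-labellings, each represented as a dict from arguments to labels:</p>
      <preformat>
```python
def hamming(l1, l2):
    # number of arguments on which the two labellings disagree
    return sum(1 for a in l1 if l1[a] != l2[a])

def counterfactuals(goal, lab, all_labs):
    """Labellings that flip the goal's status and are at minimum
    distance from lab (a brute-force sketch over all_labs)."""
    flipped = [l2 for l2 in all_labs if l2[goal] != lab[goal]]
    if not flipped:
        return []
    dmin = min(hamming(lab, l2) for l2 in flipped)
    return [l2 for l2 in flipped if hamming(lab, l2) == dmin]
```
      </preformat>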
      <sec id="sec-2-1">
        <p>We use 𝒞ℱσ(g, ℒ) to denote the set of counterfactuals of ℒ w.r.t. g.</p>
        <p>Example 4. Continuing with Example 2, under stable semantics, for the labelling ℒ3 = {out(fish),
in(meat), out(pasta), out(white), in(red)}, we have that ℒ2 = {in(fish), out(meat), out(pasta),
out(white), in(red)} and ℒ4 = {out(fish), out(meat), in(pasta), out(white), in(red)} are its
only counterfactuals w.r.t. argument meat, as their distance, δ(ℒ3, ℒ2) = δ(ℒ3, ℒ4) = 2, is
minimal. The other labelling ℒ1 = {in(fish), out(meat), out(pasta), in(white), out(red)}, for which
ℒ3(meat) ≠ ℒ1(meat), is not at minimum distance, as δ(ℒ3, ℒ1) = 4 &gt; δ(ℒ3, ℒ2). Therefore,
𝒞ℱst(meat, ℒ3) = {ℒ2, ℒ4}. □
The concept of semifactual is, in a sense, symmetrical and complementary to that of a counterfactual.</p>
        <p>Indeed, let ⟨A, R⟩ be an AF, σ ∈ {gr, co, st, pr, sst} a semantics, g ∈ A a goal argument, and ℒ a
σ-labelling for ⟨A, R⟩. Then, ℒ′ ∈ σ(⟨A, R⟩) is a semifactual of ℒ w.r.t. g if:
(i) ℒ(g) = ℒ′(g), and
(ii) there exists no ℒ′′ ∈ σ(⟨A, R⟩) such that ℒ(g) = ℒ′′(g) and δ(ℒ, ℒ′′) &gt; δ(ℒ, ℒ′).</p>
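      <p>Symmetrically, semifactuals keep the goal's status and maximize the distance. A brute-force Python sketch, again assuming the Hamming distance between labellings as in the examples:</p>
      <preformat>
```python
def hamming(l1, l2):
    # number of arguments on which the two labellings disagree
    return sum(1 for a in l1 if l1[a] != l2[a])

def semifactuals(goal, lab, all_labs):
    """Labellings that keep the goal's status and are at maximum
    distance from lab (a brute-force sketch over all_labs).
    Note that lab itself qualifies only if no other labelling
    with the same goal status is strictly farther away."""
    same = [l2 for l2 in all_labs if l2[goal] == lab[goal]]
    dmax = max(hamming(lab, l2) for l2 in same)
    return [l2 for l2 in same if hamming(lab, l2) == dmax]
```
      </preformat>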
      </sec>
      <sec id="sec-2-2">
        <p>We use 𝒮ℱσ(g, ℒ) to denote the set of semifactuals of ℒ w.r.t. g.</p>
        <p>Example 5. Consider the stable labelling ℒ3 = {out(fish), in(meat), out(pasta), out(white),
in(red)} for the AF of Example 3. We have that ℒ2 = {in(fish), out(meat), out(pasta), out(white),
in(red)} and ℒ4 = {out(fish), out(meat), in(pasta), out(white), in(red)} are the only
semifactuals of ℒ3 w.r.t. the argument red, as there is no other st-labelling agreeing on red and having
distance greater than δ(ℒ3, ℒ2) = δ(ℒ3, ℒ4) = 2. In fact, ℒ1 = {in(fish), out(meat), out(pasta),
in(white), out(red)}, having distance δ(ℒ3, ℒ1) = 4, is not a semifactual for ℒ3 w.r.t. red, as
ℒ1(red) ≠ ℒ3(red). Thus, 𝒮ℱst(red, ℒ3) = {ℒ2, ℒ4}. □</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.1. Existence and Verification Problems</title>
        <p>Finding a counterfactual (resp., semifactual) means looking for a minimum (resp., maximum) distance
labelling. The first problem we consider is a natural decision version of this problem.</p>
        <p>Given as input an AF Λ = ⟨A, R⟩, a semantics σ ∈ {co, st, pr, sst}, a goal argument g ∈ A, an
integer k ∈ ℕ, and a σ-labelling ℒ ∈ σ(Λ), CF-EX (resp., SF-EX) is the problem of deciding whether
there exists a labelling ℒ′ ∈ σ(Λ) s.t. ℒ(g) ≠ ℒ′(g) (resp., ℒ(g) = ℒ′(g)) and δ(ℒ, ℒ′) ≤ k (resp.,
δ(ℒ, ℒ′) ≥ k).</p>
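        <p>Given the set of labellings, the two decision problems admit a direct brute-force sketch in Python (a sketch only: the problems are intractable in general; the Hamming distance between labellings is assumed, as in the examples):</p>
      <preformat>
```python
def hamming(l1, l2):
    # number of arguments on which the two labellings disagree
    return sum(1 for a in l1 if l1[a] != l2[a])

def cf_ex(goal, lab, all_labs, k):
    """CF-EX: is there a labelling flipping the goal's status
    at distance at most k from lab?"""
    return any(l2[goal] != lab[goal] and k >= hamming(lab, l2)
               for l2 in all_labs)

def sf_ex(goal, lab, all_labs, k):
    """SF-EX: is there a labelling keeping the goal's status
    at distance at least k from lab?"""
    return any(l2[goal] == lab[goal] and hamming(lab, l2) >= k
               for l2 in all_labs)
```
      </preformat>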
        <p>
          The complexity of the existence problem under counterfactual and semifactual reasoning (i.e., CF-EX
and SF-EX) has recently been proved to be (i) NP-complete for σ ∈ {co, st}; and (ii) Σ₂ᵖ-complete for
σ ∈ {pr, sst} [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>A problem related to CF-EX and SF-EX is that of verifying whether a given labelling ℒ′ is a
counterfactual/semifactual for ℒ and g, and thus that the distance between the two labellings is
minimum/maximum.</p>
        <p>Given as input an AF Λ = ⟨A, R⟩, a semantics σ ∈ {co, st, pr, sst}, a goal argument g ∈ A, a
σ-labelling ℒ ∈ σ(Λ), and a labelling ℒ′, CF-VE (resp., SF-VE) is the problem of deciding whether
ℒ′ belongs to 𝒞ℱσ(g, ℒ) (resp., 𝒮ℱσ(g, ℒ)).</p>
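        <p>A brute-force sketch of verification in Python (Hamming distance assumed, as before); note that it quantifies over all labellings, which hints at why verification is no easier than existence:</p>
      <preformat>
```python
def hamming(l1, l2):
    # number of arguments on which the two labellings disagree
    return sum(1 for a in l1 if l1[a] != l2[a])

def cf_ve(goal, lab, cand, all_labs):
    """CF-VE: cand flips the goal's status and no labelling that
    also flips it is strictly closer to lab."""
    if cand[goal] == lab[goal]:
        return False
    d = hamming(lab, cand)
    return not any(l2[goal] != lab[goal] and d > hamming(lab, l2)
                   for l2 in all_labs)

def sf_ve(goal, lab, cand, all_labs):
    """SF-VE: cand keeps the goal's status and no labelling that
    also keeps it is strictly farther from lab."""
    if cand[goal] != lab[goal]:
        return False
    d = hamming(lab, cand)
    return not any(l2[goal] == lab[goal] and hamming(lab, l2) > d
                   for l2 in all_labs)
```
      </preformat>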
        <p>
          The problems CF-VE and CF-EX (resp., SF-VE and SF-EX) are on the same level of the polynomial
hierarchy. In fact, CF-VE and SF-VE are (i) coNP-complete for σ ∈ {co, st}; and (ii) Π₂ᵖ-complete for
σ ∈ {pr, sst} [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Conclusions</title>
      <p>Several researchers have explored how to deal with explanations in formal argumentation [24, 25, 26,
27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]. Counterfactual reasoning in AF was first introduced
in [39], where, considering sentences of the form “if a were rejected, then b would be accepted”, an AF
Λ is modified into another AF Λ′ such that (i) argument a, which is accepted in Λ, is rejected in Λ′, and
(ii) Λ′ is as close as possible to Λ.</p>
      <p>However, none of the above-mentioned approaches deals with semifactual reasoning, and most of
them manipulate the AF by adding arguments or meta-knowledge. In contrast, in our approach, focusing
on a given AF, novel definitions of counterfactual and semifactual are introduced to help understand
what should be different in a solution (not in the AF) to accommodate a user requirement concerning
a given goal. It turns out that the complexity of the considered problems is not lower than that of the
corresponding classical problems in AF, and is provably higher for fundamental problems such as the
verification problem.</p>
      <p>Although counterfactual- and semifactual-based reasoning suffers from high computational
complexity (as do many other computational problems in argumentation [40, 41, 42, 43, 44, 45, 46, 47]), several
tools and techniques have emerged in the last few years that can tackle such computational issues,
including ASP- and SAT-based solvers. This is witnessed by the several efficient approaches presented
at the ICCMA competition, which aims at nurturing research and development of implementations for
computational models of argumentation.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgements</title>
      <p>We acknowledge the support from project Tech4You (ECS0000009), and PNRR MUR projects FAIR
(PE0000013) and SERICS (PE00000014).</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-6">
      <title>References</title>
      <p>[4] Y. Dimopoulos, J. Mailly, P. Moraitis, Argumentation-based negotiation with incomplete opponent profiles, in: Proc. of International Joint Conference on Autonomous Agents and Multiagent Systems, 2019, pp. 1252–1260.</p>
      <p>[5] H. Prakken, Models of persuasion dialogue, in: Argumentation in Artificial Intelligence, 2009, pp. 281–300.</p>
      <p>[6] P. M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artif. Intell. 77 (1995) 321–358.</p>
      <p>[7] M. Caminada, Semi-stable semantics, in: Proc. of COMMA, 2006, pp. 121–130.</p>
      <p>[8] P. Baroni, M. Caminada, M. Giacomin, An introduction to argumentation semantics, Knowl. Eng. Rev. 26 (2011) 365–410.</p>
      <p>[9] B. Moulin, H. Irandoust, M. Bélanger, G. Desbordes, Explanation and argumentation capabilities: Towards the creation of more persuasive agents, Artificial Intelligence Review 17 (2002) 169–222.</p>
      <p>[10] F. Bex, D. Walton, Combining explanation and argumentation in dialogue, Argument &amp; Computation 7 (2016) 55–68.</p>
      <p>[11] K. Cyras, D. Birch, Y. Guo, F. Toni, R. Dulay, S. Turvey, D. Greenberg, T. Hapuarachchi, Explanations by arbitrated argumentative dispute, Expert Systems with Applications 127 (2019) 141–156.</p>
      <p>[12] T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artif. Intell. 267 (2019) 1–38.</p>
      <p>[13] D. Kahneman, A. Tversky, The simulation heuristic, National Technical Information Service, 1981.</p>
      <p>[14] R. McCloy, R. M. Byrne, Semifactual “even if” thinking, Thinking &amp; Reasoning 8 (2002) 41–67.</p>
      <p>[15] R. Guidotti, Counterfactual explanations and how to find them: literature review and benchmarking, Data Mining and Knowledge Discovery (2022) 1–55.</p>
      <p>[16] P. Barceló, M. Monet, J. Pérez, B. Subercaseaux, Model interpretability through the lens of computational complexity, in: Proc. of Advances in Neural Information Processing Systems, 2020.</p>
      <p>[17] E. M. Kenny, M. T. Keane, On generating plausible counterfactual and semi-factual explanations for deep learning, in: Proc. of AAAI Conference on Artificial Intelligence, 2021, pp. 11575–11585.</p>
      <p>[18] Y. Wu, L. Zhang, X. Wu, Counterfactual fairness: Unidentification, bound and algorithm, in: Proc. of International Joint Conference on Artificial Intelligence (IJCAI), 2019, pp. 1438–1444.</p>
      <p>[19] E. Albini, A. Rago, P. Baroni, F. Toni, Relation-based counterfactual explanations for Bayesian network classifiers, in: Proc. of International Joint Conference on Artificial Intelligence (IJCAI), 2020, pp. 451–457.</p>
      <p>[20] G. Alfano, S. Greco, D. Mandaglio, F. Parisi, R. Shahbazian, I. Trubitsyna, Even-if explanations: Formal foundations, priorities and complexity, in: Proc. of AAAI Conference on Artificial Intelligence, 2025 (to appear).</p>
      <p>[21] P. Romashov, M. Gjoreski, K. Sokol, M. V. Martinez, M. Langheinrich, BayCon: Model-agnostic Bayesian counterfactual generator, in: Proc. of International Joint Conference on Artificial Intelligence (IJCAI), 2022, pp. 23–29.</p>
      <p>[22] S. Dandl, G. Casalicchio, B. Bischl, L. Bothmann, Interpretable regional descriptors: Hyperbox-based local explanations, in: Proc. of Machine Learning and Knowledge Discovery in Databases, volume 14171, 2023, pp. 479–495.</p>
      <p>[23] S. Aryal, M. T. Keane, Even if explanations: Prior work, desiderata &amp; benchmarks for semifactual XAI, in: Proc. of International Joint Conference on Artificial Intelligence (IJCAI), 2023, pp. 6526–6535.</p>
      <p>[24] K. Cyras, A. Rago, E. Albini, P. Baroni, F. Toni, Argumentative XAI: A survey, in: Proc. of International Joint Conference on Artificial Intelligence (IJCAI), 2021, pp. 4392–4399.</p>
      <p>[25] A. Vassiliades, N. Bassiliades, T. Patkos, Argumentation and explainable artificial intelligence: a survey, Knowl. Eng. Rev. 36 (2021) e5.</p>
      <p>[26] R. Craven, F. Toni, Argument graphs and assumption-based argumentation, Artif. Intell. 233 (2016) 1–59.</p>
      <p>[27] P. M. Dung, R. A. Kowalski, F. Toni, Assumption-based argumentation, in: Argumentation in Artificial Intelligence, Springer, 2009, pp. 199–218.</p>
      <p>[28] N. D. Hung, Computing probabilistic assumption-based argumentation, in: Proc. of Pacific Rim International Conference on Artificial Intelligence (PRICAI), 2016, pp. 152–166.</p>
      <p>[29] P. Dung, P. Mancarella, F. Toni, Computing ideal sceptical argumentation, Artif. Intell. 171 (2007) 642–674.</p>
      <p>[30] P. M. Thang, P. M. Dung, N. D. Hung, Towards a common framework for dialectical proof procedures in abstract argumentation, Journal of Logic and Computation 19 (2009) 1071–1109.</p>
      <p>[31] G. Alfano, M. Calautti, S. Greco, F. Parisi, I. Trubitsyna, Explainable acceptance in probabilistic abstract argumentation: Complexity and approximation, in: Proc. of International Conference on Principles of Knowledge Representation and Reasoning (KR), 2020, pp. 33–43.</p>
      <p>[32] G. Alfano, M. Calautti, S. Greco, F. Parisi, I. Trubitsyna, Explainable acceptance in probabilistic and incomplete abstract argumentation frameworks, Artif. Intell. 323 (2023) 103967.</p>
      <p>[33] R. Baumann, M. Ulbricht, Choices and their consequences - explaining acceptable sets in abstract argumentation frameworks, in: Proc. of International Conference on Principles of Knowledge Representation and Reasoning (KR), 2021, pp. 110–119.</p>
      <p>[34] M. Ulbricht, J. P. Wallner, Strong explanations in abstract argumentation, in: Proc. of AAAI Conference on Artificial Intelligence, 2021, pp. 6496–6504.</p>
      <p>[35] G. Brewka, M. Ulbricht, Strong explanations for nonmonotonic reasoning, in: Description Logic, Theory Combination, and All That, volume 11560 of Lecture Notes in Computer Science, 2019, pp. 135–146.</p>
      <p>[36] G. Brewka, M. Thimm, M. Ulbricht, Strong inconsistency, Artif. Intell. 267 (2019) 78–117.</p>
      <p>[37] Z. G. Saribatur, J. P. Wallner, S. Woltran, Explaining non-acceptability in abstract argumentation, in: Proc. of European Conference on Artificial Intelligence (ECAI), 2020, pp. 881–888.</p>
      <p>[38] O. Cocarascu, A. Rago, F. Toni, Extracting dialogical explanations for review aggregations with argumentative dialogical agents, in: Proc. of AAMAS, 2019, pp. 1261–1269.</p>
      <p>[39] C. Sakama, Counterfactual reasoning in argumentation frameworks, in: Proc. of COMMA, 2014, pp. 385–396.</p>
      <p>[40] G. Alfano, S. Greco, F. Parisi, On scaling the enumeration of the preferred extensions of abstract argumentation frameworks, in: Proc. of ACM/SIGAPP Symposium on Applied Computing (SAC), 2019, pp. 1147–1153.</p>
      <p>[41] G. Alfano, S. Greco, F. Parisi, Incremental computation in dynamic argumentation frameworks, IEEE Intell. Syst. 36 (2021) 80–86.</p>
      <p>[42] G. Alfano, S. Greco, F. Parisi, I. Trubitsyna, Preferences and constraints in abstract argumentation, in: Proc. of International Joint Conference on Artificial Intelligence (IJCAI), ijcai.org, 2023, pp. 3095–3103.</p>
      <p>[43] G. Alfano, S. Greco, F. Parisi, I. Trubitsyna, On acceptance conditions in abstract argumentation frameworks, Inf. Sci. 625 (2023) 757–779.</p>
      <p>[44] G. Alfano, S. Greco, F. Parisi, I. Trubitsyna, Epistemic abstract argumentation framework: Formal foundations, computation and complexity, in: Proc. of International Conference on Autonomous Agents and Multiagent Systems (AAMAS), ACM, 2023, pp. 409–417.</p>
      <p>[45] G. Alfano, S. Greco, F. Parisi, I. Trubitsyna, Abstract argumentation framework with conditional preferences, in: Proc. of AAAI Conference on Artificial Intelligence, 2023, pp. 6218–6227.</p>
      <p>[46] G. Alfano, S. Greco, D. Mandaglio, F. Parisi, I. Trubitsyna, Abstract argumentation frameworks with strong and weak constraints, Artif. Intell. 336 (2024) 104205.</p>
      <p>[47] G. Alfano, S. Greco, F. Parisi, I. Trubitsyna, Complexity of credulous and skeptical acceptance in epistemic argumentation framework, in: Proc. of AAAI Conference on Artificial Intelligence, 2024, pp. 10423–10432.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Alfano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Parisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Trubitsyna</surname>
          </string-name>
          ,
          <article-title>Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation</article-title>
          ,
          <source>in: Proc. of International Conference on Principles of Knowledge Representation and Reasoning (KR)</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>14</fpage>
          -
          <lpage>26</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gabbay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Giacomin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Simari</surname>
          </string-name>
          , M. Thimm (Eds.),
          <source>Handbook of Formal Argumentation</source>
          , volume
          <volume>2</volume>
          ,
          <publisher-name>College Publications</publisher-name>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Amgoud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dimopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Moraitis</surname>
          </string-name>
          ,
          <article-title>A unified and general framework for argumentation-based negotiation</article-title>
          ,
          <source>in: Proc. of International Joint Conference on Autonomous Agents and Multiagent Systems</source>
          ,
          <year>2007</year>
          , p.
          <fpage>158</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>