                         Preserving Privacy in a (Timed) Concurrent Language for
                         Argumentation
                         Stefano Bistarelli1 , Maria Chiara Meo2 and Carlo Taticchi1,*
                         1
                             Department of Mathematics and Computer Science, University of Perugia, Italy
                         2
                             Department of Economics, University “G. d’Annunzio” of Chieti-Pescara, Italy


                                        Abstract
                                        Modelling the behaviour of multiple agents concurrently interacting and reasoning in a dynamic environment
                                        is a challenging task. It necessitates tools capable of effectively capturing various forms of interaction, such as
                                        persuasion and deliberation, while aiding agents in decision-making or consensus-building. In [1], we extended a
                                        language for modelling concurrent interactions between agents (tcla), which allowed us to specify agents equipped
                                        with a local argument memory and to reason with private knowledge. Furthermore, an initial formalisation of a
                                        multi-agent decision problem that preserves privacy has been provided. To illustrate the language’s capabilities,
                                        in this paper, we give a complete formalisation of a privacy-preserving multi-agent decision problem, and we
                                        demonstrate how it can be employed to define a general (correct and complete) translation function that generates
                                        a tcla program from a multi-agent decision-making process. Additionally, we present an application example
                                        that models a privacy-preserving multi-agent decision-making process to showcase an instance of our general
                                        translation function.

                                        Keywords
                                        Computational Argumentation, Concurrency, Locality




                         1. Introduction
                         Intelligent agents can exploit argumentation techniques to accomplish complex interactions like, for
                         instance, negotiation [2, 3] and persuasion [4, 5]. The Timed Concurrent Language for Argumentation
                         (tcla) [6, 7] offers constructs to implement such interactions. Agents involved in the process share
                         an argumentation store that serves as a knowledge base and where arguments and attacks represent
                         the agreed beliefs. The framework can be changed via a set of primitives that allow the addition and
                         removal of arguments and attacks. The language makes use of two kinds of expressions: a syntactic
                         check that verifies if a given set of arguments and attacks is contained in the knowledge base, and
                         semantic test operations that retrieve information about the acceptability of arguments in the AF.
                            The presented study extends the work in [1], where we introduced the formalism of local stores
                         and provided operational semantics for the locality operator within tcla. The primary focus of [1]
                         was to establish the theoretical foundation and mechanisms for autonomous agents to reason with
                         private information, allowing them to hide data they are unwilling to disclose and only reveal the
                         necessary information to reach desired outcomes. In contrast, this paper advances that foundational
                         work by defining a general translation function that generates a tcla program from a multi-agent
                         decision-making process. This implements the theoretical constructs introduced earlier and bridges
                         the gap between theory and practical applications. Additionally, we include practical examples to
                         demonstrate the effectiveness of tcla, with particular emphasis on using local stores. These examples
                         showcase how the enhanced framework can be applied to complex, real-world scenarios, providing a
                         more comprehensive evaluation of its capabilities and practical benefits.




                          CILC 2024: 39th Italian Conference on Computational Logic, June 26-28, 2024, Rome, Italy
                         *
                           Corresponding author.
                           stefano.bistarelli@unipg.it (S. Bistarelli); mariachiara.meo@unich.it (M. C. Meo); carlo.taticchi@unipg.it (C. Taticchi)
                           ORCID: 0000-0001-7411-9678 (S. Bistarelli); 0000-0002-3700-3788 (M. C. Meo); 0000-0003-1260-4672 (C. Taticchi)
                                       © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
2. Background
In this section, we recall some fundamental notions concerning Computational Argumentation. The
first definition we need is that of Abstract Argumentation Framework (AF) [8].

Definition 1. Let 𝑈 be the set of all possible arguments, which we refer to as the universe.1 An Abstract
Argumentation Framework is a pair ⟨𝐴𝑟𝑔, 𝑅⟩ where 𝐴𝑟𝑔 ⊆ 𝑈 is a set of adopted arguments and 𝑅 is a
binary relation on 𝐴𝑟𝑔 (representing attacks among adopted arguments).

   Given an AF, we aim to identify subsets of acceptable arguments using argumentation semantics.
Different semantics, such as admissible, complete, stable, semi-stable, preferred, and grounded (denoted
as adm, com, stb, sst, prf, and gde), have been introduced to reflect desirable qualities for argument
sets [9, 8]. Labelling-based semantics offer an approach to identify acceptable arguments by associating
an AF with a subset of possible labellings [10].

Definition 2. A labelling of an AF 𝐹 = ⟨𝐴𝑟𝑔, 𝑅⟩ is a total function 𝐿 : 𝐴𝑟𝑔 → {in, out, undec}.
Moreover, 𝐿 is an admissible labelling for 𝐹 when ∀𝑎 ∈ 𝐴𝑟𝑔

        • 𝐿(𝑎) = in =⇒ ∀𝑏 ∈ 𝐴𝑟𝑔 | (𝑏, 𝑎) ∈ 𝑅.𝐿(𝑏) = out;
        • 𝐿(𝑎) = out ⇐⇒ ∃𝑏 ∈ 𝐴𝑟𝑔 | (𝑏, 𝑎) ∈ 𝑅 ∧ 𝐿(𝑏) = in.
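As a concrete reading of Definition 2, the following sketch (our own illustration, not part of the paper's formal apparatus) checks a labelling against the two conditions; representing an AF as a set of argument names plus a set of (attacker, target) pairs is an encoding assumption.

```python
# Illustrative sketch (not from the paper): an AF as a set of argument
# names plus a set of (attacker, target) pairs; L maps arguments to labels.

def is_admissible_labelling(args, attacks, L):
    for a in args:
        attackers = {b for (b, t) in attacks if t == a}
        has_in_attacker = any(L[b] == "in" for b in attackers)
        # an 'in' argument must have all of its attackers labelled 'out'
        if L[a] == "in" and any(L[b] != "out" for b in attackers):
            return False
        # an argument is 'out' exactly when some attacker is 'in'
        if (L[a] == "out") != has_in_attacker:
            return False
    return True
```

For instance, for 𝐹 = ⟨{𝑎, 𝑏}, {(𝑏, 𝑎)}⟩ the labelling assigning out to 𝑎 and in to 𝑏 is admissible, while labelling both arguments in is not.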

   In particular, arguments labelled in are acceptable, while the others are rejected. Criteria similar to those
shown in Definition 2 can be used to capture other semantics [10]. In the following, we will write ℒ𝐹𝜎
to identify the set of all possible labellings of 𝐹 with respect to the semantics 𝜎. Besides computing the
possible labellings with respect to a certain semantics 𝜎, one of the most common tasks performed on
AFs is to decide whether an argument 𝑎 is accepted (labelled as in) in some labelling of ℒ𝐹𝜎 or in all
labellings. In the former case, we say that 𝑎 is credulously accepted with respect to 𝜎; in the latter, 𝑎 is
instead sceptically accepted with respect to 𝜎. In the following, we say that an argument 𝑎 is admissible
(in 𝐴𝐹 ) if and only if 𝑎 is credulously accepted with respect to adm.
   In the following, we summarise the main features of tcla, including its syntax and operational
semantics. For more details, we direct the reader to the work in [1, 6]. Communication between tcla
agents is implemented via a shared memory consisting of an AF which agents can access and modify.
All agents are synchronised via a shared global clock, tracking the simultaneous execution of concurrent
agents. We present the syntax of tcla in Table 1, where 𝑃 denotes a generic process, 𝐶 a sequence of
procedure declarations (or clauses), 𝐴 a generic agent and 𝐸 a generic guarded agent.

Table 1
tcla syntax.
            𝑃 ::= let 𝐶 in 𝐴
            𝐶 ::= 𝑝(𝑥) :: 𝐴 | 𝐶, 𝐶
            𝐴 ::= success | failure | add(𝐴𝑟𝑔, 𝑅) → 𝐴 | rmv(𝐴𝑟𝑔, 𝑅) → 𝐴 | 𝐸 | 𝐴 ‖ 𝐴 | new 𝑆 in 𝐴 | 𝑝(𝑥)
            𝐸 ::= check𝑡 (𝐴𝑟𝑔, 𝑅) → 𝐴 | c-test𝑡 (𝑎, 𝑙, 𝜎) → 𝐴 | s-test𝑡 (𝑎, 𝑙, 𝜎) → 𝐴 | 𝐸 + 𝐸 | 𝐸 +𝑃 𝐸

where 𝐴𝑟𝑔 and 𝑆 are supposed to be sets of arguments, 𝑅 a set of attacks, 𝑎 an argument, 𝑙 ∈ {in, out, undec},
                                       𝜎 ∈ {adm, com, stb, prf, gde}.

   In a process 𝑃 = 𝑙𝑒𝑡 𝐶 𝑖𝑛 𝐴, 𝐴 is the initial agent to be executed in the context of the set of
declarations 𝐶. A declaration of the form 𝐶, 𝐶 corresponds to the concatenation of multiple procedure
declarations. To specify how many times the execution can be repeated, guarded agents (i.e., the check,
c-test and s-test operations) are endowed with a timeout 𝑡 ∈ N ∪ {∞} stating after how many cycles of
the global clock the operation will expire and terminate with failure. Time passes between all concurrent
1
    The set 𝑈 is not present in the original definition by Dung, and we introduce it for our convenience to distinguish all possible
    arguments from the adopted ones.
agents on the same global clock via a timeout environment 𝑇 , as specified below. Let ℐ be a set of
(timeout) identifiers. A timeout environment is a partial mapping 𝑇 : ℐ ⇀ N ∪ {∞} such that the set
𝑑𝑜𝑚(𝑇 ) = {𝐼 ∈ ℐ | 𝑇 (𝐼) ∈ N ∪ {∞}} (domain of 𝑇 ) is finite. 𝑇0 is the empty timeout environment
(𝑑𝑜𝑚(𝑇0 ) = ∅).
   We introduce some utility functions as we need to manipulate 𝑇 , for instance, to insert new timeouts
or update a timer. If 𝑇1 and 𝑇2 are timeout environments such that for each 𝐼 ∈ 𝑑𝑜𝑚(𝑇1 ) ∩ 𝑑𝑜𝑚(𝑇2 )
we have that 𝑇1 (𝐼) = 𝑇2 (𝐼), then 𝑇1 ∪ 𝑇2 is the timeout environment such that 𝑑𝑜𝑚(𝑇1 ∪ 𝑇2 ) =
𝑑𝑜𝑚(𝑇1 ) ∪ 𝑑𝑜𝑚(𝑇2 ) and for each 𝐼 ∈ 𝑑𝑜𝑚(𝑇1 ∪ 𝑇2 )
                                              {︃
                                                𝑇1 (𝐼) if 𝐼 ∈ 𝑑𝑜𝑚(𝑇1 )
                              (𝑇1 ∪ 𝑇2 )(𝐼) =
                                                𝑇2 (𝐼) otherwise

We denote by 𝑇 [𝐼¯ : ¯𝑡] an update of 𝑇 , with a possibly enlarged domain, namely
                                                     {︃
                                                        𝑇 (𝐼) if 𝐼 ̸= 𝐼¯
                                    𝑇 [𝐼¯ : ¯𝑡](𝐼) =
                                                       ¯𝑡     otherwise

The passing of time is marked by decreasing timers in a timeout environment 𝑇 . At each step of the
execution, all timers are decreased according to the function 𝑑𝑒𝑐(𝑇 ) such that 𝑑𝑜𝑚(𝑑𝑒𝑐(𝑇 )) = 𝑑𝑜𝑚(𝑇 )
and                                    {︃
                                         𝑇 (𝐼) − 1 if 0 < 𝑇 (𝐼) ∈ N
                          𝑑𝑒𝑐(𝑇 )(𝐼) =
                                         𝑇 (𝐼)        if 𝑇 (𝐼) = 0 or 𝑇 (𝐼) = ∞
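The three operations on timeout environments can be mirrored directly on finite maps. The sketch below is our own illustration (the names union, update and dec are ours), with math.inf standing in for ∞.

```python
import math

# A timeout environment as a dict from identifiers to timers in N ∪ {∞}.

def union(T1, T2):
    # T1 ∪ T2: defined only when the environments agree on shared identifiers
    assert all(T1[I] == T2[I] for I in T1.keys() & T2.keys())
    return {**T2, **T1}

def update(T, I, t):
    # T[I : t]: overwrite (or insert) the timer for identifier I
    return {**T, I: t}

def dec(T):
    # decrease every running timer; expired (0) and infinite timers stay put
    return {I: t - 1 if 0 < t != math.inf else t for I, t in T.items()}
```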
  Below, we outline the functioning of all tcla operators, focusing in particular on the management of
the timeouts and the composition of concurrent agents.
Notation 1. Let 𝐹 = ⟨𝐴𝑟𝑔, 𝑅⟩ be an AF and 𝑆 a set of arguments. We define:
    • 𝑅|𝑆 = {(𝑎, 𝑏) ∈ 𝑅 | 𝑎 ∈ 𝑆 or 𝑏 ∈ 𝑆};
    • 𝑅‖𝑆 = {(𝑎, 𝑏) ∈ 𝑅 | 𝑎 ∈ 𝑆 and 𝑏 ∈ 𝑆};
    • 𝐹 ↓ 𝑆 = ⟨𝐴𝑟𝑔 ∩ 𝑆 ′ , 𝑅|𝑆 ⟩ where 𝑆 ′ = 𝑆 ∪ {𝑏 | (𝑏, 𝑐) ∈ 𝑅|𝑆 or (𝑐, 𝑏) ∈ 𝑅|𝑆 };
    • 𝐹 ↑ 𝑆 = ⟨𝐴𝑟𝑔 ∖ 𝑆, 𝑅 ∖ 𝑅|𝑆 ⟩.
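Notation 1 can be read as plain set operations; here is a sketch under the same encoding assumption as before (an AF as a pair of a set of arguments and a set of attack pairs; the function names are ours).

```python
# R|S: attacks with at least one endpoint in S
def att_touching(R, S):
    return {(a, b) for (a, b) in R if a in S or b in S}

# R‖S: attacks with both endpoints in S
def att_within(R, S):
    return {(a, b) for (a, b) in R if a in S and b in S}

# F ↓ S: restrict F to S together with the neighbours reached via R|S
def down(F, S):
    args, R = F
    RS = att_touching(R, S)
    S1 = S | {x for e in RS for x in e}
    return (args & S1, RS)

# F ↑ S: delete S and every attack touching it
def up(F, S):
    args, R = F
    return (args - S, R - att_touching(R, S))
```

For 𝐹 = ⟨{𝑎, 𝑏, 𝑐}, {(𝑎, 𝑏), (𝑏, 𝑐)}⟩, 𝐹 ↓ {𝑏} keeps all three arguments (𝑎 and 𝑐 are neighbours of 𝑏), while 𝐹 ↑ {𝑏} leaves ⟨{𝑎, 𝑐}, ∅⟩.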
   The operational model of tcla processes can be formally described by a transition system 𝑇 =
(Conf , −→), where we assume that each transition step takes exactly one time unit. Configurations
in Conf are triples, each consisting of a process 𝑃 , an abstract argumentation framework
𝐴𝐹 representing a knowledge base and a timeout environment 𝑇 . The transition relation −→⊆
Conf × Conf is the least relation satisfying the rules in Tables 2 - 8, and it characterises the (temporal)
evolution of the system. So, ⟨𝐴, 𝐹, 𝑇 ⟩ −→ ⟨𝐴′ , 𝐹 ′ , 𝑇 ′ ⟩ means that, if at time 𝑡 we have the process 𝐴,
the framework 𝐹 and the time environment 𝑇 , then at time 𝑡 + 1 we have the process 𝐴′ , the framework
𝐹 ′ and the time environment 𝑇 ′ . In the following, we will usually write a tcla process 𝑃 = 𝐶.𝐴 as the
corresponding agent 𝐴, omitting 𝐶 when not required by the context.
   Rules Ad and Re of Table 2 modify the store by adding and removing, respectively, arguments and
attacks. Attacks can only be added between arguments that will be in the store after the execution of
the add operation. On the other hand, when an argument is removed, also the attacks involving that
argument are removed.

Table 2
Add and remove semantics.
        ⟨add(𝐴𝑟𝑔 ′ , 𝑅′ ) → 𝐴, ⟨𝐴𝑟𝑔, 𝑅⟩, 𝑇 ⟩ −→ ⟨𝐴, ⟨𝐴𝑟𝑔 ∪ 𝐴𝑟𝑔 ′ , (𝑅 ∪ 𝑅′ )‖(𝐴𝑟𝑔∪𝐴𝑟𝑔′ ) ⟩, dec(𝑇 )⟩   Ad
                  ′   ′                                           ′        ′
        ⟨rmv(𝐴𝑟𝑔 , 𝑅 ) → 𝐴, ⟨𝐴𝑟𝑔, 𝑅⟩, 𝑇 ⟩ −→ ⟨𝐴, ⟨𝐴𝑟𝑔 ∖ 𝐴𝑟𝑔 , (𝑅 ∖ 𝑅 )‖(𝐴𝑟𝑔∖𝐴𝑟𝑔′ ) ⟩, dec(𝑇 )⟩         Re
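The store updates of rules Ad and Re amount to set unions and differences followed by pruning attacks whose endpoints are absent (the ‖ restriction). A sketch with our own function names:

```python
def keep_within(R, args):
    # keep only attacks whose endpoints are both present (the ‖ restriction)
    return {(a, b) for (a, b) in R if a in args and b in args}

def add(F, new_args, new_atts):
    # rule Ad: attacks survive only between arguments present after the update
    args, R = F
    args2 = args | new_args
    return (args2, keep_within(R | new_atts, args2))

def rmv(F, del_args, del_atts):
    # rule Re: removing an argument also removes the attacks involving it
    args, R = F
    args2 = args - del_args
    return (args2, keep_within(R - del_atts, args2))
```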

   Table 3 presents the semantics for the check operation. When a rule is executed, all timers in the
timeout environment 𝑇 decrease after the possible introduction of new timeouts. Rules Ch1 and Ch2
govern the first execution of a check, before a timeout identifier has been inserted in 𝑇 : rule Ch1
progresses the execution if the guard immediately succeeds, while rule Ch2, when the condition fails
and the timer has not expired, adds a new timeout to 𝑇 and repeats the check under that timeout. Rule
Ch3 progresses the execution if the guard succeeds and the existing timer has not expired. Rule Ch4
repeats a failed check without adding new timeouts. If a timer hits 0, the agent fails per rules Ch5
and Ch6.

Table 3
Check semantics. In the rules, 𝐹 = ⟨𝐴𝑟𝑔, 𝑅⟩.

                                       𝐴𝑟𝑔 ′ ⊆ 𝐴𝑟𝑔 ∧ 𝑅′ ⊆ 𝑅, 𝑡 > 0
                                                                                                         Ch1
                             ⟨check𝑡 (𝐴𝑟𝑔 ′ , 𝑅′ ) → 𝐴, 𝐹, 𝑇 ⟩ −→ ⟨𝐴, 𝐹, dec(𝑇 )⟩
                                       ¬(𝐴𝑟𝑔 ′ ⊆ 𝐴𝑟𝑔 ∧ 𝑅′ ⊆ 𝑅), 𝑡 > 0
             ⟨check𝑡 (𝐴𝑟𝑔 ′ , 𝑅′ ) → 𝐴, 𝐹, 𝑇 ⟩ −→ ⟨check 𝐼 (𝐴𝑟𝑔 ′ , 𝑅′ ) → 𝐴, 𝐹, dec(𝑇 [𝐼 : 𝑡])⟩         Ch2
                                      where 𝐼 is a fresh timeout identifier

                                     𝐴𝑟𝑔 ′ ⊆ 𝐴𝑟𝑔 ∧ 𝑅′ ⊆ 𝑅, 𝑇 (𝐼) > 0
                                                                                                         Ch3
                            ⟨check 𝐼 (𝐴𝑟𝑔 ′ , 𝑅′ ) → 𝐴, 𝐹, 𝑇 ⟩ −→ ⟨𝐴, 𝐹, dec(𝑇 )⟩

                                     ¬(𝐴𝑟𝑔 ′ ⊆ 𝐴𝑟𝑔 ∧ 𝑅′ ⊆ 𝑅), 𝑇 (𝐼) > 0
                                                                                                         Ch4
               ⟨check 𝐼   (𝐴𝑟𝑔 ′ , 𝑅′ ) → 𝐴, 𝐹, 𝑇 ⟩ −→ ⟨check     ′  ′
                                                           𝐼 (𝐴𝑟𝑔 , 𝑅 ) → 𝐴, 𝐹, dec(𝑇 )⟩



                          ⟨check0 (𝐴𝑟𝑔 ′ , 𝑅′ ) → 𝐴, 𝐹, 𝑇 ⟩ −→ ⟨failure, 𝐹, dec(𝑇 )⟩                     Ch5


                                                    𝑇 (𝐼) = 0
                                                                                                         Ch6
                          ⟨check 𝐼   (𝐴𝑟𝑔 ′ , 𝑅′ ) → 𝐴, 𝐹, 𝑇 ⟩ −→ ⟨failure, 𝐹, dec(𝑇 )⟩

   Credulous and sceptical test operators validate the acceptability of arguments in the store. Their
rules follow the same idea as those for the check operation, with the only exception of the condition
to satisfy. The credulous test checks if at least one labelling 𝐿 in ℒ𝐹𝜎 assigns label 𝑙 to argument 𝑎
(∃𝐿 ∈ ℒ𝐹𝜎 | 𝐿(𝑎) = 𝑙 for c-test 𝐼 (𝑎, 𝑙, 𝜎)). The sceptical test requires all labellings to assign the same
label 𝑙 to 𝑎 (∀𝐿 ∈ ℒ𝐹𝜎 .𝐿(𝑎) = 𝑙 for s-test 𝐼 (𝑎, 𝑙, 𝜎)).
   Tables 4 and 5 detail rules for nondeterminism and if-then-else constructs. ℰ represents guarded
agents, with ℰ𝑓 comprising those with timers set to zero, ensuring failure. Rule ND1 enables two
non-failing expressions 𝐸1 and 𝐸2 to simultaneously transition into guarded expressions 𝐸1′ and 𝐸2′ ,
producing 𝐸1′ + 𝐸2′ . Rule ND2 progresses with 𝐸1 if it becomes non-guarded. Rule ND3 discards any
failing agent, continuing with the other.

Table 4
Nondeterminism semantics.

         ⟨𝐸1 , 𝐹, 𝑇 ⟩ −→ ⟨𝐸1′ , 𝐹, 𝑇1 ⟩, ⟨𝐸2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐸2′ , 𝐹, 𝑇2 ⟩, 𝐸1 , 𝐸2 ̸∈ ℰ𝑓 , 𝐸1′ , 𝐸2′ ∈ ℰ
                                                                                                         ND1
                              ⟨𝐸1 + 𝐸2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐸1′ + 𝐸2′ , 𝐹, 𝑇1 ∪ 𝑇2 ⟩

                              ⟨𝐸1 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴1 , 𝐹, 𝑇 ′ ⟩, 𝐸1 ̸∈ ℰ𝑓 , 𝐴1 ̸∈ ℰ
                                                                                                         ND2
                                     ⟨𝐸1 + 𝐸2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴1 , 𝐹, 𝑇 ′ ⟩

                                     𝐸1 ∈ ℰ𝑓 , ⟨𝐸2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴2 , 𝐹, 𝑇 ′ ⟩
                                                                                                         ND3
                                      ⟨𝐸1 + 𝐸2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴2 , 𝐹, 𝑇 ′ ⟩

   The operator in Table 5 implements a non-commutative if-then-else: if 𝐸1 +𝑃 𝐸2 sees 𝐸1 transition
to 𝐸1′ , execution moves to 𝐸1′ +𝑃 𝐸2 (rule If1). If 𝐸1 becomes a non-guarded agent, 𝐸2 is discarded
(rule If2). If 𝐸1 fails, execution shifts to 𝐸2 (rule If3).
Table 5
If-then-else semantics.

                                ⟨𝐸1 , 𝐹, 𝑇 ⟩ −→ ⟨𝐸1′ , 𝐹, 𝑇 ′ ⟩, 𝐸1 ̸∈ ℰ𝑓 , 𝐸1′ ∈ ℰ
                                                                                                           If1
                                  ⟨𝐸1 +𝑃 𝐸2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐸1′ +𝑃 𝐸2 , 𝐹, 𝑇 ′ ⟩

                                ⟨𝐸1 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴1 , 𝐹, 𝑇 ′ ⟩, 𝐸1 ̸∈ ℰ𝑓 , 𝐴1 ̸∈ ℰ
                                                                                                           If2
                                       ⟨𝐸1 +𝑃 𝐸2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴1 , 𝐹, 𝑇 ′ ⟩

                                      𝐸1 ∈ ℰ𝑓 , ⟨𝐸2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴2 , 𝐹, 𝑇 ′ ⟩
                                                                                                           If3
                                       ⟨𝐸1 +𝑃 𝐸2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴2 , 𝐹, 𝑇 ′ ⟩


   The procedure call in Table 6 takes a single parameter 𝑥 that can be an argument, a label (in, out,
undec), a semantics 𝜎, or a time instant 𝑡. This setup can be modified to include more parameters or to
handle parameterless procedures. Execution involves replacing instances of 𝑝 with agent 𝐴 as defined
in the procedure declaration within context 𝐶.

Table 6
Procedure call semantics.
                  ⟨𝑝(𝑦), 𝐹, 𝑇 ⟩ −→ ⟨𝐴[𝑦/𝑥], 𝐹, dec(𝑇 )⟩ with 𝑝(𝑥) :: 𝐴 and 𝑥 ∈ {𝑎, 𝑙, 𝜎, 𝑡}                PC

   Rule TC in Table 7 depicts the true concurrency operator, where concurrent agents succeed only if
all components succeed. We use *(𝐹, 𝐹 ′ , 𝐹 ′′ ) := (𝐹 ′ ∩ 𝐹 ′′ ) ∪ ((𝐹 ′ ∪ 𝐹 ′′ ) ∖ 𝐹 ) to handle concurrent
additions and removals of arguments: if an argument 𝑎 is simultaneously added and removed, its final
status depends on its initial presence: added if absent, removed if present.

Table 7
True concurrency semantics.
                     ⟨𝐴1 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴′1 , 𝐹 ′ , 𝑇1 ⟩, ⟨𝐴2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴′2 , 𝐹 ′′ , 𝑇2 ⟩
                                                                                                           TC
                       ⟨𝐴1 ‖ 𝐴2 , 𝐹, 𝑇 ⟩ −→ ⟨𝐴′1 ‖ 𝐴′2 , *(𝐹, 𝐹 ′ , 𝐹 ′′ ), 𝑇1 ∪ 𝑇2 ⟩
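The combinator * used by rule TC has a direct set-theoretic reading, applied componentwise to the argument and attack sets; the sketch below (our own illustration) shows it on a single set component.

```python
def combine(F, F1, F2):
    # *(F, F', F''): keep what both branches kept, add what either branch
    # introduced; an element touched by only one branch is added if it was
    # absent from F and dropped if it was present.
    return (F1 & F2) | ((F1 | F2) - F)
```

Starting from 𝐹 = {𝑎}, a branch that removes 𝑎 (yielding ∅) composed with a branch that adds 𝑏 (yielding {𝑎, 𝑏}) gives {𝑏}: the removal of 𝑎 and the addition of 𝑏 both take effect.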

   In Tables 4 and 7, we have omitted the symmetric rules for the choice operator + and the parallel
composition ‖. Indeed, + is commutative, so 𝐸1 + 𝐸2 produces the same result as 𝐸2 + 𝐸1 . The same
is true for ‖. Moreover, 𝑠𝑢𝑐𝑐𝑒𝑠𝑠 and 𝑓 𝑎𝑖𝑙𝑢𝑟𝑒 are the identity and the absorbing elements, respectively,
under the parallel composition ‖.
   The rule in Table 8 defines the agent new 𝑆 in 𝐴 such that 𝑆 ⊆ 𝐴𝑟𝑔 is local to 𝐴, concealing 𝑆 from
the external argumentation framework (𝐴𝐹 ) and vice versa. This results in agent 𝐴 operating within
(𝐴𝐹 ↑ 𝑆) ∪ 𝐴𝐹𝑙𝑜𝑐 , where 𝐴𝐹𝑙𝑜𝑐 holds local information on 𝑆. The syntax extends to new 𝑆 in 𝐴^{𝐴𝐹𝑙𝑜𝑐 },
initiating with an empty local 𝐴𝐹𝑙𝑜𝑐 . In this setup, new 𝑆 in 𝑠𝑢𝑐𝑐𝑒𝑠𝑠 and new 𝑆 in 𝑓 𝑎𝑖𝑙𝑢𝑟𝑒 return the
agents 𝑠𝑢𝑐𝑐𝑒𝑠𝑠 and 𝑓 𝑎𝑖𝑙𝑢𝑟𝑒 respectively.

Table 8
Locality semantics.
                                   ⟨𝐴, (𝐴𝐹 ↑ 𝑆) ∪ 𝐴𝐹𝑙𝑜𝑐 , 𝑇 ⟩ −→ ⟨𝐵, 𝐴𝐹 ′ , 𝑇 ′ ⟩
                 ⟨new 𝑆 in 𝐴^{𝐴𝐹𝑙𝑜𝑐 } , 𝐴𝐹, 𝑇 ⟩ −→ ⟨new 𝑆 in 𝐵^{𝐴𝐹 ′ ↓𝑆} , (𝐴𝐹 ′ ↑ 𝑆) ∪ (𝐴𝐹 ′′ ↓ 𝑆), 𝑇 ′ ⟩      Loc
                    where 𝐴𝐹 = ⟨𝐴𝑟𝑔, 𝑅⟩, 𝐴𝐹 ′ = ⟨𝐴𝑟𝑔 ′ , 𝑅′ ⟩ and 𝐴𝐹 ′′ = ⟨𝐴𝑟𝑔, 𝑅‖𝐴𝑟𝑔′ ∪𝑆 ⟩

  In the next section, given 𝑆 = {𝑎1 , . . . , 𝑎𝑛 } ⊆ 𝐴𝑟𝑔, in order to simplify the presentation of the results,
we use ∃𝑥 ∈ 𝑆 | 𝑐-𝑡𝑒𝑠𝑡𝑡 (𝑥, 𝑙, 𝜎) → 𝐴 as a shorthand of (𝑐-𝑡𝑒𝑠𝑡𝑡 (𝑎1 , 𝑙, 𝜎) → 𝐴) + (𝑐-𝑡𝑒𝑠𝑡𝑡 (𝑎2 , 𝑙, 𝜎) →
𝐴) + · · · + (𝑐-𝑡𝑒𝑠𝑡𝑡 (𝑎𝑛 , 𝑙, 𝜎) → 𝐴).
  A possible use case for tcla can be identified in modelling Multi-Agent Decision Making with
Privacy Preserved (DMPP) in which agents need to communicate with other agents to make socially
optimal decisions but, at the same time, have some private information that they do not want to
share. This problem can be instantiated as done in other works like [11] as a debate in a multi-agent
environment where argumentation techniques are exploited for arriving at socially optimal outcomes by
only disclosing the “necessary” and “disclosable (public)” information. A DMPP problem is formalised
as follows.

Definition 3. [1] A Multi-Agent Decision Making with Privacy Preserved problem (DMPP) is a triple
⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩, where

    • 𝐴𝑔 = {𝐴𝑔𝑒𝑛𝑡1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 } is a finite set of agents;
    • 𝐴𝑐𝑡 = {𝑎1 , . . . , 𝑎𝑀 } is a finite set of available actions for the agents;
    • 𝑆𝑃 = {⟨𝐴𝑔𝑒𝑛𝑡1 : 𝑎1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 : 𝑎𝑁 ⟩ | {𝑎1 , . . . , 𝑎𝑁 } ⊆ 𝐴𝑐𝑡} is the set of all strategic profiles,
      namely the set of all the possible tuples of actions (one for each agent);
    • 𝑆𝑜𝑙 ⊆ 𝑆𝑃 is the set of acceptable solutions to the problem.
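Under a straightforward encoding (our own, with a hypothetical acceptability criterion standing in for the problem specification), the strategic profiles of Definition 3 are the cartesian product of the agents' action choices:

```python
from itertools import product

# Hypothetical two-agent instance, for illustration only
agents = ["Agent1", "Agent2"]
actions = ["a1", "a2"]

# SP: every assignment of one available action to each agent
SP = [tuple(zip(agents, choice))
      for choice in product(actions, repeat=len(agents))]

# Sol ⊆ SP: here, as an assumed example criterion, the profiles in which
# the two agents pick different actions
Sol = [s for s in SP if dict(s)["Agent1"] != dict(s)["Agent2"]]
```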


3. Preserving Privacy in Multi-Agent Decision with tcla
In this section, we focus on applying tcla to model DMPP problems. This involves scenarios where
agents must communicate and make decisions while safeguarding private information. We formalise
the DMPP problem and demonstrate how tcla can facilitate socially optimal decisions by disclosing
only necessary and public information.
Example 1. To demonstrate our formalisation and translation, we analyse an e-procurement scenario
in which a buyer acquires combined services and products from two suppliers of hardware and
software. We selected this scenario because it naturally involves distributed agents representing
buyers and suppliers, with each agent standing for a user in the scenario.
   We consider a specific case where a buyer (Charlie) looks for an e-ordering system which consists of
hardware systems (Hw) and software packages (Sw). The agents providing Hw and Sw are Alice and
Bob. Charlie needs to purchase products or services from both operators, Alice and Bob, for business
reasons. Furthermore, Charlie will ask Alice first since he is her friend. If possible, Charlie would
prefer to buy the hardware from Alice (we represent this statement with C:⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩) because
typically, Alice’s selling price for hardware is lower than Bob’s. Moreover, Charlie does not want to buy
hardware from Bob because, in the last sale, Bob delivered the wrong goods (denoted by Wrong).
   Alice and Bob would prefer to provide the software (we represent this statement with A:Sw and B:Sw).
Alice would not want to provide the hardware because she does not have the hardware components
in stock at the moment, but this is confidential information (denoted by NoStock). However, she can
make a deal with the suppliers to make a special delivery (SpecDel). On the other hand, Bob is having
trouble providing the software quickly because, at the moment, his company lacks skilled technicians
for installation; this is private information (denoted by NoTec). However, two excellent technicians
will be taking over in the next few days (NewTwo).
   Charlie is aware that newspapers have recently reported that, due to the blockage of the Suez Canal,
it is not economically feasible to obtain one-off deliveries from suppliers (Suez). Moreover, Charlie
needs to purchase the service urgently and might not be willing to wait for the arrival of the two new
technicians hired by Bob (Urgency). Finally, Bob has a new, very well-organized office that handles
hardware sales (NewOffice). All these beliefs can be modelled as in the AFs of Figure 1.

Example 2. In Example 1, 𝑁 = 3, since there are three agents: 𝐶 (Charlie), 𝐴 (Alice) and 𝐵 (Bob), and
𝑀 = 4, since there are four actions: Sw and Hw (“provide the software” and “provide the hardware”,
respectively), and ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩ and ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩ (“buy the hardware from Alice and the software
from Bob” or vice versa, respectively). There are two acceptable solutions to the problem, namely 𝑆𝑜𝑙 =
{⟨𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, A:Hw, B:Sw⟩, ⟨𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, A:Sw, B:Hw⟩}. Note that, by the problem
specification, the remaining strategic profiles are not acceptable solutions.
             Charlie :          ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩       ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩ ←− Wrong            Suez         Urgency

             Alice :            Sw                     Hw ←− NoStock ←− SpecDel

             Bob :              Sw ←− NoTec ←− NewTwo                    Hw                 NewOffice

             Public Attacks : SpecDel ←− Suez          NewTwo ←− Urgency         Wrong ←− NewOffice
Figure 1: AFs representing Charlie’s, Alice’s and Bob’s beliefs and observation [11].


   In the following, as in [11], we distinguish two categories of arguments: practical and non-practical
(epistemic). Practical arguments recommend actions to agents. For simplicity, for each action and
agent, we allow for one and only one practical argument to recommend this action to the agent. Let
𝑎𝑖𝑗 denote the (one and only) practical argument recommending action 𝑎𝑗 to 𝐴𝑔𝑒𝑛𝑡𝑖 ,2 and denote the
set of all practical arguments belonging to 𝐴𝑔𝑒𝑛𝑡𝑖 by 𝑝𝑃 𝑟𝑎𝑖 (p is short for ‘private’, and Pra stands
for ‘Practical’), i.e. 𝑝𝑃 𝑟𝑎𝑖 = {𝑎𝑖𝑖1 , . . . , 𝑎𝑖𝑖𝑀𝑖 }, where {𝑎𝑖1 , . . . , 𝑎𝑖𝑀𝑖 } ⊆ 𝐴𝑐𝑡. Practical arguments are
private since agents directly inform other agents about their action choices in the protocol of [11].
   We distinguish two subcategories for non-practical arguments: public (disclosable) and private
arguments, which an agent is willing and unwilling to disclose to other agents, respectively. We
denote 𝐴𝑔𝑒𝑛𝑡𝑖 ’s public and private non-practical argument sets by 𝑃 𝐴𝑟𝑔𝑖 and 𝑝𝐴𝑟𝑔𝑖 , respectively
and by 𝑃 𝐴𝑟𝑔 = ∪𝑖∈{1,...,𝑁 } 𝑃 𝐴𝑟𝑔𝑖 the set of all public arguments. Moreover, analogously to [11], we
assume that each agent has a complete preorder ≤𝑖 expressing a preference on all available actions,
and we model the 𝐴𝑔𝑒𝑛𝑡𝑖 ’s preference as a sequence of actions 𝐿𝑖 = 𝑎′1 · · · 𝑎′𝑀𝑖 such that 𝑝𝑃 𝑟𝑎𝑖 =
{𝑎′1 , . . . , 𝑎′𝑀𝑖 } and if 𝑘 ≤ ℎ then 𝑎′ℎ ≤𝑖 𝑎′𝑘 (the action with the smaller index is the preferred one).
Finally, we can define a preorder ≤ on 𝑆𝑜𝑙 as follows: given 𝑠 = ⟨𝐴𝑔𝑒𝑛𝑡1 : 𝑎1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 : 𝑎𝑁 ⟩
and 𝑠′ = ⟨𝐴𝑔𝑒𝑛𝑡1 : 𝑏1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 : 𝑏𝑁 ⟩ we say that 𝑠 ≤ 𝑠′ if and only if for each 𝑖 ∈ {1, . . . , 𝑁 }
𝑎𝑖 ≤𝑖 𝑏𝑖 .
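With each preference 𝐿𝑖 kept as an ordered list (most preferred action first), the induced preorder on solutions is a componentwise index comparison. The sketch below is our reading of the definition (the encoding of solutions as tuples of (agent, action) pairs is an assumption): leq(s, s2, prefs) holds when every agent finds its action in s2 at least as preferred as its action in s.

```python
def leq(s, s2, prefs):
    # s ≤ s2: for every agent, the action in s2 appears no later in that
    # agent's preference list L_i than the action in s
    return all(prefs[ag].index(b) <= prefs[ag].index(a)
               for (ag, a), (_, b) in zip(s, s2))
```

For instance, with prefs = {"A": ["Sw", "Hw"], "B": ["Sw", "Hw"]}, the profile assigning Hw to both agents is below the one assigning Sw to both.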
   We denote 𝐴𝑔𝑒𝑛𝑡𝑖 ’s private arguments set by 𝐴𝑟𝑔𝑖 = 𝑝𝑃 𝑟𝑎𝑖 ∪ 𝑝𝐴𝑟𝑔𝑖 . We impose that 𝑝𝑃 𝑟𝑎𝑖 and
𝑝𝐴𝑟𝑔𝑖 are disjoint and that 𝐴𝑟𝑔𝑖 is finite. In Figure 1, practical arguments are underlined, private
arguments are in boldface, and public arguments are in italics.
   Since a practical argument concerns the actions that the agents can perform, while a non-practical
argument justifies beliefs and observations on which practical arguments are built, in line with [12], prac-
tical arguments are not allowed to attack non-practical arguments. As for attacks between non-practical
arguments, since private arguments usually represent agents’ private beliefs/concerns, whereas public
arguments usually represent observations/facts (see Example 1), we assume that private arguments do
not attack public arguments. Finally, each private non-practical argument is attacked by some public
arguments. This guarantees that, when negotiating with other agents, other agents can ‘indirectly’
defend or attack each private argument by (directly) attacking or defending some public arguments
attacking it.
   Now, we discuss attacks between arguments of different agents and attacks between agents’ actions
and acceptable solutions. Following [11], since an agent cannot know another agent’s private arguments,
we restrict attacks between agents’ arguments only to the public arguments, i.e. given a practical or
private argument of 𝐴𝑔𝑒𝑛𝑡𝑖 , this argument cannot attack or be attacked by another agent’s practical
or private arguments. In addition, we assume that agents have a consensus on the attack relation
between (public) arguments. Note that with this assumption, we do not mean that an agent knows
another agent’s public (disclosable) arguments a priori; instead, we mean that if two public arguments
are presented in front of an agent, this agent can decide the attack relation between them, and all other
agents agree on this attack relation.
   Moreover, differently from [11], we model agents’ actions and acceptable solutions, which are common
and visible to all agents, as public arguments. Each agent 𝐴𝑔𝑒𝑛𝑡𝑖 can decide on its own actions and
2
    In the examples, by an abuse of notation, we use the same name both for the actions and for the arguments that recommend
    them.
not on those of the other agents. Therefore, the actions of one agent are disjoint from those of the
others and are of the form 𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎𝑗 , where 𝑎𝑗 ∈ 𝐴𝑐𝑡. We denote by 𝐴𝑐𝑡𝑖𝑜𝑛𝑠 the set {𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎 | 𝑖 ∈
{1, . . . , 𝑁 }, 𝑎 ∈ 𝐴𝑐𝑡𝑖 }.
   Finally, we define the attacks between agents’ actions and acceptable solutions as follows:

          𝑁 𝑜𝐺 = {(𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎, ⟨𝐴𝑔𝑒𝑛𝑡1 : 𝑏1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 : 𝑏𝑁 ⟩) | 𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎 ∈ 𝐴𝑐𝑡𝑖𝑜𝑛𝑠,
                       ⟨𝐴𝑔𝑒𝑛𝑡1 : 𝑏1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 : 𝑏𝑁 ⟩ ∈ 𝑆𝑜𝑙, and 𝑎 ̸= 𝑏𝑖 }
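𝑁 𝑜𝐺 can be generated mechanically: an action argument attacks exactly the acceptable solutions that assign its agent a different action. A sketch under our hypothetical encoding of actions and solutions as (agent, action) pairs:

```python
def build_nog(action_args, sol):
    # action_args: set of (agent, action) pairs; sol: iterable of solutions,
    # each a tuple of (agent, action) pairs
    return {((ag, a), s)
            for (ag, a) in action_args
            for s in sol
            if dict(s)[ag] != a}
```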

Definition 4. The attack relation between public arguments is denoted by 𝐴𝑡𝑡* ⊆ (𝑃 𝐴𝑟𝑔×𝑃 𝐴𝑟𝑔)∪𝑁 𝑜𝐺.

Example 3. In Example 1 we have that

              𝑁 𝑜𝐺 = { (𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, ⟨𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, 𝐴 : Sw, 𝐵 : Hw⟩),
                       (𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, ⟨𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, 𝐴 : Hw, 𝐵 : Sw⟩),
                       (𝐴 : Hw, ⟨𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, 𝐴 : Sw, 𝐵 : Hw⟩),
                       (𝐴 : Sw, ⟨𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, 𝐴 : Hw, 𝐵 : Sw⟩),
                       (𝐵 : Hw, ⟨𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, 𝐴 : Hw, 𝐵 : Sw⟩),
                       (𝐵 : Sw, ⟨𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, 𝐴 : Sw, 𝐵 : Hw⟩) }.

   In this formalisation, ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩ and ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩ are private arguments, allowing Charlie
to internally decide his actions without revealing his reasoning. In contrast, 𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩ and
𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩ are public arguments that communicate Charlie’s decisions to Alice and Bob. Alice
and Bob analogously use private and public arguments to manage their decisions and communications.
   Given a DMPP, as detailed in [11], we define two specific argumentation frameworks for each agent:
the perfect-view 𝐴𝐹 𝑣, representing each agent’s ideal perspective, and the private internal 𝐴𝐹 𝑝,
containing confidential information specific to that agent. Additionally, we also define the global public
initial 𝐴𝐹 𝑑, which is common and visible to all agents in the DMPP.

Definition 5. Let ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩ be a DMPP problem. We define

    • 𝐴𝑔𝑒𝑛𝑡𝑖 ’s perfect-view internal AF as 𝐴𝐹 𝑣𝑖 = ⟨𝐴𝑟𝑔𝑖 ∪ 𝑃 𝐴𝑟𝑔, 𝐴𝑡𝑡𝑖 ∪ 𝐴𝑡𝑡* ⟩;
    • 𝐴𝑔𝑒𝑛𝑡𝑖 ’s private internal AF as 𝐴𝐹 𝑝𝑖 = 𝐴𝐹 𝑣𝑖 ↓ (𝑝𝑃 𝑟𝑎𝑖 ∪ 𝑝𝐴𝑟𝑔𝑖 ) and
    • the public argumentation framework 𝐴𝐹 𝑑 ⊆ ⟨𝑃 𝐴𝑟𝑔 ∪ 𝑆𝑜𝑙, 𝐴𝑡𝑡* ⟩

  Note that if 𝐴𝐹 𝑝𝑖 = ⟨𝐴𝑟𝑔𝑖′ , 𝐴𝑡𝑡𝑖 ⟩, then 𝐴𝑟𝑔𝑖′ ⊆ 𝐴𝑟𝑔𝑖 ∪ 𝑃 𝐴𝑟𝑔 and 𝐴𝑡𝑡𝑖 ⊆ (𝑝𝐴𝑟𝑔𝑖 ∪ 𝑃 𝐴𝑟𝑔) ×
(𝑝𝑃 𝑟𝑎𝑖 ∪ 𝑝𝐴𝑟𝑔𝑖 ).
Example 4. Charlie, Alice and Bob’s private internal AFs are the following.

       𝐴𝐹 𝑝Charlie =⟨{⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, Wrong}, {(Wrong, ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩)}⟩
       𝐴𝐹 𝑝Alice = ⟨{Sw, Hw, NoStock, SpecDel}, {(NoStock, Hw), (SpecDel, NoStock)}⟩
       𝐴𝐹 𝑝Bob = ⟨{Sw, NoTec, NewTwo}, {(NoTec, Sw), (NewTwo, NoTec)}⟩

Moreover, we also know that Charlie’s preference is the sequence of actions ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩ ·
⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, while Alice and Bob’s preference is the sequence of actions Sw · Hw.
  We want the solutions of the form 𝑠 = ⟨𝐴𝑔𝑒𝑛𝑡1 : 𝑎1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 : 𝑎𝑁 ⟩ obtained by our protocol to
meet the following requirements: i) feasible, i.e., if a strategy profile assigns action 𝑎𝑗 to 𝐴𝑔𝑒𝑛𝑡𝑖 , then
𝑎𝑗 should be ‘doable’ for 𝐴𝑔𝑒𝑛𝑡𝑖 ; ii) acceptable (i.e., it is a solution of the problem) and socially optimal
with respect to the agents’ preferences.

Definition 6. Let ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩ be a DMPP and let 𝑠 = ⟨𝐴𝑔𝑒𝑛𝑡1 : 𝑎1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 : 𝑎𝑁 ⟩ ∈ 𝑆𝑜𝑙.

    • 𝑎𝑗 ∈ 𝐴𝑐𝑡 is (globally) feasible for the agent 𝐴𝑔𝑒𝑛𝑡𝑖 if argument 𝑎𝑖𝑗 ∈ 𝑝𝑃 𝑟𝑎𝑖 (the practical argument
      that recommends 𝑎𝑗 to 𝐴𝑔𝑒𝑛𝑡𝑖 ) is admissible in the 𝐴𝑔𝑒𝑛𝑡𝑖 ’s perfect view, 𝐴𝐹 𝑣𝑖 ;
    • 𝑠 is feasible if and only if, for each 𝑖 ∈ {1, . . . , 𝑁 }, the action 𝑎𝑖 is globally feasible for 𝐴𝑔𝑒𝑛𝑡𝑖 ;
    • 𝑠 is socially optimal if and only if 𝑠 is feasible and there is no 𝑠′ ∈ 𝑆𝑜𝑙 such that 𝑠′ is feasible and
      𝑠′ < 𝑠.
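Global feasibility in Definition 6 reduces to credulous acceptance under admissible semantics: the practical argument must belong to some admissible set of the agent's perfect view. A brute-force Python sketch of Dung admissibility (illustrative only; a real check would use a dedicated solver):

```python
from itertools import combinations

def is_admissible(args, att, S):
    """Dung admissibility: S is conflict-free and defends all its members."""
    conflict_free = not any((a, b) in att for a in S for b in S)
    defends = all(
        any((d, c) in att for d in S)        # some member of S counter-attacks c
        for b in S for c in args if (c, b) in att
    )
    return conflict_free and defends

def credulously_accepted(args, att, x):
    """x belongs to some admissible set (brute force over all subsets)."""
    pool = list(args)
    return any(
        x in S and is_admissible(args, att, set(S))
        for r in range(len(pool) + 1)
        for S in combinations(pool, r)
    )

# Toy chain a -> b -> c: c is defended by a, while b cannot be defended.
args = {"a", "b", "c"}
att = {("a", "b"), ("b", "c")}
print(credulously_accepted(args, att, "c"), credulously_accepted(args, att, "b"))
# → True False
```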

  Now, let us consider a communication protocol inspired by Chronological synchronous backtracking
(SBT), one of the simplest yet most fundamental algorithms for distributed constraint satisfaction
problems, which requires a static ordering of the agents. Following this protocol, the agents try to
make a social decision. They pass a token among them; the agent holding the token checks whether it
can extend the current partial solution by making a move that yields a new acceptable partial solution.
If so, it sends the token to the next agent; otherwise, it sends a message ngd (short for “not good”) to
the previous agent. The communication terminates either because all the agents agree on a solution
or because every (partial) solution has been discarded. In the former case, the agents have found a
solution to the problem, while in the latter case, the problem is unsatisfiable. SBT is guaranteed to
terminate and is sound and complete (i.e., it terminates only with correct answers, and does so for all
problems).
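Under the simplifying assumption that acceptability of a partial solution is a given predicate, the SBT scheme just described can be sketched as a recursive search, where returning None models the ngd message sent back to the previous agent. The data below is hypothetical and only mirrors the running example (c1, c2 stand for Charlie's two composite actions; list order encodes preference):

```python
def sbt(agents, actions, acceptable):
    """Chronological synchronous backtracking (sketch): agents in a static
    order extend the partial assignment; None models the ngd message."""
    def extend(i, partial):
        if i == len(agents):
            return partial                      # every agent agreed: solution found
        ag = agents[i]
        for a in actions[ag]:                   # most preferred action first
            candidate = {**partial, ag: a}
            if acceptable(candidate):           # token passes to the next agent
                result = extend(i + 1, candidate)
                if result is not None:
                    return result
        return None                             # ngd: backtrack to previous agent
    return extend(0, {})

agents = ["C", "A", "B"]
actions = {"C": ["c1", "c2"], "A": ["Sw", "Hw"], "B": ["Sw", "Hw"]}
sols = [{"C": "c1", "A": "Hw", "B": "Sw"}, {"C": "c2", "A": "Sw", "B": "Hw"}]
acceptable = lambda p: any(all(s[k] == v for k, v in p.items()) for s in sols)
print(sbt(agents, actions, acceptable))  # → {'C': 'c1', 'A': 'Hw', 'B': 'Sw'}
```

Note how Alice first tries her preferred action Sw, finds it inconsistent with Charlie's choice, and falls back to Hw, exactly the behaviour the token protocol enforces.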
  We can write a tcla program emulating a DMPP ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩, where 𝐴𝑔 = {𝐴𝑔𝑒𝑛𝑡1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 },
using 𝑁 tcla agents in parallel. Without loss of generality, we assume that agents with smaller index
numbers are higher in the static agent ordering required by SBT; therefore, 𝐴𝑔𝑒𝑛𝑡1 activates the
protocol.

Definition 7 (Translation). Let ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩ be a DMPP such that 𝐴𝑔 = {𝐴𝑔𝑒𝑛𝑡1 , . . . , 𝐴𝑔𝑒𝑛𝑡𝑁 }
and, for 𝑖 ∈ {1, . . . , 𝑁 }, 𝐿𝑖 is 𝐴𝑔𝑒𝑛𝑡𝑖 ’s preference. The translation of ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩ is
𝒯 (⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩) = 𝒯 (𝐴𝑔𝑒𝑛𝑡1 )‖ · · · ‖𝒯 (𝐴𝑔𝑒𝑛𝑡𝑁 ), where

    • 𝒯 (𝐴𝑔𝑒𝑛𝑡1 ) = new (𝑝𝑃 𝑟𝑎1 ∪ 𝑝𝐴𝑟𝑔1 ) in 𝑇1 (𝐿1 )𝐴𝐹 𝑝1 and
    • for 1 < 𝑖 ≤ 𝑁 , 𝒯 (𝐴𝑔𝑒𝑛𝑡𝑖 ) = new (𝑝𝑃 𝑟𝑎𝑖 ∪ 𝑝𝐴𝑟𝑔𝑖 ) in (𝑝𝑖 )𝐴𝐹 𝑝𝑖 , where 𝑝𝑖 is the (without
      parameters) procedure 𝑝𝑖 :: check∞ ({𝑡𝑜𝑘𝑖 }, ∅) → rmv({𝑡𝑜𝑘𝑖 }, ∅) → 𝑇𝑖 (𝐿𝑖 ) ∈ 𝑃

such that

                            𝑇𝑖 (𝜀) = 𝑓 𝑎𝑖𝑙𝑢𝑟𝑒                      if 𝑖 = 1
                            𝑇𝑖 (𝜀) = add({𝑛𝑔𝑑𝑖−1 }, ∅) → 𝑝𝑖       if 1 < 𝑖 ≤ 𝑁

    • 𝑇𝑖 (𝑎 · 𝐿′ ) = 𝑐-𝑡𝑒𝑠𝑡1 (𝑎, in, 𝑎𝑑𝑚) → ( add({𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎}, 𝑁 𝑜𝐺|{𝐴𝑔𝑒𝑛𝑡𝑖 :𝑎} ) →
                               ( ∃𝑠 ∈ 𝑆𝑜𝑙 | 𝑐-𝑡𝑒𝑠𝑡1 (𝑠, in, 𝑎𝑑𝑚) → 𝑅𝑖 (𝑎 · 𝐿′ )
                                +𝑃
                                 𝑡𝑟𝑢𝑒 → rmv({𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎}, ∅) → 𝑇𝑖 (𝐿′ ) )
                      +𝑃
                      𝑡𝑟𝑢𝑒 → 𝑇𝑖 (𝐿′ )
      and
                 𝑅𝑖 (𝑎 · 𝐿′ ) = add({𝑡𝑜𝑘𝑖+1 }, ∅) →
                                   ( check∞ ({𝑔𝑑}, ∅) → 𝑠𝑢𝑐𝑐𝑒𝑠𝑠
                                     +
                                     check∞ ({𝑛𝑔𝑑𝑖 }, ∅) →
                                        rmv({𝑛𝑔𝑑𝑖 , 𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎}, ∅) → 𝑇𝑖 (𝐿′ ) )      if 1 ≤ 𝑖 < 𝑁

                 𝑅𝑖 (𝑎 · 𝐿′ ) = add({𝑔𝑑}, ∅) → 𝑠𝑢𝑐𝑐𝑒𝑠𝑠                                  if 𝑖 = 𝑁

   The translation of a DMPP is presented in Definition 7. First of all, note that the agents in the
translation have a non-empty local argumentation framework; we can assume that, initially, each agent
builds its local framework by using an add step. The computation starts in the initial public
argumentation framework 𝐴𝐹 𝑑 (Definition 5) with the empty timeout environment 𝑇0 , and is initiated
by 𝐴𝑔𝑒𝑛𝑡1 , which checks for a (globally) feasible action 𝑎. When an agent 𝐴𝑔𝑒𝑛𝑡𝑖 receives the token
𝑡𝑜𝑘𝑖 , it iterates over all its actions, starting from the most preferred one(s). For the current action 𝑎,
the agent checks whether it is (globally) feasible (𝑐-𝑡𝑒𝑠𝑡1 (𝑎, in, 𝑎𝑑𝑚)). If so, the step add({𝐴𝑔𝑒𝑛𝑡𝑖 :
𝑎}, 𝑁 𝑜𝐺|{𝐴𝑔𝑒𝑛𝑡𝑖 :𝑎} ) stores 𝐴𝑔𝑒𝑛𝑡𝑖 ’s action choice in the public shared argumentation framework, and
the agent then checks the consistency of 𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎 with respect to the current partial solution and the
acceptable solutions, namely the partial consistency of 𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎 (∃𝑠 ∈ 𝑆𝑜𝑙 | 𝑐-𝑡𝑒𝑠𝑡1 (𝑠, in, 𝑎𝑑𝑚)). If
the argument 𝐴𝑔𝑒𝑛𝑡𝑖 : 𝑎 is not consistent, 𝐴𝑔𝑒𝑛𝑡𝑖 removes it from the global framework. After obtaining
a partially consistent and feasible action 𝑎, 𝐴𝑔𝑒𝑛𝑡𝑖 either adds 𝑡𝑜𝑘𝑖+1 to the public framework if it is
not the last agent (1 ≤ 𝑖 < 𝑁 ) or, if 𝑖 = 𝑁 , adds 𝑔𝑑 to the public framework to communicate to the
other agents that an acceptable solution has been computed, and then terminates with success.
   When 𝑖 ̸= 1 (i.e., the current agent is not the first agent), if 𝐴𝑔𝑒𝑛𝑡𝑖 fails to find any partially
consistent and feasible action, it adds the argument 𝑛𝑔𝑑𝑖−1 to the public framework (for the previous
agent) in order to backtrack. Finally, if 𝐴𝑔𝑒𝑛𝑡1 cannot find any action which can be extended to a
solution, the computation terminates with failure.
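The token discipline of the translation (the 𝑡𝑜𝑘𝑖 , 𝑛𝑔𝑑𝑖 and 𝑔𝑑 arguments placed in the shared store) can be simulated in a few lines. This is an illustrative toy, not tcla semantics: the shared framework is reduced to a per-agent choice map and a trace of the exchanged messages, and all names are hypothetical.

```python
def run_protocol(agents, actions, acceptable):
    """Toy simulation of the token discipline in the translation: the agent
    holding the token extends the partial solution or sends ngd back."""
    store, trace = {}, []                         # current choices, message log
    remaining = {ag: list(actions[ag]) for ag in agents}
    i = 0                                         # Agent_1 starts (index 0)
    while True:
        ag = agents[i]
        while remaining[ag]:                      # try the next preferred action
            a = remaining[ag].pop(0)
            if acceptable({**store, ag: a}):
                store[ag] = a
                break
        else:                                     # no action left for this agent
            if i == 0:
                trace.append("failure")           # Agent_1 exhausted: unsatisfiable
                return None, trace
            trace.append(f"ngd->{agents[i-1]}")   # backtrack to previous agent
            remaining[ag] = list(actions[ag])     # reset for the next visit
            i -= 1
            store.pop(agents[i], None)            # previous agent retracts its choice
            continue
        if i == len(agents) - 1:
            trace.append("gd")                    # last agent: solution found
            return dict(store), trace
        trace.append(f"tok->{agents[i+1]}")       # pass the token forward
        i += 1

# Hypothetical two-agent instance that forces exactly one backtracking step.
sols = [{"A": "a1", "B": "b2"}, {"A": "a2", "B": "b1"}]
acceptable = lambda p: any(all(s[k] == v for k, v in p.items()) for s in sols)
result, trace = run_protocol(["A", "B"], {"A": ["a1", "a2"], "B": ["b1"]}, acceptable)
print(result, trace)
# → {'A': 'a2', 'B': 'b1'} ['tok->B', 'ngd->A', 'tok->B', 'gd']
```

The trace shows the intended message flow: A commits a1 and passes the token, B cannot extend and sends ngd, A retracts, retries with a2, and B then closes the run with gd.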
Example 5. Let us consider the DMPP of Example 1. For the agent 𝐶ℎ𝑎𝑟𝑙𝑖𝑒 we have
𝒯 (Charlie) =
new ({⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩}) in 𝑇𝐶 (⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩)𝐴𝐹 𝑝Charlie , where
𝑇𝐶 (⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩) is defined in Table 9. Analogously for the agents 𝐴𝑙𝑖𝑐𝑒 and 𝐵𝑜𝑏.
The computation of 𝒯 (Charlie) ‖ 𝒯 (Alice) ‖ 𝒯 (Bob) starts in an initial public shared argumentation
framework


        𝐴𝐹 𝑑 = ⟨ { SpecDel, Suez, NewTwo, Urgency, Wrong, NewOffice,
                   ⟨𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, A:Hw, B:Sw⟩, ⟨𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, A:Sw, B:Hw⟩ },
                 { (Suez, SpecDel), (Urgency, NewTwo), (NewOffice, Wrong) } ⟩.

   In Example 5, we translate the agent Charlie of Example 1 to coordinate the three agents’ actions.
Charlie is ranked highest in the static agent ordering, so he obtains the token first.
   Charlie checks whether the action ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩ (’buy hardware from Alice and software from Bob’)
is (globally) feasible, and in this case, he stores his most preferred action choice and the corresponding
attack in 𝑁 𝑜𝐺 in the shared AF. Then he checks the consistency of 𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩ wrt. the
acceptable solutions, namely he checks if ∃𝑠 ∈ 𝑆𝑜𝑙 | 𝑐-𝑡𝑒𝑠𝑡1 (𝑠, in, 𝑎𝑑𝑚). If the argument is not
consistent, Charlie removes it from the global framework. If, vice versa, the argument is consistent,
Charlie adds 𝑡𝑜𝑘𝐴 (for Alice) to the global framework and then waits until either argument 𝑔𝑑 or
argument 𝑛𝑔𝑑𝐶 is checked for presence in the global framework. If the 𝑔𝑑 token is present in the global
framework, Charlie terminates successfully.
   In the other case, Charlie removes the argument 𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩ from the global framework
and tries again to coordinate with Alice and Bob, adding the argument 𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩ (’buy
the hardware from Bob and the software from Alice’), with its corresponding attack in 𝑁 𝑜𝐺, to the
global framework. Finally, if neither 𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩ nor 𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩ can be extended to
a solution, Charlie terminates with failure and causes the termination with failure of the entire
translation of the initial DMPP.
   Since all moves other than the first in our disputes are the same as in standard TPI-disputes, and
there are finitely many arguments presented in the first move, we have:
Proposition 1. Let 𝒯 be the translation of a DMPP ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩, let 𝐴𝐹 𝑑 be as in Definition 5 and let
𝑇0 be the empty timeout environment. Then the computation starting from ⟨𝒯 , 𝐴𝐹 𝑑, 𝑇0 ⟩ is guaranteed to
terminate. Moreover, there exist an 𝐴𝐹 and a timeout environment 𝑇 such that either ⟨𝒯 , 𝐴𝐹 𝑑, 𝑇0 ⟩ −→*
⟨𝑓 𝑎𝑖𝑙𝑢𝑟𝑒, 𝐴𝐹, 𝑇 ⟩ or ⟨𝒯 , 𝐴𝐹 𝑑, 𝑇0 ⟩ −→* ⟨𝑠𝑢𝑐𝑐𝑒𝑠𝑠, 𝐴𝐹, 𝑇 ⟩.
Theorem 1 (Correctness and Completeness). Let 𝒯 be the translation of a DMPP ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩, 𝐴𝐹 𝑑 as
specified in Definition 5 and 𝑇0 the empty timeout environment. The following holds:
   1. ⟨𝒯 , 𝐴𝐹 𝑑, 𝑇0 ⟩ −→* ⟨𝑠𝑢𝑐𝑐𝑒𝑠𝑠, 𝐴𝐹, 𝑇 ⟩ if and only if ∃𝑠 ∈ 𝑆𝑜𝑙 such that 𝑠 is feasible for
      ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩. Moreover if ⟨𝒯 , 𝐴𝐹 𝑑, 𝑇0 ⟩ −→* ⟨𝑠𝑢𝑐𝑐𝑒𝑠𝑠, 𝐴𝐹, 𝑇 ⟩ then there is a unique 𝑠′ ∈ 𝑆𝑜𝑙
      such that 𝑠′ is feasible in 𝐴𝐹 and so 𝑠′ is socially optimal for ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩.
Table 9
Definition of 𝑇𝐶 (⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩) for agent 𝐶ℎ𝑎𝑟𝑙𝑖𝑒 used in Example 5.
            𝑇𝐶 (⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩) =
             𝑐-𝑡𝑒𝑠𝑡1 (⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, in, 𝑎𝑑𝑚) →
                   ( add({𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩},
                          {(𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, ⟨𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, 𝐴 : Sw, 𝐵 : Hw⟩)}) →
                          ( ( 𝑐-𝑡𝑒𝑠𝑡1 (⟨𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, 𝐴 : Sw, 𝐵 : Hw⟩, in, 𝑎𝑑𝑚) ∨
                             𝑐-𝑡𝑒𝑠𝑡1 (⟨𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, 𝐴 : Hw, 𝐵 : Sw⟩, in, 𝑎𝑑𝑚) ) →
                                   ( add({𝑡𝑜𝑘𝐴 }, ∅) →
                                          ( check∞ ({𝑔𝑑}, ∅) → 𝑠𝑢𝑐𝑐𝑒𝑠𝑠 +
                                            check∞ ({𝑛𝑔𝑑𝐶 }, ∅) → rmv({𝑛𝑔𝑑𝐶 , 𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩}, ∅) →
                                                                       𝑇𝐶 (⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩) ) ) )
                            +𝑃
                            𝑡𝑟𝑢𝑒 → rmv({𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩}, ∅) → 𝑇𝐶 (⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩)))) )
             +𝑃
             𝑡𝑟𝑢𝑒 → 𝑇𝐶 (⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩)
                                                           and
            𝑇𝐶 (⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩) =
             𝑐-𝑡𝑒𝑠𝑡1 (⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, in, 𝑎𝑑𝑚) →
                  ( add({𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩},
                          {(𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, ⟨𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, 𝐴 : Hw, 𝐵 : Sw⟩)}) →
                          ( ( 𝑐-𝑡𝑒𝑠𝑡1 (⟨𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩, 𝐴 : Sw, 𝐵 : Hw⟩, in, 𝑎𝑑𝑚) ∨
                             𝑐-𝑡𝑒𝑠𝑡1 (⟨𝐶 : ⟨Hw𝐴𝑙𝑖 , Sw𝐵𝑜𝑏 ⟩, 𝐴 : Hw, 𝐵 : Sw⟩, in, 𝑎𝑑𝑚) ) →
                                   ( add({𝑡𝑜𝑘𝐴 }, ∅) →
                                          ( check∞ ({𝑔𝑑}, ∅) → 𝑠𝑢𝑐𝑐𝑒𝑠𝑠 +
                                           check∞ ({𝑛𝑔𝑑𝐶 }, ∅) → rmv({𝑛𝑔𝑑𝐶 , 𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩}, ∅) →
                                                                      𝑓 𝑎𝑖𝑙𝑢𝑟𝑒 ) ) )
                          +𝑃
                           𝑡𝑟𝑢𝑒 → rmv({𝐶 : ⟨Hw𝐵𝑜𝑏 , Sw𝐴𝑙𝑖 ⟩}, ∅) → 𝑓 𝑎𝑖𝑙𝑢𝑟𝑒 )
             +𝑃
             𝑡𝑟𝑢𝑒 → 𝑓 𝑎𝑖𝑙𝑢𝑟𝑒


   2. ⟨𝒯 , 𝐴𝐹 𝑑, 𝑇0 ⟩ −→* ⟨𝑓 𝑎𝑖𝑙𝑢𝑟𝑒, 𝐴𝐹, 𝑇 ⟩ if and only if ̸ ∃𝑠 ∈ 𝑆𝑜𝑙 such that 𝑠 is feasible for
      ⟨𝐴𝑔, 𝐴𝑐𝑡, 𝑆𝑜𝑙⟩.


4. Conclusion
We have discussed a new version of tcla (introduced in [1]) featuring local stores, enabling agents to
keep important data private while integrating it into their decision-making processes. Consequently, the
language is particularly effective at modelling real-life scenarios, as evidenced by our explicit provision
of a translation of a general DMPP in tcla with local stores, which is both correct and complete.
Additionally, we have provided a working example demonstrating its use in a privacy-preserving setting
for decision-making.
   In this paper, we performed a direct translation of the reasoning process from [11] to demonstrate how
a decision-making process can be implemented by considering local knowledge. This study, therefore,
mainly illustrates how the decision-making process can be implemented in tcla using the local store.
However, it is important to note that this language has several other features that could be used to
simplify the construction of such models. Exploring these features to achieve more natural interactions
with the native tcla constructs will be part of our future work. Besides, we plan to develop further
illustrative examples to showcase the framework’s effectiveness and highlight limitations across various
scenarios.
  Another goal is to extend tcla to cover additional aspects that could impact agents’ behaviour during
argument exchange interactions. For instance, we would like to model real-world applications where
agents can coordinate autonomously and concurrently without being bound to a fixed agent ordering.
Moreover, we want to endow the agents with a notion of ownership to establish which actions can
be performed on the shared arguments. In the current implementation, an autonomous agent could
remove arguments added to the shared store by its opponents in order to win a debate. However, this is
not an effective solution in practical cases and is also considered an illegal move in dialogue games.
Finally, we plan to use the language to model chatbot interactions [13] and explanation activities [14].


Acknowledgments
The authors are members of the INdAM Research group GNCS and of Consorzio CINI. This work has been
partially supported by: GNCS-INdAM, CUP_E53C23001670001; GNCS-INdAM, CUP_E53C22001930001;
 EU PNRR MUR PRIN project EPICA: “Empowering Public Interest Communication with Argumentation”
- Next Generation (J53D23007220006); EU PNRR MUR project SERICS - Next Generation (PE00000014);
 PNRR MIUR project FAIR - Future AI Research (PE00000013), Spoke 9 - Green aware AI; University of
 Perugia - Fondo Ricerca di Ateneo (2020, 2021, 2022) - Projects BLOCKCHAIN4FOODCHAIN, FICO,
AIDMIX, “Civil Safety and Security for Society”; European Union - Next Generation EU NRRP-MUR
- Project J97G22000170005 VITALITY: “Innovation, digitalisation and sustainability for the diffused
 economy in Central Italy”; Piano di Sviluppo e Coesione del Ministero della Salute 2014-2020 - Project
 I83C22001350001 LIFE: “the itaLian system Wide Frailty nEtwork” (Linea di azione 2.1 “Creazione di una
 rete nazionale per le malattie ad alto impatto” - Traiettoria 2 “E-Health, diagnostica avanzata, medical
 devices e mini invasività”).


References
 [1] S. Bistarelli, M. C. Meo, C. Taticchi, On the role of local arguments in the (timed) concurrent
     language for argumentation, in: Proceedings of the 7th Workshop on Advances in Argumentation
     in Artificial Intelligence (AIˆ3 2023) co-located with the 22nd International Conference of the
     Italian Association for Artificial Intelligence (AIxIA 2023), Rome, Italy, November 9, 2023, volume
     3546 of CEUR Workshop Proceedings, CEUR-WS.org, 2023.
 [2] A. C. Kakas, P. Moraitis, Adaptive agent negotiation via argumentation, in: 5th International
     Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2006), ACM, 2006, pp.
     384–391.
 [3] I. Rahwan, S. D. Ramchurn, N. R. Jennings, P. McBurney, S. Parsons, L. Sonenberg, Argumentation-
     based negotiation, Knowl. Eng. Rev. 18 (2003) 343–375.
 [4] K. Atkinson, T. J. M. Bench-Capon, P. McBurney, Multi-agent argumentation for edemocracy,
     in: Proceedings of the Third European Workshop on Multi-Agent Systems, Koninklijke Vlaamse
     Academie van België voor Wetenschappen en Kunsten, 2005, pp. 35–46.
 [5] A. Rosenfeld, S. Kraus, Strategical argumentative agent for human persuasion, in: ECAI 2016
     - 22nd European Conference on Artificial Intelligence - Including Prestigious Applications of
     Artificial Intelligence (PAIS 2016), volume 285 of Frontiers in Artificial Intelligence and Applications,
     IOS Press, 2016, pp. 320–328.
 [6] S. Bistarelli, M. C. Meo, C. Taticchi, Timed concurrent language for argumentation with maximum
     parallelism, J. Log. Comput. 33 (2023) 712–737.
 [7] S. Bistarelli, M. C. Meo, C. Taticchi, Timed concurrent language for argumentation: An interleaving
     approach, in: Practical Aspects of Declarative Languages - 24th International Symposium, PADL
     2022, Proceedings, volume 13165 of LNCS, Springer, 2022, pp. 101–116.
 [8] P. M. Dung, On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic
     Reasoning, Logic Programming and n-Person Games, Artif. Intell. 77 (1995) 321–358.
 [9] P. Baroni, M. Caminada, M. Giacomin, An introduction to argumentation semantics, Knowl. Eng.
     Rev. 26 (2011) 365–410.
[10] M. Caminada, On the issue of reinstatement in argumentation, in: Proc. of JELIA 2006 - 10th
     European Conference on Logics in Artificial Intelligence, volume 4160 of LNCS, Springer, 2006, pp.
     111–123.
[11] Y. Gao, F. Toni, H. Wang, F. Xu, Argumentation-based multi-agent decision making with privacy
     preserved, in: Proceedings of the 2016 International Conference on Autonomous Agents &
     Multiagent Systems, ACM, 2016, pp. 1153–1161.
[12] L. Amgoud, H. Prade, Using arguments for making and explaining decisions, Artif. Intell. 173
     (2009) 413–436.
[13] S. Bistarelli, C. Taticchi, F. Santini, A chatbot extended with argumentation, in: Proceedings of the
     5th Workshop on Advances in Argumentation in Artificial Intelligence 2021 co-located with the
     20th International Conference of the Italian Association for Artificial Intelligence (AIxIA 2021),
     Milan, Italy, November 29th, 2021, volume 3086 of CEUR Workshop Proceedings, CEUR-WS.org,
     2021.
[14] S. Bistarelli, A. Mancinelli, F. Santini, C. Taticchi, Arg-xai: a tool for explaining machine learning
     results, in: 34th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2022,
     IEEE, 2022, pp. 205–212.