<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Applying Strategic Reasoning for Accountability Ascription in Multiagent Teams</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vahid Yazdanpanah</string-name>
          <email>v.yazdanpanah@soton.ac.uk</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sebastian Stein</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enrico H. Gerding</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicholas R. Jennings</string-name>
          <email>n.jennings@imperial.ac.uk</email>
        </contrib>
      </contrib-group>
      <abstract>
        <p>For developing human-centred trustworthy autonomous systems and ensuring their safe and effective integration with society, it is crucial to enrich autonomous agents with the capacity to represent and reason about their accountability. This concerns, on the one hand, their accountability as collaborative teams and, on the other hand, their individual degree of accountability in a team. In this context, accountability is understood as being responsible for failing to deliver a task that a team was allocated and able to fulfil. To that end, the semantic (strategic reasoning) machinery of the Alternating-time Temporal Logic (ATL) is a natural modelling approach as it captures the temporal, strategic, and coalitional dynamics of the notion of accountability. This allows focusing on the main problem: “Who is accountable for an unfulfilled task in multiagent teams: when, why, and to what extent?” We apply ATL-based semantics to define accountability in multiagent teams and develop a fair and computationally feasible procedure for ascribing a degree of accountability to involved agents in accountable teams. Our main results concern the decidability, fairness properties, and computational complexity of the presented accountability ascription methods in multiagent teams.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>For developing human-centred Trustworthy Autonomous
Systems (TAS), accountability reasoning plays a key role as
it contributes to: assessing the reliability of task allocations,
ensuring verifiably safe and responsible human-agent
collectives, and measuring the extent of each individual agent’s
contribution to potential failures [Yazdanpanah et al., 2021b].</p>
      <p>Accountability, as the task-oriented form of responsibility,
is understood as being responsible for failing to deliver an
allocated task [Yazdanpanah et al., 2021a]. This notion
relates to but is distinguishable from the epistemic notion of
blameworthiness (as being responsible for knowingly causing
an outcome) and the normative notion of liability/culpability
(as being responsible for causing a normatively
undesirable outcome) [van de Poel, 2011; Alechina et al., 2017;
Chockler and Halpern, 2004]. We abstract from such
neighbouring notions, as well as the exact procedure of task
allocation, and merely focus on the notion of accountability.</p>
      <p>Supporting the reliability and verifiability of AI systems
is key to ensuring a trustworthy performance of autonomous
systems in human-agent collectives, i.e., ensuring a
desirable behaviour of TAS [Jennings et al., 2014;
Abeywickrama et al., 2019]. Then, measuring the extent of agents’
contribution to potential failures—e.g., undelivered tasks—
constitutes their degree of accountability for such undesirable
state of affairs [van de Poel, 2011].</p>
      <p>In addition to technological importance, addressing the
accountability ascription problem—by determining
accountable teams and ascribing a degree of accountability to
involved agents in a fair and computationally feasible fashion—
also contributes to compliance with ethical AI guidelines [EC:
The High-Level Expert Group on AI, 2019]. It enables
determining who is to account for a (potentially undesirable)
system behaviour and fosters the societal alignment of
autonomous systems [Russell, 2019; Office for Artificial
Intelligence - GOV.UK, 2020; Kalenka and Jennings, 1999].</p>
      <p>In relation to the concept of explainability in autonomous
systems [Miller, 2019; Belle, 2017], understood as the capacity to
describe why a particular behaviour has materialised,
accountability is focused on determining who, and to what extent,
should account for it [van de Poel, 2011]. In the responsibility
reasoning literature, accountability is understood as (i.e., is a
form of) task-oriented responsibility. In the prospective form,
one allocates a task τ to an agent (or agent group) G and sees
them accountable for bringing it about. Then, in the retrospective
form of accountability (which is the main focus of this work),
if τ remains unfulfilled, G is to account for ¬τ. This is in
particular a challenging problem in situations where tasks are
allocated to agent groups or coordinated teams. Then,
observing that a task is not delivered makes it clear that a team
is to account for it; but the extent/degree of accountability
of each individual is not well-defined. This problem
corresponds to what is known as responsibility voids in the
literature on moral responsibility [Braham and van Hees, 2011]
and the retrospective dimension of task coordination in
multiagent systems [Yazdanpanah et al., 2020].</p>
      <p>Although responsibility voids are well-studied in the
philosophical literature [Braham and van Hees, 2011; van de Poel
et al., 2012] and multiagent systems research [Yazdanpanah
et al., 2019; Friedenberg and Halpern, 2019], various aspects
of their task-oriented dual, i.e., accountability voids in
multiagent teams, are less-explored. A notion of accountability is
used in [Baldoni et al., 2019] for engineering business
processes and in [Baldoni et al., 2020] to reason about
organisational robustness; but they do not capture accountability voids
in multiagent teams.</p>
      <p>For the first time, this paper presents a verifiable notion of
accountability in ATL semantics, approaches the problem of
accountability voids in multiagent teams, and develops
algorithmic logic-based methods to resolve them. We show the
applicability of our methods and present formal results on
decidability and computational complexity of the presented
solution concepts. Our accountability ascription techniques
contribute to developing verifiably safe and responsible
autonomous systems and support their integrability to form
trustworthy human-agent collectives.</p>
    </sec>
    <sec id="sec-2">
      <title>Accountability Analysis in ATL Semantics</title>
      <p>In this section, we present the intuition behind our work using
a running example, analyse various conceptual aspects, and
recall key formal notions.</p>
      <sec id="sec-2-1">
        <title>2.1 Conceptual Analysis</title>
        <p>Imagine a vaccination project that demands 6 units of
vaccine (each unit sufficient for vaccinating 1000 patients) to be
delivered and injected while we have vaccine delivery agents
a1, a2, and a3 with the capacity to deliver 5, 3, and 2 units of
vaccine, respectively; and injection specialist agents a4, a5,
and a6, respectively capable of injecting 1, 5, and 5 units of
vaccine. To fulfil this project, one needs to allocate the tasks
to capable teams of agents. The task allocation process itself
is well-studied in the multiagent systems context [Macarthur
et al., 2011] and is beyond the focus of this work. Task
allocation can be done in an efficient manner (e.g., by allocating
tasks to a minimal team of agents) or in a more resilient
fashion (e.g., by considering backup teams and allocating each
task to more than one capable team). For instance, a1, a3, a4,
and a5 can collectively handle the project as they are able to
efficiently fulfil both the delivery task as well as the injection
task in this project. In a more resilient allocation (which also
requires some form of coordination), the delivery and injection
tasks in this project can also be allocated to the backup team
a1, a2, a4, and a6. This team overlaps with the main
team, while a2 and a6 can substitute for the tasks originally
allocated to a3 and a5.</p>
        <p>Note that accountability is related to, but different from, the
normative and legal notion of liability [Hart, 2008; Yazdanpanah et
al., 2021a], which is not the focus of this work. We argue that
distinguishing various forms of responsibility, and developing
operational tools to reason about them in AI systems, are key to
ensuring the trustworthy behaviour and safety of such systems. In
this work, we focus on reasoning about accountability in multiagent
teams, which can itself be a basis, but not the only requirement, for
ascribing liability in a given context and with regard to a set of
legal rules and regulative norms.</p>
        <p>Our focus in this work is not on the allocation itself but
on the accountability ascription problem: verifying who are
the accountable teams if a task-oriented (vaccination) project
fails and on determining each agent’s degree of
accountability for such an outcome. Although we abstract from the
allocation process, it is crucial to note that the accountability
ascription problem follows the ascription process (in a
temporal order) and relates to the properties that the allocation
satisfies. In particular, if the allocation process merely gives
each task to a single agent, there is no need to determine
a degree of accountability as tasks are directly linked to an
accountable agent, i.e., each agent is fully accountable for
failed tasks that were allocated to her. However, in real-life
applications (e.g., in our vaccination project) single agents
may be incapable of delivering the tasks, thus it is
necessary to allow allocating tasks to agent teams. While
allocating tasks to teams provides more flexibility, any task failure
leads to so called “accountability voids” and what is known as
the “problem of many hands” [Braham and van Hees, 2011;
van de Poel et al., 2012]—where a team is clearly accountable
but the degree of accountability of each member is not
well-defined. We deem that having a clear understanding of, and
computationally tractable methods for ascribing, an individual’s
degree of accountability is key for defining justifiable
sanctioning measures and, in turn, coordinating the behaviour of
multiagent teams towards desirable ones.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2 ATL Notions and Formal Preliminaries</title>
        <p>To model Multiagent Systems (MAS) and reason about their
behaviour, we use standard Concurrent Game Structures
(CGS) and employ the syntax of the Alternating-time
Temporal Logic (ATL), adopted from [Alur et al., 2002]. The ATL
language and CGS, as its semantic machinery, allow
representing and reasoning about the temporal modalities of tasks
and accountability. In addition, ATL is implementable
using well-established model checking tools [Lomuscio et al.,
2017] and is expressive for specifying team-level capacities
(e.g., in contrast to similar logics like the Computation Tree
Logic). Having modalities to reason about the strategic
capacity of groups of agents, and not only individuals, makes
ATL and the machinery of CGS natural choices as they allow
modelling team-level accountability and support the
transition towards individual-level abilities—crucial for resolving
accountability voids.</p>
        <p>Formally, we model a MAS as a CGS M =
⟨Agt; Q; Act; Π; π; d; o⟩ where:</p>
        <p>• Agt = {a1; …; an} is a finite, non-empty set of n agents;
• Q is a finite, non-empty set of states;
• Act is a finite set of atomic actions;
• Π is a set of atomic propositions (with p ∈ Π as a generic
proposition);
• π ∶ Π ↦ 2^Q is a propositional evaluation function
(determining the states in which a proposition holds);
• d ∶ Agt × Q ↦ P(Act) is a function that specifies the sets of
actions available to agents at each state;
• o is a transition function that assigns the outcome state
q′ = o(q; α1; …; αn) to a state q and a tuple of actions
αi ∈ d(ai; q), one for each ai ∈ Agt, that can be executed in q.</p>
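        <p>The components above can be sketched in code. The following is a minimal, illustrative Python encoding of a CGS; the class name, the toy two-agent model, and the action names are our own assumptions, not part of the formal machinery:</p>

```python
# A minimal sketch of a Concurrent Game Structure (CGS): agents, states,
# a valuation, per-agent available actions d, and a transition function o.
from itertools import product

class CGS:
    def __init__(self, agents, states, valuation, d, o):
        self.agents = agents        # Agt: finite, non-empty set of agents
        self.states = states        # Q: finite, non-empty set of states
        self.valuation = valuation  # pi: proposition to set of states where it holds
        self.d = d                  # d(agent, state): set of available actions
        self.o = o                  # o(state, joint_action_tuple): outcome state

    def joint_actions(self, q):
        """All joint actions (one action per agent, in agent order) at state q."""
        return list(product(*(self.d(a, q) for a in self.agents)))

# Toy 2-agent, 2-state model: the system moves to "q1" only if both agents
# play "work"; otherwise it stays in "q0".
M = CGS(
    agents=["a1", "a2"],
    states=["q0", "q1"],
    valuation={"done": {"q1"}},
    d=lambda a, q: {"work", "idle"},
    o=lambda q, ja: "q1" if q == "q0" and ja == ("work", "work") else q,
)
print(len(M.joint_actions("q0")))  # 4 joint actions at q0
```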
        <p>In a CGS M, state formulae φ ∶∶= p | ¬φ | φ ∧ φ (p ∈ Π)
specify properties of states, and path formulae ψ ∶∶= ◯φ |
φ U φ′ | ◻φ specify temporal properties over sequences of
states. Here, p ∈ Π is a proposition (an atomic property that may
hold in a state in Q), ¬ and ∧ are the standard logical
operators, ◯φ means that φ is true in the next state of M, φ U φ′
means that φ has to hold at least until φ′ becomes true, and ◻φ
means that φ is always true. We denote ¬◻¬φ by ◇φ. This
modality refers to the truth of φ at some point in time in the
future and is known as the “existence” or “sometimes-in-future”
modality. In our work, we specify tasks as path formulae and
(task) projects as sets of tasks (examples will be provided
later). ATL is generic for specifying temporal properties over
infinite sequences. However, due to the temporally finite
nature of tasks in real-life applications (e.g., most tasks have a
deadline), we follow [De Giacomo and Vardi, 2015], introduce
a finite notion of history, and base our accountability
reasoning on such finite traces. In the following, to improve
readability, we directly refer to elements of a specific (also
known as pointed) CGS M; e.g., as M is fixed, we write Q
instead of Q in M.</p>
        <p>Successors, Computations, and Histories: For two states q
and q′, we say q′ is a successor of q if there exist actions αi ∈
d(ai; q) for each ai ∈ Agt such that q′ = o(q; α1; …; αn), i.e.,
the agents in Agt can collectively guarantee in q that q′ will be the
next system state. A computation of a CGS M is an infinite
sequence of states λ = q0; q1; … such that, for all k &gt; 0, we
have that qk is a successor of qk−1. We refer to a computation
that starts in q as a q-computation. We denote the k’th state
in λ by λ[k], and λ[0; k] and λ[k; ∞] respectively denote
the finite prefix q0; …; qk and the infinite suffix qk; qk+1; … of
λ. Finally, we say a finite sequence of states q0; …; qn is a
q-history if qn = q, n ≥ 1, and for all 0 ≤ k &lt; n we have that
qk+1 is a successor of qk. We refer to any qk on a history h as
a member of h.</p>
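        <p>As a small illustration of the successor relation, the one-step successors of a state can be computed by enumerating joint actions. The toy model below is our own assumption, not the vaccination CGS:</p>

```python
# Sketch of the successor relation: q2 is a successor of q1 iff some joint
# action of all agents takes q1 to q2 under the transition function o.
from itertools import product

def successors(agents, d, o, q):
    """All states reachable from q in one joint step."""
    return {o(q, ja) for ja in product(*(sorted(d(a, q)) for a in agents))}

# Toy model (illustrative): both agents must play "work" to reach "q1".
agents = ["a1", "a2"]
d = lambda a, q: {"work", "idle"}
o = lambda q, ja: "q1" if q == "q0" and ja == ("work", "work") else q
print(successors(agents, d, o, "q0"))  # both q0 and q1 are reachable
```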
        <p>Strategies and Outcomes: A strategy for an agent a ∈ Agt is
a function ζa ∶ Q ↦ Act such that for all q ∈ Q, we have that
ζa(q) ∈ d(a; q). For a group of agents Γ ⊆ Agt, a collective
strategy ZΓ = {ζai | ai ∈ Γ} is an indexed set of strategies,
one for every ai ∈ Γ. Then, out(q; ZΓ) is defined as the set
of q-computations that agents in Γ can enforce by following
their corresponding strategies in ZΓ.</p>
        <p>Formulas of the language L_ATL are defined by the
following syntax: φ; ψ ∶∶= p | ¬φ | φ ∧ ψ | ⟪Γ⟫◯φ | ⟪Γ⟫φUψ |
⟪Γ⟫◻φ, where p ∈ Π is an atomic proposition and Γ ⊆ Agt
is a typical group of agents. Informally, ⟪Γ⟫◯φ means
that Γ has a strategy to ensure that the next state satisfies
φ; ⟪Γ⟫φUψ means that Γ has a strategy to ensure ψ while
maintaining the truth of φ; and ⟪Γ⟫◻φ means that Γ has
a strategy to ensure that φ is always true. The semantics of
ATL is defined relative to a CGS M and state q and is given
below:
• M; q ⊧ p iff q ∈ π(p);
• M; q ⊧ ¬φ iff M; q ⊭ φ;
• M; q ⊧ φ ∧ ψ iff M; q ⊧ φ and M; q ⊧ ψ;
• M; q ⊧ ⟪Γ⟫◯φ iff there exists a strategy ZΓ such that for all
computations λ ∈ out(q; ZΓ), M; λ[1] ⊧ φ;
• M; q ⊧ ⟪Γ⟫φUψ iff there exists a strategy ZΓ such that for all
computations λ ∈ out(q; ZΓ), for some i, M; λ[i] ⊧ ψ,
and for all j &lt; i, M; λ[j] ⊧ φ;
• M; q ⊧ ⟪Γ⟫◻φ iff there exists a strategy ZΓ such that for all
computations λ ∈ out(q; ZΓ), for all i, M; λ[i] ⊧ φ.</p>
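        <p>The one-step coalition modality ⟪Γ⟫◯φ can be checked naively by quantifying over joint actions. The sketch below is a brute-force illustration, not the polynomial-time labelling algorithm of [Alur et al., 2002]; the toy model and all names are our own assumptions:</p>

```python
# Brute-force check of M, q ⊧ ⟪Γ⟫◯φ: some fixed choice of actions for the
# coalition guarantees φ in the next state against all replies of the others.
from itertools import product

def has_next_strategy(agents, d, o, q, gamma, phi):
    """phi is a predicate over states; gamma is a subset (list) of agents."""
    others = [a for a in agents if a not in gamma]
    for ga in product(*(sorted(d(a, q)) for a in gamma)):       # coalition choice
        choice = dict(zip(gamma, ga))
        ok = True
        for oa in product(*(sorted(d(a, q)) for a in others)):  # all replies
            choice.update(zip(others, oa))
            joint = tuple(choice[a] for a in agents)            # agent order
            if not phi(o(q, joint)):
                ok = False
                break
        if ok:
            return True
    return False

# Toy model: reaching "q1" needs both agents, so {a1} alone cannot enforce
# "done" in the next state, but {a1, a2} can.
agents = ["a1", "a2"]
d = lambda a, q: {"work", "idle"}
o = lambda q, ja: "q1" if q == "q0" and ja == ("work", "work") else q
done = lambda s: s == "q1"
print(has_next_strategy(agents, d, o, "q0", ["a1"], done))        # False
print(has_next_strategy(agents, d, o, "q0", ["a1", "a2"], done))  # True
```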
        <p>Moreover, given a formula φ, we denote by ⟦φ⟧M the
set of states in which φ holds.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Verifiably Accountable Teams</title>
      <p>We assume a function T^d_M(Γ; φ; q) that has direct access to
the matrix of already allocated tasks and returns 1 if a given
state formula φ is expected to be delivered by a team Γ ⊆ Agt
by a particular state q ∈ Q, and returns 0 otherwise. Then
T^d_{M;φ;q} denotes the set of such teams. (In this formulation, we
represent single agents as singleton groups.) Accordingly, to
keep track of the point of allocation we use T^a_M(Γ; φ; q), in
which returning 1 means that in q, team Γ received the task to
fulfil φ. Then T^a_{M;φ;q} denotes the set of such teams. If a project
P is allocated to Γ, then T^a_M(Γ; φ; q) = 1 for all φ ∈ P. Here, the
superscripts “a” and “d” are a part of the function name to
distinguish whether q is the state in which a task is allocated
(a) or expected to be delivered (d). For instance, if the task of
delivering a box of vaccines (denoted by v) is allocated to agent
group {Alice; Bob} in state q1 and expected to be fulfilled
by state q4, then T^a_M({Alice; Bob}; v; q1) = 1 (determining
the point of task allocation) and T^d_M({Alice; Bob}; v; q4) =
1 (determining the expected point of task delivery). These
auxiliary notions allow defining the task-oriented notion of
accountability as follows.</p>
      <p>Definition 1. In a multiagent system modelled by CGS M,
let φ be a state formula, q be a state, h = q0; …; qn (qn = q)
be the materialised q-history, and Γ be a team of agents. We
say Γ is weakly q-accountable for φ based on h iff:
1. q ∉ ⟦φ⟧M,
2. T^d_M(Γ; φ; q) = 1, and
3. there exist qi; qj (i ≤ j) ∈ h such that T^a_M(Γ; φ; qi) = 1
and Γ has a strategy in qj to ensure that M; q ⊧ φ.</p>
      <p>Moreover, a team Γ is q-accountable for φ based on h
iff it is weakly q-accountable and there exists no weakly
q-accountable Γ′ ⊂ Γ for φ. Analogously, Γ is (weakly)
q-accountable for a project P based on h iff it is (weakly)
q-accountable for all φ ∈ P.</p>
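        <p>Definition 1 can be read directly as a checking procedure. The sketch below assumes oracle functions for the allocation matrices and for the ATL capability check; all function names are illustrative stand-ins, not part of the formal definition:</p>

```python
# A direct reading of Definition 1's three conditions for weak q-accountability.

def weakly_accountable(team, phi_holds_in_q, Td, Ta, history, can_enforce):
    """history = [q0, ..., qn] with qn = q (the materialised q-history).
    Td(team): 1 if team was expected to deliver phi by q (condition 2).
    Ta(team, state): 1 if team received the task in that state.
    can_enforce(team, state): ATL check that team can ensure phi from state.
    """
    if phi_holds_in_q:                  # condition 1: phi must have failed in q
        return False
    if Td(team) != 1:                   # condition 2: team tasked to deliver phi
        return False
    n = len(history)
    for i in range(n):                  # condition 3: allocated at q_i, and
        if Ta(team, history[i]) != 1:
            continue
        for j in range(i, n):           # able at some q_j no earlier than q_i
            if can_enforce(team, history[j]):
                return True
    return False

# Toy run: the task failed, the team was tasked in q0 and able in q0.
h = ["q0", "q1"]
print(weakly_accountable(
    team=("a1", "a3"),
    phi_holds_in_q=False,
    Td=lambda t: 1,
    Ta=lambda t, s: 1 if s == "q0" else 0,
    history=h,
    can_enforce=lambda t, s: s == "q0",
))  # True
```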
      <p>Informally, a team is (weakly) q-accountable for φ only if
φ is not the case in q while the team was tasked with, and able to
see to it, that φ holds in q. Distinguishing the weak form of
accountability allows realising who the core members of a team are,
i.e., those minimally accountable for a failed task. To have a
reasonably fair degree of accountability, we later focus on this
minimal group. Moreover, note that tasks in a project may be
fulfilled through a path and not necessarily jointly in a particular
state. This reduces to the availability of a path (and
accordingly a strategy) such that the tasks are satisfied through
the path. If joint delivery is required in a domain, tasks can
be bound together in a conjunctive form and defined as a
single task—and not as different members of a project. In
general, tasks in a project can be delivered sequentially. The
temporal modality inside each task specification determines
the temporal requirements on when it should be delivered.</p>
      <sec id="sec-3-1">
        <title>3.1 Accountability Reasoning in Practice</title>
        <p>Our vaccination scenario can be modelled as the 3-state
partial CGS presented in Figure 1. (We say partial as it depicts
only some of the states, necessary for our accountability
reasoning, and not all of the possible states.) As discussed,
various task allocation processes can be used. Imagine the case
that in q0, the delivery task with the temporal expectation that
vaccines should be delivered immediately, i.e., ◯φD, is
allocated to {a1; a3}, and then in qD, the task to inject vaccines
immediately, i.e., ◯φI, is allocated to {a4; a5}.</p>
        <p>Given these allocations, if we reach q1, we have the
history h = q0; q1 and the delivery team {a1; a3} is
q1-accountable for φD as they were tasked with, and able to,
deliver all 6 units of vaccine. However, in this situation, no team
is q1-accountable for φI as the task to inject was specifically
allocated in qD, which is not a state in the materialised history.
However, the allocation process could be less granular and
(instead of micro-managing each task in particular states and
efficiently allocating them to only one group) could have allocated
the project P = {◯φD; ◯◯φI} to Agt in q0. This means
that all the agents in Agt are expected to ensure that vaccines
are delivered in the immediate next state after q0 and then are
all injected one immediate state further. Then in q1, the team
Agt would be weakly q1-accountable for both the non-delivery and
the non-injection. Next, we present the generalised form of such
properties on the relation between (weakly) accountable teams
at the task level and the project level.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2 Properties</title>
        <p>As discussed, verifying if a team is accountable for a
particular task φ is conditioned on whether they were
tasked to fulfil φ. Thus, to verify accountability for a task, it
is crucial to consider that the allocation process may give a
task to more than one team (with the aim of having a level of
resiliency). We refer to the number of distinguishable teams
that are tasked to fulfil φ as its degree of resilience (as a result
of introducing redundancy) and denote it by DR(φ). For
instance, if the task to deliver the required units of vaccine (φD)
is allocated to two teams, then DR(φD) = 2.</p>
        <p>Proposition 1. If Γ1 and Γ2 are q-accountable for φ and
DR(φ) = 1, then Γ1 = Γ2.</p>
        <p>Proof. The minimality condition for accountability implies
that Γ1 and Γ2 have no excess members, so neither team
can be a proper subset of the other: they either fully overlap
or are distinct teams. Considering that the degree of resilience
is 1, the former is the case.
◻</p>
        <p>The proposition shows that accountability is a strong
concept as it requires the team to be a minimal weakly
accountable team of agents. As a corollary we have:
Corollary 1. If Γ is q-accountable for φ then any Γ′ ⊃ Γ is
weakly q-accountable for φ.</p>
        <p>Next, we show that a degree of resilience DR(φ) = k
implies having k teams weakly q-accountable for φ based on
h if the allocation process is suitable in the sense of
[Yazdanpanah et al., 2020]. Formally, suitability indicates that if
T^a_M(Γ; φ; q) = 1 then M; q ⊧ ⟪Γ⟫φ.</p>
        <p>Proposition 2. Let Γ be a q-accountable team for φ based
on h. Given a suitable task allocation, if DR(φ) = k then there
exist k − 1 teams Γ′ ≠ Γ weakly q-accountable for φ based on
h.</p>
        <p>Proof. The suitability of the allocation process implies that
the other k − 1 teams have a strategy to see to it that φ is the case.
The minimality condition is not necessarily satisfied, as a
suitable allocation process may give a task to a team and its
super-team(s). Such teams possess a strategy to fulfil the task but
do not necessarily satisfy the minimality condition. Thus, in
accordance with Corollary 1, we have the result on these teams
being weakly accountable but not necessarily accountable for
φ.
◻</p>
        <p>Note that this result holds even if the teams received the
task in question in different states through the history (and
not necessarily in the same state). We leave studying further
dynamics of task allocation in combination with the notion of
accountability, and how the coherency aspects of task allocation
affect the accountability ascription problem, to future research.</p>
        <p>Moving to project-level accountability, we have:
Proposition 3. If Γ is weakly q-accountable for P, then there
exists a q-accountable team for every φ ∈ P.</p>
        <p>Proof. In case Γ is q-accountable for the project, it is
the unique minimal team and accordingly q-accountable for
all the involved tasks. In case it is only a weakly q-accountable
team, it has a strategy to fulfil every φ from a point of
allocation through the history; however, it is not minimal. For
each φ, eliminating excess members guarantees the existence
of a minimal team.
◻</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3 Decidability</title>
        <p>In this section, we show that determining if a team is
accountable for a task is a decidable problem by proving the
following theorem in a constructive way.</p>
        <p>Theorem 1. The accountability verification problem in
ATL-modelled multiagent systems is decidable.</p>
        <p>To prove decidability, we give Algorithm 1 that, given a
multiagent system model M, a task allocation (accessible via
T^d_M(Γ; φ; q) and T^a_M(Γ; φ; q)), a task φ (a path formula),
and a q-history h = q0; …; ql (q = ql), returns the set of weakly
q-accountable teams for φ based on h.</p>
        <p>
          In this procedure, to verify if a team Γ (among the teams
that were expected to bring about φ) is accountable, we go
through the states of history h and use ATL model checking
from [Alur et al., 2002] to determine whether Γ was capable
of fulfilling the task. Note that we take a generic
approach, as the procedure does not rely on a specific degree of
resilience and relaxes the assumption that the allocation
process was a suitable one
          <xref ref-type="bibr" rid="ref25 ref30 ref5 ref6">(in the sense of [Yazdanpanah et al.,
2020])</xref>
          .
        </p>
        <p>Next, we present computational complexity results for
accountability verification.</p>
        <p>[Figure 1: A partial CGS for the vaccination scenario. The
recoverable elements include the initial state q0 (labelled ¬φD; ¬φI),
a state qI (labelled ¬φD; φI), and transitions labelled by joint-action
tuples such as ⟨D; ⋆; D; ⋆; ⋆; ⋆⟩ and ⟨⋆; ⋆; ⋆; I; I; ⋆⟩.]</p>
        <p>Algorithm 1: Verifying Weakly q-Accountable Teams.
Input: Model M; q ∈ Q; task φ; set T^d_{M;φ;q}; and history
h = q0; …; ql (q = ql).
Result: Acc^h_{M;φ;q}, the set of weakly q-accountable
teams for φ based on h.
1 Acc^h_{M;φ;q} ← ∅;
2 if q ∉ ⟦φ⟧M then
3   forall Γ ∈ T^d_{M;φ;q} do
4     for i = 1 to l do
5       if M; ql−i ⊧ ⟪Γ⟫φ <xref ref-type="bibr" rid="ref3">(standard, see [Alur et
al., 2002])</xref> then
6         Acc^h_{M;φ;q} ← Acc^h_{M;φ;q} ∪ {Γ};
7 return Acc^h_{M;φ;q};</p>
        <p>In this section, we establish the complexity of the presented
accountability verification (Algorithm 1). One of the main
advantages of accountability reasoning using ATL semantics
is that its (task) formulae, and in turn the process for verifying
accountability, can be model-checked in deterministic linear
time [Alur et al., 2002; Bulling et al., 2010].</p>
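        <p>For concreteness, Algorithm 1 can be sketched in Python, with models(team, state) standing in for the ATL model-checking call of [Alur et al., 2002] and the failed flag for the condition q ∉ ⟦φ⟧M; all names here are our own illustrative assumptions:</p>

```python
# Sketch of Algorithm 1: collect every tasked team that, at some state of the
# materialised history, had a strategy to enforce the (now failed) task.

def weakly_accountable_teams(tasked_teams, history, failed, models):
    """tasked_teams: the set of teams expected to deliver phi by q.
    history = [q0, ..., ql] with q = ql; failed: whether phi failed in q."""
    acc = set()
    if not failed:
        return acc                    # phi was delivered: nobody is accountable
    l = len(history) - 1
    for team in tasked_teams:         # every team expected to deliver phi
        for i in range(1, l + 1):     # walk the history backwards from q
            if models(team, history[l - i]):
                acc.add(team)
                break
    return acc

# Toy run: two tasked teams; only the first could enforce phi somewhere on h.
teams = {("a1", "a3"), ("a1", "a2")}
out = weakly_accountable_teams(
    tasked_teams=teams,
    history=["q0", "q1"],
    failed=True,
    models=lambda team, state: team == ("a1", "a3") and state == "q0",
)
print(out)  # only ("a1", "a3") qualifies
```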
        <p>Theorem 2. Accountability verification in ATL-modelled
multiagent systems is P-complete, and can be done in time
O(l ⋅ DR(φ) ⋅ |M| ⋅ |φ|), where |M| is given by the number of
transitions in M, l is the length of the history, and DR(φ) is
the degree of resilience for φ.</p>
        <p>Proof. The complexity of the model-checking part (line 5 of
Algorithm 1) is given by the complexity of model
checking ATL [Alur et al., 2002], which is polynomial and can be
done in O(|M| ⋅ |φ|). In Algorithm 1, we call this
model checking for every member of T^d_{M;φ;q} (with cardinality
equal to DR(φ)) through all the states of history h, which has
length l.
◻</p>
        <p>And for project-level accountability verification, we have
the following result.</p>
        <p>Proposition 4. Verifying if a team is accountable for a
project P takes |P| times the task complexity (Theorem 2) for
verifying the longest φ ∈ P.</p>
        <p>This shows a desirable tractability for verifying
project-level accountability, as it requires |P| calls to Algorithm 1.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>A Fair Degree of Accountability</title>
      <p>Accountability voids are situations in which a task is
unfulfilled and a (non-singleton) team of agents or, more
problematically, various teams of agents are found to be accountable
for it. For instance, imagine that we allocate the task to
deliver vaccines (Figure 1) to {a1; a2} and also to {a1; a3} (to
have a degree of resilience equal to 2 for this task). Then,
as the system evolves, if we realise that the vaccines are
not delivered, we will have two accountable teams. The
question is then: “to what extent is each of the team members
accountable?” As agent a1 is a member of both of the
accountable teams, it seems unreasonable to deem all
three agents equally accountable and to ascribe accountability—
and, eventually, potential sanctioning measures or penalties—in a
uniform way.</p>
      <p>In this section, we present a novel rule-based method, with
a tractable complexity, for ascribing accountability. This
method corresponds to Marginal Contribution Networks (MC
Nets) [Ieong and Shoham, 2005] and, in turn, provides a
computationally tractable way to compute a degree of
accountability that satisfies the Shapley-based notion of
fairness [Shapley, 1953].</p>
      <p>There are recent attempts to address this problem using
Shapley-based cost allocation [Yazdanpanah et al., 2019;
Friedenberg and Halpern, 2019]. However, such approaches are
operationally infeasible due to the non-tractable complexity of the
standard method for computing the Shapley value. See more on
related work and the positioning of our contribution in Section 5.</p>
      <sec id="sec-4-1">
        <title>4.1 A Rule-Based Accountability Ascription</title>
        <p>For accountability ascription in multiagent teams, we present
a two-phase procedure. For readability, we present these two
phases separately (in Algorithm 2 and Definition 2). This
separation also allows computing the degree of accountability
of each agent in a modular way. This results in a tractable
complexity, as we do not need to go through all the agents and
compute all the degrees. The first phase takes the randomly
indexed set Acc^h_{M;φ;q} of accountable teams and generates a
set of accountability rules.</p>
        <sec id="sec-4-1-1">
          <title>Algorithm 2: Generating Accountability Rules</title>
          <p>Input: Randomly indexed Acc^h_{M;φ;q}.</p>
          <p>Result: R^h_{M;φ;q}, the set of accountability rules for φ.
1 k ← |Acc^h_{M;φ;q}|;
2 R^h_{M;φ;q} ← ∅;
3 for i = 1 to k do
4   rule ← i ∶ (Γi; ∅) ↦ 1/k;
5   R^h_{M;φ;q} ← R^h_{M;φ;q} ∪ {rule};
6 end
7 return R^h_{M;φ;q};</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
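      <p>A compact Python sketch of Algorithm 2 follows; representing each rule as a (positive set, negative set, value) triple is our own encoding choice:</p>

```python
# Sketch of Algorithm 2: each of the k accountable teams yields one rule
# i: (team_i; {}) mapped to 1/k, in the MC-Nets style used in the text.
from fractions import Fraction

def generate_rules(accountable_teams):
    """accountable_teams: randomly indexed list of teams."""
    k = len(accountable_teams)
    return {
        i: (set(team), set(), Fraction(1, k))  # rule i: (P_i, N_i) and value v_i
        for i, team in enumerate(accountable_teams, start=1)
    }

rules = generate_rules([("a1", "a2"), ("a1", "a3")])
print(len(rules), rules[1][2])  # 2 rules, each with value 1/2
```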
      <p>As a result, this process generates k rules, where k is the
size of Acc^h_{M;φ;q}. In Section 4.3, we will show that the
presented process generates a rule-based representation of a
cooperative game, use this auxiliary game to compute the
contribution of individuals to accountable groups, and formulate
their individual degree of accountability. In each rule of the form
i ∶ (Pi; Ni) ↦ vi, we refer to i as the title/index of the i-th
rule, Pi as the positive set of the rule, Ni as the negative set of
the rule, and vi as the value of the rule. Intuitively, by
assigning 1/k to each of the k accountable groups (in each rule i
generated by Algorithm 2), we consider all groups in Acc^h_{M;φ;q}
equally accountable, while the contribution of agents to such groups
is the basis for computing their individual degree of accountability
(in Definition 2).</p>
    </sec>
    <sec id="sec-6">
      <p>Definition 2. Let R^h_{M;φ;q} be the set of q-accountability rules
generated by Algorithm 2 for φ based on h. For agent a ∈ Agt,
we say a rule r is applicable if a ∈ Pr, and by ω(a) we denote the
set of rule indices that are applicable to a. Then we say agent
a’s degree of q-accountability for φ based on h, denoted by
acc^h_{M;φ;q}(a), is equal to 0 if ω(a) = ∅ and to
∑_{r∈ω(a)} vr/|Pr| otherwise.</p>
      <p>Analogously, an agent’s degree of accountability for a project
is computable based on the set of accountability rules
generated for the project in question. In the following, we
apply this notion to our vaccination example and present the
fairness properties as well as the computational complexity
of computing this degree.</p>
      <sec id="sec-6-1">
        <title>4.2 Accountability Ascription in Practice</title>
        <p>In the vaccination scenario, in order to ascribe degrees of
accountability to agents, we first generate the rules that
correspond to the set of accountable teams (in Acc^h_{M;φ;q}). In
this case, as both {a1; a2} and {a1; a3} are accountable, we will
have two rules in R^h_{M;φ;q} = {1 ∶ ({a1; a2}; ∅) ↦ 1/2; 2 ∶
({a1; a3}; ∅) ↦ 1/2}. Then, for all the agents, the
second case of Definition 2 applies as they are all members
of a positive set in a rule. Computing the share that each
agent gets from its applicable rules, we have that
acc^h_{M;φ;q}(a1) = 1/2 and acc^h_{M;φ;q}(a2) =
acc^h_{M;φ;q}(a3) = 1/4.</p>
        <p>As observed, this degree is responsive to the larger contribution of a1 (as it could contribute to two accountable teams) and to the symmetric presence of a2 and a3 (each contributing to only one accountable team). Note that although a2 and a3 had different delivery capacities, they received similar tasks in this scenario, i.e., to cooperate with a1 and provide the vaccine unit that a1 could not deliver.</p>
        <p>These desirable properties, generally known as fairness properties in the game-theoretic literature, are not specific to this scenario but hold in general for this degree of accountability.</p>
      </sec>
      <sec id="sec-6-2">
        <title>4.3 Fairness Properties</title>
        <p>The following theorem shows that the presented degree of accountability satisfies all the Shapley-based fairness axioms [Shapley, 1953].</p>
        <p>Theorem 3. The presented degree of accountability acc^{h,φ}_{M,q}(a) in Definition 2 guarantees the following fairness axioms: (1) ∑_{ai} acc^{h,φ}_{M,q}(ai) = 1 (Efficiency); (2) for any agents ai and aj, acc^{h,φ}_{M,q}(ai) = acc^{h,φ}_{M,q}(aj) if for every C ∈ Acc^{h,φ}_{M,q} we have that ai ∈ C ⟺ aj ∈ C (Symmetry); (3) acc^{h,φ}_{M,q}(ai) = 0 if for every C ∈ Acc^{h,φ}_{M,q} we have that ai ∉ C (Dummy Player); (4) the summation of ai's degrees of accountability for k different tasks φ1, …, φk is k times its degree for {φ1, …, φk} (Additivity).</p>
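        <p>To connect the axioms to the running example, one can brute-force the Shapley value of the cooperative game induced by the rules (a coalition's worth is the total value of the rules whose positive sets it contains) and check Efficiency and Symmetry numerically. The encoding below is an illustrative sketch under these assumptions, not the paper's code.</p>
        <preformat>
```python
from fractions import Fraction
from itertools import permutations

agents = ("a1", "a2", "a3")
rules = [(frozenset({"a1", "a2"}), Fraction(1, 2)),
         (frozenset({"a1", "a3"}), Fraction(1, 2))]

def worth(coalition):
    # A rule fires exactly when its positive set lies inside the coalition.
    return sum((value for pos, value in rules if pos <= coalition),
               Fraction(0))

def shapley(agent):
    # Average marginal contribution of the agent over all join orders.
    orders = list(permutations(agents))
    total = Fraction(0)
    for order in orders:
        before = frozenset(order[:order.index(agent)])
        total += worth(before | {agent}) - worth(before)
    return total / len(orders)

values = {a: shapley(a) for a in agents}
print(values)                        # matches the rule-based degrees
assert sum(values.values()) == 1     # Efficiency
assert values["a2"] == values["a3"]  # Symmetry of a2 and a3
```
        </preformat>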
        <p>To prove this, we use the following lemmas and show that the presented set of rules constitutes a basic Marginal Contribution Net (MC Net) [Ieong and Shoham, 2005] and that the degree corresponds to the Shapley value of agents in this MC Net. Accordingly, our accountability degree satisfies the four axiomatic fairness properties that uniquely characterise the Shapley value.</p>
        <p>Lemma 1. R^{h,φ}_{M,q} is a basic Marginal Contribution Net (MC Net) [Ieong and Shoham, 2005].</p>
        <p>Proof. The set of rules in R^{h,φ}_{M,q} corresponds to the set-theoretic representation of MC Nets in [Ohta et al., 2009]. ◻</p>
        <p>Note that in each rule, the intersection of the positive set and the negative set is empty by definition (P_i ∩ N_i = ∅). This allows a linear-time computation of the Shapley value of the MC Net.</p>
        <p>Lemma 2. For each agent ai, acc^{h,φ}_{M,q}(ai) computes the Shapley value of ai in R^{h,φ}_{M,q}.</p>
        <p>Proof. In R^{h,φ}_{M,q}, rules only consist of positive literals [Ieong and Shoham, 2005]. In such an MC Net, the Shapley value of each agent is equal to the summation of its Shapley values in all the applicable rules. ◻</p>
        <p>Next, we show the desirable complexity of computing the degree of accountability.</p>
        <p>Theorem 4. The total running time for computing acc^{h,φ}_{M,q}(a) is linear in the size of the input.</p>
        <p>Proof. We show this by first focusing on the computation of the degree itself and then on the complexity of generating R^{h,φ}_{M,q} as its input. To compute the degree of any agent, we compute its Shapley value in each applicable rule and then sum over all the applicable rules. In MC Nets with positive literals, each agent's Shapley value in a rule is equal to the value of the rule divided by the number of members in its positive set P [Ieong and Shoham, 2005]. Moreover, the upper bound for the number of rules is the degree of resilience of the task. Thus, as the Shapley value in a given rule can be computed in time linear in the pattern of the rule, the total running time for computing the degree is linear in the size of the input. Finally, the preceding rule-generation process iterates through the set of accountable teams, whose number is at most equal to the degree of resilience, and thus does not affect the computing time. ◻</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Related Work</title>
      <p>In relation to past work, in particular recent work on logic-based responsibility reasoning in multiagent settings [Friedenberg and Halpern, 2019; Yazdanpanah et al., 2019], we focused on verifying the task-based notion of accountability (as a specific form of responsibility reasoning), while [Friedenberg and Halpern, 2019] study the epistemic notion of blameworthiness and [Yazdanpanah et al., 2019] focus on the responsibility of agents with imperfect information. In comparison to these, we assumed perfect information and focused on task dynamics and the task-oriented notion of accountability. As discussed in [Yazdanpanah et al., 2021a], task-based accountability focuses on reasoning about agents that received a task but failed to deliver it, while responsibility in its generic form is about the ability to cause or avoid a state of affairs, and blameworthiness is concerned with agents who not only caused a situation but did so knowingly.</p>
      <p>
        In this work, we used finite histories as a natural choice for modelling the temporally-bounded concept of a task. This corresponds with the so-called provenance traces [Tsakalakis et al., 2020], commonly used for reasoning about the reasons behind a materialised situation and for providing behaviour-aware explanations. Finally, we share the perspective of [Friedenberg and Halpern, 2019; Yazdanpanah et al., 2019] on the applicability of cost-sharing methods, such as the Shapley value, for ascribing responsibility (in our case, accountability). However, in comparison to the standard Shapley calculation with its computationally intractable complexity, our rule-based representation resulted in a computationally tractable method for ascribing accountability degrees. Interestingly, this low-complexity accountability ascription process is also applicable to handling the complexity in imperfect information settings
        <xref ref-type="bibr" rid="ref1 ref20 ref29 ref4">(e.g., in combination with [Yazdanpanah et al., 2019])</xref>
        as it is a module that comes after the verification of accountable teams, and hence does not require any changes to their model-checking under imperfect information.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Conclusions</title>
      <p>We proposed a formal account of the notion of accountability
and presented ATL-based techniques to verify accountability
in multiagent settings and ascribe a fair degree of
accountability to individual agents. Based on a novel rule-based
representation, we developed a fair and computationally tractable
degree for resolving accountability voids in multiagent teams.</p>
      <p>The results of this study and developed accountability
ascription method also contribute to integrating ethics into AI
systems and ensuring their safety and trustworthiness. In
particular, as discussed in [Yazdanpanah et al., 2021a], developing computational tools to verify and reason about different forms of responsibility and task-oriented accountability is necessary for the design and development of safe and trustworthy AI. Such AI systems are expected to make autonomous
decisions and, at the same time, need to make sure that their
decisions are in compliance with safety concerns and ethical
values. To that end, we need to enrich such systems (and
their behaviour monitoring units) with the capacity to
contemplate how accountabilities, for potential consequences of
such decisions, are to be ascribed. Furthermore, embedding
accountability reasoning into AI systems contributes to
providing transparency on who is, and to what extent they are,
accountable for potentially undesirable behaviour of a given
AI-based product [Winfield and Jirotka, 2018].</p>
      <p>In this work, we focused on teams of agents assuming their
intra-team capacity to coordinate towards the fulfilment of
tasks. An interesting extension would be to consider the
feasibility of team formations based on the value of inter-agent
interactions [Beal et al., 2020]. This way, we can reason about
feasible team structures in the presence of inter-agent incompatibilities and study their dynamics with the accountability ascription problem in multiagent teams.</p>
      <sec id="sec-8-1">
        <title>Acknowledgements</title>
        <p>This work was supported by the UK Engineering and
Physical Sciences Research Council (EPSRC) through the
Trustworthy Autonomous Systems Hub (EP/V00784X/1), the
platform grant entitled “AutoTrust: Designing a Human-Centred
Trusted, Secure, Intelligent and Usable Internet of Vehicles”
(EP/R029563/1), and the Turing AI Fellowship on Citizen-Centric AI Systems (EP/V022067/1).</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Abeywickrama et al.,
          <year>2019</year>
          ] Dhaminda
          <string-name>
            <given-names>B Abeywickrama</given-names>
            ,
            <surname>Corina Cirstea</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Sarvapali D</given-names>
            <surname>Ramchurn</surname>
          </string-name>
          .
          <article-title>Model checking human-agent collectives for responsible AI</article-title>
          .
          <source>In Proceedings of Robot and Human Interactive Communication</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [Alechina et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Natasha</given-names>
            <surname>Alechina</surname>
          </string-name>
          , Joseph Y. Halpern, and
          <string-name>
            <given-names>Brian</given-names>
            <surname>Logan</surname>
          </string-name>
          .
          <article-title>Causality, responsibility and blame in team plans</article-title>
          .
          <source>In Proceedings of AAMAS-2017</source>
          , pages
          <fpage>1091</fpage>
          -
          <lpage>1099</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Alur et al.,
          <year>2002</year>
          ]
          <string-name>
            <given-names>Rajeev</given-names>
            <surname>Alur</surname>
          </string-name>
          , Thomas A.
          <string-name>
            <surname>Henzinger</surname>
            , and
            <given-names>Orna</given-names>
          </string-name>
          <string-name>
            <surname>Kupferman</surname>
          </string-name>
          .
          <article-title>Alternating-time temporal logic</article-title>
          .
          <source>J. ACM</source>
          ,
          <volume>49</volume>
          (
          <issue>5</issue>
          ):
          <fpage>672</fpage>
          -
          <lpage>713</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Baldoni et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Matteo</given-names>
            <surname>Baldoni</surname>
          </string-name>
          , Cristina Baroglio, Olivier Boissier, Roberto Micalizio, and
          <string-name>
            <given-names>Stefano</given-names>
            <surname>Tedeschi</surname>
          </string-name>
          .
          <article-title>Accountability and responsibility in multiagent organizations for engineering business processes</article-title>
          .
          <source>In Proceedings of EMAS</source>
          , pages
          <fpage>3</fpage>
          -
          <lpage>24</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Baldoni et al.,
          <year>2020</year>
          ]
          <string-name>
            <given-names>Matteo</given-names>
            <surname>Baldoni</surname>
          </string-name>
          , Cristina Baroglio, and
          <string-name>
            <given-names>Roberto</given-names>
            <surname>Micalizio</surname>
          </string-name>
          .
          <article-title>Fragility and robustness in multiagent systems</article-title>
          .
          <source>In Proceedings of EMAS</source>
          , pages
          <fpage>61</fpage>
          -
          <lpage>77</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [Beal et al.,
          <year>2020</year>
          ]
          <string-name>
            <given-names>Ryan</given-names>
            <surname>Beal</surname>
          </string-name>
          , Narayan Changder, Timothy Norman, and
          <string-name>
            <given-names>Sarvapali</given-names>
            <surname>Ramchurn</surname>
          </string-name>
          .
          <article-title>Learning the value of teamwork to form efficient teams</article-title>
          .
          <source>In Proceedings of AAAI2020</source>
          , volume
          <volume>34</volume>
          , pages
          <fpage>7063</fpage>
          -
          <lpage>7070</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>[Belle</source>
          , 2017]
          <string-name>
            <given-names>Vaishak</given-names>
            <surname>Belle</surname>
          </string-name>
          .
          <article-title>Logic meets probability: Towards explainable ai systems for uncertain worlds</article-title>
          .
          <source>In IJCAI</source>
          , pages
          <fpage>5116</fpage>
          -
          <lpage>5120</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>[Braham and van Hees</source>
          ,
          <year>2011</year>
          ]
          <string-name>
            <given-names>Matthew</given-names>
            <surname>Braham</surname>
          </string-name>
          and Martin van Hees.
          <article-title>Responsibility voids</article-title>
          .
          <source>The Philosophical Quarterly</source>
          ,
          <volume>61</volume>
          (
          <issue>242</issue>
          ):
          <fpage>6</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Bulling et al.,
          <year>2010</year>
          ]
          <string-name>
            <given-names>Nils</given-names>
            <surname>Bulling</surname>
          </string-name>
          , Jurgen Dix, and
          <string-name>
            <given-names>Wojciech</given-names>
            <surname>Jamroga</surname>
          </string-name>
          .
          <article-title>Model checking logics of strategic ability: Complexity</article-title>
          .
          <source>In Specification and Verification of Multiagent Systems</source>
          , pages
          <fpage>125</fpage>
          -
          <lpage>159</lpage>
          . Springer,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>[Chockler and Halpern</source>
          , 2004]
          <string-name>
            <given-names>Hana</given-names>
            <surname>Chockler</surname>
          </string-name>
          and Joseph Y Halpern.
          <article-title>Responsibility and blame: A structural-model approach</article-title>
          .
          <source>JAIR</source>
          ,
          <volume>22</volume>
          :
          <fpage>93</fpage>
          -
          <lpage>115</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [De Giacomo and Vardi, 2015] Giuseppe De Giacomo and
          <string-name>
            <given-names>Moshe</given-names>
            <surname>Vardi</surname>
          </string-name>
          .
          <article-title>Synthesis for LTL and LDL on finite traces</article-title>
          .
          <source>In Proceedings of IJCAI-2015</source>
          . Citeseer,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <source>[EC: The High-Level Expert Group on AI</source>
          ,
          <year>2019</year>
          ]
          <article-title>EC: The High-Level Expert Group on AI. Ethics guidelines for trustworthy AI</article-title>
          . https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai,
          <year>2019</year>
          . Accessed: 2021-05-01.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <source>[Friedenberg and Halpern</source>
          , 2019]
          <string-name>
            <given-names>Meir</given-names>
            <surname>Friedenberg</surname>
          </string-name>
          and Joseph Y Halpern.
          <article-title>Blameworthiness in multi-agent settings</article-title>
          .
          <source>In Proceedings of AAAI-2019</source>
          , pages
          <fpage>525</fpage>
          -
          <lpage>532</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <source>[Hart</source>
          , 2008]
          <article-title>Herbert Lionel Adolphus Hart. Punishment and responsibility: Essays in the philosophy of law</article-title>
          . Oxford University Press,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <source>[Ieong and Shoham</source>
          , 2005]
          <string-name>
            <given-names>Samuel</given-names>
            <surname>Ieong</surname>
          </string-name>
          and
          <string-name>
            <given-names>Yoav</given-names>
            <surname>Shoham</surname>
          </string-name>
          .
          <article-title>Marginal contribution nets: a compact representation scheme for coalitional games</article-title>
          .
          <source>In Proceedings of ECommerce-2005</source>
          , pages
          <fpage>193</fpage>
          -
          <lpage>202</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [Jennings et al.,
          <year>2014</year>
          ] Nicholas R Jennings, Luc Moreau, David Nicholson,
          <string-name>
            <given-names>Sarvapali</given-names>
            <surname>Ramchurn</surname>
          </string-name>
          , Stephen Roberts, Tom Rodden, and
          <string-name>
            <given-names>Alex</given-names>
            <surname>Rogers</surname>
          </string-name>
          .
          <article-title>Human-agent collectives</article-title>
          .
          <source>Communications of the ACM</source>
          ,
          <volume>57</volume>
          (
          <issue>12</issue>
          ):
          <fpage>80</fpage>
          -
          <lpage>88</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <source>[Kalenka and Jennings</source>
          , 1999]
          <string-name>
            <given-names>Susanne</given-names>
            <surname>Kalenka and Nicholas R Jennings.</surname>
          </string-name>
          <article-title>Socially responsible decision making by autonomous agents</article-title>
          .
          <source>In Cognition, Agency and Rationality</source>
          , pages
          <fpage>135</fpage>
          -
          <lpage>149</lpage>
          . Springer,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [Lomuscio et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Alessio</given-names>
            <surname>Lomuscio</surname>
          </string-name>
          , Hongyang Qu, and
          <string-name>
            <given-names>Franco</given-names>
            <surname>Raimondi</surname>
          </string-name>
          .
          <article-title>MCMAS: an open-source model checker for the verification of multi-agent systems</article-title>
          .
          <source>Int. J. Softw. Tools Technol</source>
          . Transf.,
          <volume>19</volume>
          (
          <issue>1</issue>
          ):
          <fpage>9</fpage>
          -
          <lpage>30</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [Macarthur et al.,
          <year>2011</year>
          ]
          <string-name>
            <given-names>Kathryn</given-names>
            <surname>Sarah</surname>
          </string-name>
          <string-name>
            <given-names>Macarthur</given-names>
            , Ruben Stranders, Sarvapali Ramchurn, and
            <surname>Nicholas</surname>
          </string-name>
          <string-name>
            <given-names>R.</given-names>
            <surname>Jennings</surname>
          </string-name>
          .
          <article-title>A distributed anytime algorithm for dynamic task allocation in multi-agent systems</article-title>
          .
          <source>In Proceedings of AAAI-2011</source>
          , pages
          <fpage>701</fpage>
          -
          <lpage>706</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <source>[Miller</source>
          ,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Tim</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <article-title>Explanation in artificial intelligence: Insights from the social sciences</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>267</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <source>[Office for Artificial Intelligence - GOV.UK</source>
          ,
          <year>2020</year>
          ]
          <article-title>Office for Artificial Intelligence - GOV.UK. A guide to using artificial intelligence in the public sector</article-title>
          . https://www.gov.uk/government/publications/a-guide-to-using-artificial-intelligence-in-the-public-sector,
          <year>2020</year>
          . Accessed: 2021-05-01.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [Ohta et al.,
          <year>2009</year>
          ]
          <string-name>
            <given-names>Naoki</given-names>
            <surname>Ohta</surname>
          </string-name>
          , Vincent Conitzer, Ryo Ichimura, Yuko Sakurai, Atsushi Iwasaki, and
          <string-name>
            <given-names>Makoto</given-names>
            <surname>Yokoo</surname>
          </string-name>
          .
          <article-title>Coalition structure generation utilizing compact characteristic function representations</article-title>
          .
          <source>In International Conference on Principles and Practice of Constraint Programming</source>
          , pages
          <fpage>623</fpage>
          -
          <lpage>638</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <source>[Russell</source>
          , 2019]
          <string-name>
            <given-names>Stuart</given-names>
            <surname>Russell</surname>
          </string-name>
          .
          <article-title>Human compatible: Artificial intelligence and the problem of control</article-title>
          .
          <source>Penguin</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <source>[Shapley</source>
          , 1953]
          <article-title>Lloyd S Shapley. A value for n-person games</article-title>
          .
          <source>Contributions to the Theory of Games</source>
          ,
          <volume>2</volume>
          (
          <issue>28</issue>
          ):
          <fpage>307</fpage>
          -
          <lpage>317</lpage>
          ,
          <year>1953</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [Tsakalakis et al.,
          <year>2020</year>
          ]
          <string-name>
            <given-names>Niko</given-names>
            <surname>Tsakalakis</surname>
          </string-name>
          , Laura Carmichael, Sophie Stalla-Bourdillon, Luc Moreau, Dong Huynh, and
          <string-name>
            <given-names>Ayah</given-names>
            <surname>Helal</surname>
          </string-name>
          .
          <article-title>Explanations for AI: Computable or not?</article-title>
          <source>In Web Science Companion</source>
          , pages
          <fpage>77</fpage>
          -
          <lpage>77</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <surname>[van de Poel</surname>
          </string-name>
          et al.,
          <year>2012</year>
          ] Ibo van de Poel, Jessica Nihlén Fahlquist, Neelke Doorn, Sjoerd Zwart, and
          <string-name>
            <given-names>Lamber</given-names>
            <surname>Royakkers</surname>
          </string-name>
          .
          <article-title>The problem of many hands: Climate change as an example</article-title>
          .
          <source>Science and engineering ethics</source>
          ,
          <volume>18</volume>
          (
          <issue>1</issue>
          ):
          <fpage>49</fpage>
          -
          <lpage>67</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>[van de Poel</surname>
          </string-name>
          ,
          <year>2011</year>
          ] Ibo van de Poel.
          <article-title>The relation between forward-looking and backward-looking responsibility</article-title>
          .
          <source>In Moral responsibility</source>
          , pages
          <fpage>37</fpage>
          -
          <lpage>52</lpage>
          . Springer,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <source>[Winfield and Jirotka</source>
          , 2018]
          <article-title>Alan FT Winfield and Marina Jirotka. Ethical governance is essential to building trust in robotics and artificial intelligence systems</article-title>
          .
          <source>Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [Yazdanpanah et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Vahid</given-names>
            <surname>Yazdanpanah</surname>
          </string-name>
          , Mehdi Dastani, Wojciech Jamroga, Natasha Alechina, and
          <string-name>
            <given-names>Brian</given-names>
            <surname>Logan</surname>
          </string-name>
          .
          <article-title>Strategic responsibility under imperfect information</article-title>
          .
          <source>In Proceedings of AAMAS-2019</source>
          , pages
          <fpage>592</fpage>
          -
          <lpage>600</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [Yazdanpanah et al.,
          <year>2020</year>
          ]
          <string-name>
            <given-names>Vahid</given-names>
            <surname>Yazdanpanah</surname>
          </string-name>
          , Mehdi Dastani, Shaheen Fatima,
          <string-name>
            <surname>Nicholas R. Jennings</surname>
            , Devrim Murat Yazan, and
            <given-names>W. Henk</given-names>
          </string-name>
          <string-name>
            <surname>Zijm</surname>
          </string-name>
          .
          <article-title>Task coordination in multiagent systems</article-title>
          .
          <source>In Proceedings of AAMAS-2020</source>
          , pages
          <fpage>2056</fpage>
          -
          <lpage>2058</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [Yazdanpanah et al., 2021a]
          <string-name>
            <given-names>Vahid</given-names>
            <surname>Yazdanpanah</surname>
          </string-name>
          ,
          <string-name>
            <surname>Enrico H. Gerding</surname>
          </string-name>
          , Sebastian Stein, Corina Cirstea, m.c. schraefel, Timothy J.
          <string-name>
            <surname>Norman</surname>
          </string-name>
          , and
          <string-name>
            <surname>Nicholas</surname>
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Jennings</surname>
          </string-name>
          .
          <article-title>Collective responsibility in multiagent settings</article-title>
          .
          <source>In ACM Collective Intelligence Conference</source>
          <year>2021</year>
          (CI-
          <year>2021</year>
          ),
          <year>April 2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [Yazdanpanah et al., 2021b]
          <string-name>
            <given-names>Vahid</given-names>
            <surname>Yazdanpanah</surname>
          </string-name>
          ,
          <string-name>
            <surname>Enrico H. Gerding</surname>
          </string-name>
          , Sebastian Stein, Mehdi Dastani,
          <string-name>
            <surname>Catholijn M. Jonker</surname>
            , and
            <given-names>Timothy J.</given-names>
          </string-name>
          <string-name>
            <surname>Norman</surname>
          </string-name>
          .
          <article-title>Responsibility research for trustworthy autonomous systems</article-title>
          .
          <source>In Proceedings of AAMAS-2021, page 57-62</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>