<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>From Trust Among Agents to Reputation of Abstract Arguments by Using Subjective Logic</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics and Computer Science, University of Perugia</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <fpage>65</fpage>
      <lpage>79</lpage>
      <abstract>
<p>Subjective Logic provides a standard set of logical operators intended for use in domains containing uncertainty. At the same time, the motivations behind the adoption of Argumentation in AI are rooted in reasoning and explanation in the presence of incomplete and uncertain information. This work uses Subjective Logic as a means to represent the beliefs of different agents towards arguments and attacks, and to aggregate them in order to obtain an overall reputation from all the agents in the considered community. Agents are also allowed to form their opinion from others' opinions by exploiting trust paths. Finally, the obtained beliefs can be used to compute the community-biased expectation that a set of (abstract) arguments satisfies a given semantics.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        An Abstract Argumentation Framework (AAF) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] is an abstract structure consisting of a set of arguments, whose origin, nature and possible internal organisation are not specified, and of a binary relation of attack on the set of arguments, whose meaning is not specified either: that is, an AAF can be represented as a pair ⟨A, R⟩, which in turn can be represented as a directed graph where nodes are arguments and a → b iff (a, b) ∈ R. As a classical example, argument a may stand for “Tomorrow it will rain because the national weather forecast says so”, while argument b for “Tomorrow it will not rain because the regional weather forecast says so”; the corresponding framework is ⟨A = {a, b}, R = {(a, b), (b, a)}⟩.
      </p>
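The pair ⟨A, R⟩ lends itself to a direct encoding as two sets; a minimal Python sketch of the running example (variable and function names are ours, not from the paper):

```python
# The running example: two mutually attacking arguments.
A = {"a", "b"}                  # a: national forecast; b: regional forecast
R = {("a", "b"), ("b", "a")}    # each argument attacks the other

def attacks(x, y):
    """Directed-graph view: x -> y iff (x, y) is in the attack relation R."""
    return (x, y) in R

print(attacks("a", "b"), attacks("b", "b"))  # True False
```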
      <p>
        Given a framework, it is possible to examine the question of which set(s) of arguments can be accepted, hence collectively surviving the conflict defined by R.
Answering this question corresponds to defining an argumentation semantics [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Considering
the previous example, either {a} or {b} alone can be accepted, while {a, b} cannot be
accepted because of the internal conflict.
      </p>
      <p>
        Subjective Logic (SL) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] is a calculus for subjective opinions which in turn
represent probabilities affected by degrees of uncertainty. In general, SL is suitable for
modelling and analysing situations involving uncertainty and relatively unreliable sources.
A subjective opinion can express trust in a source or it can express belief about events
and propositions. A binomial opinion applies to a binary state variable, and can be
represented as a Beta PDF (Probability Density Function) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. A multinomial opinion
applies to a state variable of multiple possible values, and can be represented as a Dirichlet
PDF [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. SL has already been used for modelling subjective trust networks [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and
structured Argumentation [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        Since arguments are often uncertain, it can be useful to quantify the uncertainty
associated with each argument, as previously explored in other works in the literature [
        <xref ref-type="bibr" rid="ref13 ref14 ref19 ref26">14,
13, 19, 26</xref>
        ]. Do we believe more in the national or in the regional weather forecast? How certain are we about our belief? For this reason, we define Subjective Logic-based AAFs (slAAFs), where both arguments and attacks are associated with a binomial opinion defined in SL, i.e., described in terms of belief, disbelief, and uncertainty values, i.e., ⟨b, d, u⟩. As shown in [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], slAAFs can be straightforwardly reconnected to the
constellations approach proposed in [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], but information is more granular due to the fact that a probability value can be derived from a triple ⟨b, d, u⟩. A dogmatic opinion, that is, one with u = 0, is equivalent to a probability. An absolute opinion, that is, one with b = 1, is equivalent to true. A vacuous opinion, that is, one with u = 1, is equivalent to undefined.
      </p>
      <p>
        Afterwards, with the purpose to find a framework to assign opinions to arguments
and attacks, we introduce agents on top of slAAFs. In this scenario, new with respect to
[
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], different opinions related to arguments and attacks between arguments come from different agents. Consequently, SL operators can be used to aggregate these subjective opinions together into a resulting opinion, which describes the belief/disbelief/uncertainty of the whole group of agents. This represents the reputation of an argument (or attack) in the considered community, which consists of individuals bound together by social relationships. This reputation comes from all the direct subjective opinions of the agents, but also from the (indirect) opinions of other agents in the same community, by considering transitive trust relationships: if A trusts B, who strongly believes in argument a (i.e., a high belief and low uncertainty rating), then the direct opinion ω^A_a can be aggregated with ω^B_a through the opinion of A towards B: ω^A_B. If A has no opinion about a, then she can form one as just explained. By aggregating the beliefs of all the agents w.r.t. the same argument/attack, e.g., ω^A_a ◇ ω^B_a ◇ ω^C_a, we compute the reputation of a. The same example can be rephrased by computing the reputation of an attack from a to b, i.e., (a, b): ω^A_(a,b) ◇ ω^B_(a,b) ◇ ω^C_(a,b). Finally, these opinions can be used in the same way as in the constellations approach (Section 5), for instance to find the community-biased expectation that a set of arguments satisfies a given semantics.
      </p>
      <p>
        This work extends the results in [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] by introducing Trust Network-based slAAF
as a way to connect trust in agents with trust in arguments and attacks. Concisely, the
paper links a trust network among agents with the belief the same agents have in the
components of the considered AAF.
      </p>
      <p>The paper is organised as follows: in Section 2 we summarise the background notions behind SL. Section 3 proposes Subjective Logic-based AAFs and shows how to work with them by computing the expectations of semantics and argument acceptance by using opinions instead of probability values. Then, in Section 4 we embed a trust model on top of slAAFs: we describe how trust paths among agents can be used to compute an indirect opinion on arguments and attacks. Section 5 and Section 6 end the paper with related work and conclusions, respectively.</p>
    </sec>
    <sec id="sec-2">
      <title>Subjective Logic</title>
      <p>
        A subjective opinion expresses belief about states of a state space called a “frame of discernment”, or “frame” for short. In practice, a state in a frame can be regarded as a statement or proposition, so that a frame contains a set of statements. Let X = {x1, x2, . . . , xk} be a frame of cardinality k, where xi (1 ≤ i ≤ k) represents a specific state. An opinion distributes belief mass over the reduced powerset of the frame, denoted R(X) and defined as:
R(X) = P(X) \ {X, ∅},   (1)
where P(X) denotes the powerset of X and |P(X)| = 2^k. All proper subsets of X are states of R(X), but the frame X and the empty set ∅ are not states of R(X), in line with the hyper-Dirichlet model [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. R(X) has cardinality κ = 2^k − 2.
      </p>
      <p>
        An opinion is a composite function consisting of belief masses, an uncertainty mass and base rates. It applies to a frame, also called a state space, and can have an attribute that identifies the belief owner. More precisely, an opinion consists of a belief vector b,¹ an uncertainty parameter u, and a base rate vector a,² which take values in the interval [0, 1]. An opinion satisfies the following additivity requirements:
Belief additivity: u_X + Σ_{xi ∈ R(X)} b_X(xi) = 1.   (2)
Base rate additivity: Σ_{i=1}^{k} a_X(xi) = 1, where xi ∈ X.   (3)
      </p>
      <p>
        A subjective (hyper) opinion of user A over the frame X is denoted as ω^A_X = (b_X, u_X, a_X), where b_X is a belief vector over the states of R(X), u_X is the complementary uncertainty mass, and a_X is a base rate vector over X, all seen from the viewpoint of belief owner A. The belief vector b_X has (2^k − 2) parameters, whereas the base rate vector a_X only has k parameters. The uncertainty parameter u_X is a simple scalar. Thus, a general opinion contains (2^k + k − 1) parameters and hence it is a hyper opinion. However, given that Eq. (2) and Eq. (3) remove one degree of freedom each, opinions over a frame of cardinality k only have (2^k + k − 3) degrees of freedom. The probability projection of hyper opinions is the vector E_X defined in Eq. (4):
E_X(xi) = Σ_{xj ∈ R(X)} a_X(xi/xj) · b_X(xj) + a_X(xi) · u_X, ∀ xi ∈ R(X),   (4)
where a_X(xi/xj) denotes the relative base rate, i.e. the base rate of subset xi relative to the base rate of the (partially) overlapping subset xj, defined as follows:
a_X(xi/xj) = a_X(xi ∩ xj) / a_X(xj), ∀ xi, xj ∈ R(X).   (5)
      </p>
      <p>
        Equivalent probabilistic representations of opinions, e.g. as a Beta pdf (probability density function) or a Dirichlet pdf, offer an alternative interpretation of subjective opinions in terms of traditional statistics [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. There is no simple visualisation of hyper opinions, but simple visualisations can be used for binomial and multinomial opinions as explained below.
¹ A belief vector b specifies the distribution of belief masses over the elements of R(X).
² Base rate generally refers to the (base) class probabilities unconditioned on featural evidence, frequently also known as prior probabilities. The concept of base rates is central in the theory of probability. Base rates are for example useful for default and for conditional reasoning.
[Figure 1: a binomial opinion ω_x as a point in the opinion triangle, with axes b_x, d_x, u_x, the base rate a_x on the base line, and the projected expectation value E_x.]
      </p>
      <p>Binomial opinions, which will be extensively used in the remainder of the paper, apply to binary frames and have a special notation as described below. Let X = {x, x̄} be a binary frame; then a binomial opinion about the truth of state x is the ordered quadruple ω_x = ⟨b, d, u, a⟩ where:
b, belief: belief mass in support of x being true;
d, disbelief: belief mass in support of x̄ (NOT x);
u, uncertainty: uncertainty about the probability of x;
a, base rate: non-informative prior probability of x.</p>
      <p>The special case of Eq. (2) for binomial opinions is expressed by Eq. (6):
b + d + u = 1.   (6)
Similarly, the special case of the probability expectation value of Eq. (4) for binomial opinions is expressed by Eq. (7):
E_x = b + a·u.   (7)</p>
      <p>A binomial opinion can be visualised as a point inside an equilateral triangle, as shown in Figure 1, where the belief, disbelief and uncertainty axes go perpendicularly from each edge to the opposite vertex, indicated by b_x, d_x and u_x. The base rate a_x is shown on the base line, and the probability expectation value E_x is determined by projecting the opinion point onto the base line in parallel with the base rate director.</p>
      <p>When the opinion point is located at the left or right corner of the triangle, i.e. with d = 1 or b = 1 and u = 0, the opinion is equivalent to boolean FALSE or TRUE, and SL becomes equivalent to binary logic. Moreover, when b + d = 1 a binomial opinion is equivalent to a traditional probability, when b + d &lt; 1 it expresses degrees of uncertainty, and when b + d = 0 it expresses total uncertainty.</p>
      <p>Table 1 lists the Subjective Logic operators together with their notation:
– Addition: ω^A_(x∪y) = ω^A_x + ω^A_y
– Subtraction: ω^A_(x\y) = ω^A_x − ω^A_y
– Multiplication: ω^A_(x∧y) = ω^A_x · ω^A_y
– Division: ω^A_(x∧̄y) = ω^A_x / ω^A_y
– Comultiplication: ω^A_(x∨y) = ω^A_x ⊔ ω^A_y
– Codivision: ω^A_(x∨̄y) = ω^A_x ⊔̄ ω^A_y
– Complement: ω^A_x̄ = ¬ω^A_x
– Deduction: ω^A_(y‖x) = ω^A_x ⊚ (ω^A_(y|x), ω^A_(y|x̄))
– Abduction: ω^A_(y‖̄x) = ω^A_x ⊚̄ (ω^A_(x|y), ω^A_(x|ȳ), a_y)
– Transitivity / discounting: ω^(A:B)_x = ω^A_B ⊗ ω^B_x
– Cumulative fusion / consensus: ω^(A◇B)_x = ω^A_x ⊕ ω^B_x
– Averaging fusion: ω^(A◇̲B)_x = ω^A_x ⊕̄ ω^B_x
– Constraint fusion: ω^(A&amp;B)_x = ω^A_x ⊙ ω^B_x</p>
      <p>
        Most operators in Table 1 are generalisations of binary logic and probability operators. For example, addition is simply a generalisation of the addition/union of probabilities, while multiplication is conjunction/AND. Other operators, e.g. deduction, abduction and discounting, are not related to binary logic. For the mathematical details of the operators in Table 1, refer to [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Some of the operators are only meaningful for combining binomial opinions, but some also apply to multinomial opinions. Most of the operators in Table 1 are binary, but complement is unary, deduction is ternary and abduction is quaternary.
      </p>
    </sec>
    <sec id="sec-3">
      <title>SL-based Abstract Argumentation Frameworks</title>
      <p>
        In this section we redefine the constellations approach in probabilistic argumentation
(see Section 5) by using SL instead of plain probability values on arguments and attacks
(as accomplished in the standard definition of the constellations approach). All
the results in this section are background information taken from [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] (concerning AAFs) and [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] (concerning slAAFs). We start by recalling the classical
definitions behind AAFs:
      </p>
      <sec id="sec-3-1">
        <title>Definition 1 (Abstract Argumentation Frameworks [10]).</title>
        <p>An Abstract Argumentation Framework (AAF) is a pair ⟨A, R⟩ of a set A of arguments and a binary relation R on A, called the attack relation. ∀ ai, aj ∈ A, R(ai, aj) means that ai attacks aj (R is asymmetric).</p>
        <p>A semantics specifies how to derive a set of extensions from an AAF, where an extension B ⊆ A is a subset of “collectively” acceptable arguments.</p>
        <p>
          Definition 2 (Semantics [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]). Let F = ⟨A, R⟩ be an AAF. A set B ⊆ A is conflict-free, denoted B ∈ cf(F), iff there are no a, b ∈ B such that R(a, b). An argument a ∈ A is defended by a set B ⊆ A if for each b ∈ A such that R(b, a) there is c ∈ B s.t. R(c, b). A conflict-free set B is also admissible, that is B ∈ adm(F), if each a ∈ B is defended by B. Given a conflict-free B, the semantics originally defined in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] are:
complete: B ∈ com(F), if B ∈ adm(F) and for each a ∈ A defended by B, a ∈ B holds;
preferred: B ∈ prf(F), if B ∈ adm(F) and there is no C ∈ adm(F) with B ⊂ C;
stable: B ∈ stb(F), if for each a ∈ A\B, ∃ b ∈ B s.t. R(b, a);
grounded: B = grd(F), if B ∈ com(F) and there is no C ∈ com(F) with C ⊂ B.
        </p>
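On small frameworks, these semantics can be enumerated by brute force; a minimal sketch (helper names are ours), checked on the mutual-attack example from the Introduction:

```python
from itertools import combinations

def subsets(args):
    """All subsets of args, smallest first."""
    for r in range(len(args) + 1):
        for c in combinations(sorted(args), r):
            yield set(c)

def conflict_free(B, R):
    return not any((a, b) in R for a in B for b in B)

def defends(B, a, A, R):
    # every attacker of a is counter-attacked by some c in B
    return all(any((c, b) in R for c in B) for b in A if (b, a) in R)

def admissible(B, A, R):
    return conflict_free(B, R) and all(defends(B, a, A, R) for a in B)

def complete(B, A, R):
    return admissible(B, A, R) and all(a in B for a in A if defends(B, a, A, R))

def stable(B, A, R):
    return conflict_free(B, R) and all(any((b, a) in R for b in B) for a in A - B)

def grounded(A, R):
    # the unique subset-minimal complete extension (minimal size suffices,
    # since the grounded extension is contained in every complete one)
    return min((B for B in subsets(A) if complete(B, A, R)), key=len)

A = {"a", "b"}
R = {("a", "b"), ("b", "a")}
print([sorted(B) for B in subsets(A) if conflict_free(B, R)])  # [[], ['a'], ['b']]
print([sorted(B) for B in subsets(A) if stable(B, A, R)])      # [['a'], ['b']]
print(sorted(grounded(A, R)))                                  # []
```

As in the Introduction, either {a} or {b} alone survives the conflict, while the grounded extension is empty because neither argument is defended unconditionally.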
        <p>The acceptance state of a single argument can be conceived in terms of its extension
membership.</p>
        <p>
          Definition 3 (Argument acceptance [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]). Given one of the semantics σ ∈ {com, stb, prf} and a framework F, an argument a is i) justified iff ∀ B ∈ σ(F), a ∈ B; ii) defensible iff ∃ B ∈ σ(F), a ∈ B and a is not justified; iii) overruled iff ∄ B ∈ σ(F), a ∈ B.
        </p>
        <p>A SL-based AAF extends Dung’s argument framework by associating an opinion with each argument and attack in the original AAF.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Definition 4 (SL-based Argumentation Frameworks).</title>
        <p>A SL-based Abstract Argumentation framework (slAAF) is a tuple ⟨𝒜, ℛ, O𝒜, Oℛ⟩ where ⟨𝒜, ℛ⟩ is a Dung AAF (Definition 1), O𝒜 : 𝒜 → W𝒜 and Oℛ : ℛ → Wℛ, where W𝒜 and Wℛ respectively are the sets of binomial opinions on each argument and each attack.</p>
        <p>Hence, given 𝒜 = {a1, a2, . . . , an}, for each ai ∈ 𝒜 we have that X_ai = {ai, āi} represents a binary frame where, with an abuse of notation, ai indicates that “argument ai is trustworthy” and āi states that “argument ai is not trustworthy”. The same considerations hold for ℛ = {(ai, aj), . . . , (al, ak)}: X_(ai,aj) = {(ai, aj), ¬(ai, aj)} represents a binary frame where (ai, aj) indicates “attack (ai, aj) is trustworthy”, and ¬(ai, aj) states “attack (ai, aj) is not trustworthy”.³ Therefore, our framework collects a binomial opinion ω_ai = ⟨b, d, u, a⟩ for each ai ∈ 𝒜, and ω_(ai,aj) = ⟨b, d, u, a⟩ for each (ai, aj) ∈ ℛ.
Remark 1. In this paper, we suppose agents trust an argument if they generically believe in that argument: for instance, if they believe its premises are true, and if they believe the consequence of the claim is logically sound. Therefore, an agent trusts an argument if it believes it is both valid and sound. Such an evaluation is indeed subjective: some agents might not notice that a statement is a fallacy,⁴ or that some of the premises are false instead of true. Different agents possess different knowledge about the same facts. Similarly, agents can judge differently whether two arguments are in conflict or not, if such arguments do not exactly negate each other: for example, “doing a” and “doing b” in the same time interval are not in conflict if there is time to do both of them in sequence.
³ Note that i can be equal to j in case we have a self-attack R(ai, ai).
⁴ A fallacy is the use of invalid or otherwise faulty reasoning in the construction of an argument.</p>
        <p>For instance, hasty generalisation is making assumptions about a whole group or range of cases based on a limited sample, e.g., “graduate students are nerds”. [Figure 2: an example slAAF with arguments a, b, c, d; each argument and attack is labelled with its opinion ω.]</p>
        <p>
          In Figure 2 we show an example of slAAF; we use the same example used in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], in
order to better show the differences between the original constellations approach in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ],
and by using SL instead. In Table 2 we provide the values for the tuples ω_ai = ⟨b, d, u, a⟩ and ω_(ai,aj) = ⟨b, d, u, a⟩ with respect to the AAF in Figure 2.
        </p>
        <p>As a reminder from Section 2, the base rate a is the prior probability of the proposition in the absence of specific belief or disbelief. The default value is the relative atomicity, i.e., 0.5 for a binary state space containing the proposition and its negation. For this reason, a is not reported in Table 2, and quadruples are simplified in the following as triples ⟨b, d, u⟩. The opinion related to the complement, e.g., ω_b̄, is not reported because it can be obtained easily from ω_b = ⟨0.6, 0.2, 0.2⟩ as ⟨0.2, 0.6, 0.2⟩, by exchanging belief with disbelief (see the complement operator in Table 1).</p>
        <p>
          A slAAF represents the set of all Dung’s classical frameworks that can potentially
be created from it. Similarly to [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], we call this creation process the inducement of an
AAF from a slAAF. All arguments and attacks with a probability expectation of 1 will
be found in the induced AAF, which can also contain additional arguments and attacks,
as specified in Definition 5.
        </p>
        <p>Definition 5 (Inducing an AAF from a slAAF). A Dung framework AAF = ⟨A, R⟩ is said to be induced from a slAAF = ⟨𝒜, ℛ, O𝒜, Oℛ⟩ iff the following holds:
– A ⊆ 𝒜,
– R ⊆ (ℛ ∩ (A × A)),
– ∀ a ∈ 𝒜 such that ω_a = ⟨1, 0, 0⟩, then a ∈ A,
– ∀ (ai, aj) ∈ ℛ such that ω_(ai,aj) = ⟨1, 0, 0⟩ and ω_ai = ω_aj = ⟨1, 0, 0⟩, then (ai, aj) ∈ R.</p>
        <p>Moreover, we write I(slAAF) to represent the set of all AAFs that can be induced from a slAAF.</p>
        <p>Given Definition 5, an AAF induced from a slAAF contains a subset of the
arguments found in the source slAAF, together with a subset of attacks in the slAAF, subject
to these defeats containing only arguments found within the induced AAF.</p>
        <p>In practice, the process described in Definition 5 splits the uncertainty expressed in a slAAF into constellations (see Section 5) of different possible worlds, each with a different probability. For instance, given the slAAF in Figure 2 and Table 2, then I(slAAF) is equivalent to the following set of four induced frameworks:
F1 = ⟨{a, c}, ∅⟩
F2 = ⟨{a, b, c}, {(b, c)}⟩
F3 = ⟨{a, c, d}, {(c, d)}⟩
F4 = ⟨{a, b, c, d}, {(b, c), (c, d)}⟩</p>
        <p>
          This allows us to compute the expectation of some AAF being induced from a
slAAF. Informally, such expectation value can be computed via the joint expectations
of the arguments and attack relations appearing in the considered slAAF. In order to
formalise such a concept compactly, we first need to identify the set of attacks that may
appear in an induced AAF, as accomplished in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. We call this set R_A:
        </p>
        <p>R_A = {(ai, aj) | ai, aj ∈ A and (ai, aj) ∈ ℛ}</p>
        <p>Hence, it is possible to compute the expectation of some AAF being induced from a slAAF, as defined in Definition 6. The expectations E_ai and E_(ai,aj) are computed from the opinions returned by O𝒜(ai) and Oℛ(ai, aj) respectively, for each ai ∈ 𝒜 and (ai, aj) ∈ ℛ. As a reminder from Section 2, the expectation of a binomial opinion ⟨b, d, u⟩ is given by b + a·u, with a = 0.5 (see Eq. 7 in Section 2).</p>
        <p>Definition 6 (Expectation of an induced AAF). Given slAAF = ⟨𝒜, ℛ, O𝒜, Oℛ⟩, the expectation of F = ⟨A, R⟩ ∈ I(slAAF) can be computed as in Eq. 8:
E^I_F = ∏_{ai ∈ A} E_ai · ∏_{ai ∈ (𝒜\A)} (1 − E_ai) · ∏_{(ai,aj) ∈ R} E_(ai,aj) · ∏_{(ai,aj) ∈ (R_A\R)} (1 − E_(ai,aj))   (8)</p>
        <p>We can then list the expectation values for all the four induced AAFs: E^I_F1 = 0.105, E^I_F2 = 0.245, E^I_F3 = 0.195, E^I_F4 = 0.455. For example, E^I_F1 = (1 × 1) × ((1 − 0.7) × (1 − 0.65)) = 0.105; no attack is considered in the computation because F1 = ⟨{a, c}, ∅⟩.</p>
        <p>Hence, the semantics also change across these AAFs: in F1 the set {a, c} satisfies the grounded and stable semantics (no attack is present), while F2 returns different extensions: stb(F2) = {{a, b}} and grd(F2) = {a, b}.</p>
        <p>
          Similarly to [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], we can derive the following property:
Proposition 1. The sum of the expectation values of all the AAFs that can be induced from a slAAF is 1: Σ_{F ∈ I(slAAF)} E^I_F = 1.
        </p>
        <p>The proof simply derives from exhaustively considering all the possible worlds; in
our running example, 0.105 + 0.245 + 0.195 + 0.455 = 1.</p>
        <p>We can now define the expectation that some set of arguments satisfies one of the semantics σ in the literature, for example those introduced in Definition 2, i.e., σ ∈ {com, prf, stb, grd} (notice that other semantics have been successively introduced in the literature [2, Ch. 2]). For this reason, we define a function v : (σ, B, F) → {false, true} that returns true if and only if the set of arguments B represents one of the extensions satisfying σ given a framework F: that is, v(σ, B, F) is true if and only if B ∈ σ(F), and false otherwise.</p>
        <p>Definition 7 (Semantics expectation). Given a slAAF = ⟨𝒜, ℛ, O𝒜, Oℛ⟩, the expectation that a given set of arguments B ⊆ 𝒜 satisfies a semantics σ is:
E^I_σ(B, slAAF) = Σ_{F ∈ I(slAAF) : v(σ, B, F)} E^I_F.</p>
        <p>For instance, the expectation E^I_grd({a, c}, slAAF) = 0.105 + 0.195 = 0.3: the set {a, c} represents a grounded extension in F1 and F3, whose expectations are respectively 0.105 and 0.195. E^I_stb({a, b, d}, slAAF) = 0.455, since the set {a, b, d} is a stable extension only in F4, whose expectation is 0.455.</p>
        <p>In the same way, we can compute the expectation of acceptance of an argument w.r.t. I(slAAF) and σ: the same argument can be justified/defensible/overruled (i.e., j/d/o, see Definition 3) in multiple generated worlds. We take advantage of a function z : (σ, acpt, ai, F) → {false, true}, which returns true if argument ai is accepted as requested (acpt ∈ {j, d, o}) in F, given a semantics σ.</p>
        <p>Definition 8 (Acceptance expectation). Given a slAAF = ⟨𝒜, ℛ, O𝒜, Oℛ⟩, the expectation that an argument a ∈ 𝒜 is justified/defensible/overruled (acpt ∈ {j, d, o}) w.r.t. a semantics σ is:
E^I_σ,acpt(a, slAAF) = Σ_{F ∈ I(slAAF) : z(σ, acpt, a, F)} E^I_F.</p>
        <p>For instance, E^I_stb,j(a, slAAF) = 1, since argument a is justified in all the induced frameworks (F1, F2, F3, and F4), while E^I_adm,d(c, slAAF) = 0.105 + 0.195 = 0.3: argument c is defensible in F1 (expectation 0.105) and F3 (expectation 0.195).</p>
        <p>
          Note that generating all the possible worlds in the constellations and then enumerating all the extensions for each of them can lead to computational issues: the number of worlds grows exponentially with the size of the considered slAAF. Even if the state of the
art of argumentation solvers is quite advanced [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], the exact expectation value needs to be approximated [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>From Trust Between Agents to Belief in Arguments</title>
      <p>
        The work in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] describes a method for trust network analysis using subjective logic
(TNA-SL). It provides a simple notation for expressing transitive trust relationships, and
defines a method for simplifying complex trust networks so that they can be expressed
in a concise form and be computationally analysed. Trust measures are expressed as
beliefs, and Subjective Logic operators are used to compute trust between arbitrary
parties in the network.
      </p>
      <p>
        In this section we outline a computational framework where we use TNA-SL to compute the reputation of the attacks and arguments uttered in a public (for a community) debate, which we suppose not all of the agents in that community have had the opportunity to attend. Alternatively, some agents could have attended but have not been able to form an opinion because of impediments such as, for instance, cultural differences in the audience, the education level, or cognitive limitations in general [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Hence, some of the agents form their derived opinion from friends and
acquaintances by using trust relationships (derived opinions represent recommendations).
At the same time, also agents who have a direct opinion are influenced by other parties
they know [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The proposed approach follows these steps:
1. direct and indirect opinions of the same agent are aggregated in order to produce a single belief for the same argument or attack;
2. the aggregated opinions of the single agents are further aggregated to produce a reputation for an argument or attack, which reflects the belief of the whole considered community;
3. finally, the obtained slAAF, where each argument and attack is weighed with an opinion as derived in steps 1 and 2, can be studied by using the constellations approach proposed in Section 3.
      </p>
      <p>We first define Trust Network-based slAAFs.</p>
      <sec id="sec-4-1">
        <title>Definition 9 (Trust Network-based slAAF).</title>
        <p>A Trust Network-based slAAF (abbreviated to TN-slAAF) is formed by a slAAF ⟨𝒜, ℛ, O𝒜, Oℛ⟩ (see Definition 4), and a Trust Network represented as ⟨P, T⟩, where P is the set of agents (we require 𝒜 ∩ P = ∅) and T is a binary trust relation on P: ∀ pi, pj ∈ P, T(pi, pj) means that pi trusts pj (T is asymmetric). Moreover, there is a further binary relation N of direct and derived binomial opinions (see Sec. 2), where each element (p, x) ∈ N relates an agent p ∈ P with x ∈ 𝒜 or x ∈ ℛ.</p>
        <p>The next definition is used to describe trust paths in a TN-slAAF.</p>
        <p>Definition 10 (Trust path in TN-slAAF). Given a TN-slAAF, a trust path is always rooted in some p ∈ P and ends either in an argument a ∈ 𝒜 or in an attack (a, b) ∈ ℛ.</p>
        <p>
          In the remainder we will use capital letters A, B, . . . for agent names in P, and
lowercase letters for arguments (i.e., a, b, . . . ). Moreover, when describing trust paths, the
symbol “:” will be used to denote the transitive connection of two consecutive trust arcs
to form a transitive trust path. The “◇” symbol visually resembles a simple graph of two parallel paths between a pair of agents [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. With no restrictions on the possible trust
arcs, trust paths from a given source X to a given target y can contain cycles, which
could result in inconsistent calculative results. Cycles in the trust graph must therefore
be controlled when applying calculative methods to derive measures of trust between
w DC
c
w (c,d)
d
w d
        </p>
        <p>w ED
D</p>
        <p>
          E
w (d,e)
e
w e
slAAF
two parties. Normalisation and simplification are two different control approaches, and
the trust model presented in this paper can take advantage from these techniques as
introduced in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. For the sake of brevity, we point the reader to [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] for a more
exhaustive explanation.
        </p>
        <p>An example of TN-slAAF is reported in Figure 3: the upper part of the figure
represents all the agents in a community, while the lower part shows the considered debate
in the form of a slAAF as described in Section 3. A Trust Network is thus tied to an
AAF, and the opinions related to arguments and attacks are represented and aggregated
in SL. Arguments are detailed in Example 1.</p>
        <p>Example 1. We detail the slAAF arguments in Figure 3, taking into consideration a
discussion in favour/against the legalisation of Marijuana. A, B, C, D, E in Figure 3 are
the audience of a debate concerning this topic.</p>
        <p>– a: Official reports from rating agencies say the financial crisis dramatically impacted the overall financial budget.
– b: The budget allocated to healthcare for light drugs needs to be increased, because
statistics say the number of light drugs users suffering from effects is increasing
and treatments are expensive.
– c: Marijuana should not be legalised because it would rise healthcare expenses.
– d: Marijuana should be legalised because prisons are overcrowded and a large part
of prisoners is in custody because they are marijuana users.</p>
        <p>In this example, we focus on agent A in Figure 3 (the complete network can be more
complex than Figure 3), and we derive an indirect opinion towards argument a along
the trust path [A, C] : [C, a], and an indirect opinion towards attack (c, b) along the path
(([A, B] : [B, D]) ◇ ([A, C] : [C, D])) : [D, E] : [E, (c, b)].</p>
        <p>
          The discounting [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] operator ω_x^{A:B} = ω_B^A ⊗ ω_x^B in Table 1 can be used to compute
transitive trust along a trust path, while the consensus [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] operator ω_x^{A◇B} = ω_x^A ⊕ ω_x^B in
Table 1 can be used to fuse two beliefs into one, thus composing parallel paths together.
Formally, ω_x^{A:B} = ⟨b_B^A b_x^B, b_B^A d_x^B, d_B^A + u_B^A + b_B^A u_x^B⟩, and
ω_x^{A◇B} = ⟨(b_x^A u_x^B + b_x^B u_x^A)/κ, (d_x^A u_x^B + d_x^B u_x^A)/κ, (u_x^A u_x^B)/κ⟩,
where κ = u_x^A + u_x^B − u_x^A u_x^B. With the cumulative fusion operator, i.e. ⊕, the
observations are supposed to be independent; the cumulative rule is equivalent to a posteriori
updating of Dirichlet distributions (more details on this operator, and how to compute it,
can be found in [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]).
        </p>
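        <p>As an illustrative sketch (not part of the original formulation), the two operators can be implemented on plain (b, d, u) triples. This assumes binomial opinions with the base rate of full SL opinions omitted for brevity, and the sample values below are invented for the demonstration.</p>

```python
from collections import namedtuple

# Binomial Subjective Logic opinion; the base rate is omitted for brevity.
Opinion = namedtuple("Opinion", ["b", "d", "u"])

def discount(w_trust, w_x):
    """Discounting (the ":" / transitivity operator): derive an opinion on x
    through an advisor about whom we hold the opinion w_trust."""
    return Opinion(
        b=w_trust.b * w_x.b,
        d=w_trust.b * w_x.d,
        u=w_trust.d + w_trust.u + w_trust.b * w_x.u,
    )

def fuse(w1, w2):
    """Cumulative fusion (consensus) of two independent opinions on the same x."""
    k = w1.u + w2.u - w1.u * w2.u
    return Opinion(
        b=(w1.b * w2.u + w2.b * w1.u) / k,
        d=(w1.d * w2.u + w2.d * w1.u) / k,
        u=(w1.u * w2.u) / k,
    )

# Illustrative values: discounting raises uncertainty, fusion lowers it.
w_CA = Opinion(0.9, 0.0, 0.1)   # A's trust in C (hypothetical)
w_aC = Opinion(0.6, 0.2, 0.2)   # C's opinion on argument a (hypothetical)
derived = discount(w_CA, w_aC)  # u grows from 0.2 to 0.28
fused = fuse(w_aC, Opinion(0.5, 0.3, 0.2))  # u shrinks below 0.2
```

        <p>Both operators preserve b + d + u = 1, so their results are again well-formed opinions.</p>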
        <p>The effect of discounting in a transitive path is to increase uncertainty, that is, to
reduce the confidence in the expectation value. The effect of the consensus operator is
to reduce uncertainty, that is, to increase the confidence in the expectation value. Then,
we can compute ω_a^A and ω_{(c,b)}^A as</p>
        <p>ω_a^A = ω_C^A ⊗ ω_a^C
ω_{(c,b)}^A = ((ω_B^A ⊗ ω_D^B) ⊕ (ω_C^A ⊗ ω_D^C)) ⊗ ω_E^D ⊗ ω_{(c,b)}^E.</p>
        <p>Given the beliefs in Table 5, we can compute ω_a^A = ⟨0.28, 0.48, 0.24⟩ and ω_{(c,b)}^A =
⟨0.44, 0.09, 0.48⟩. Finally, these beliefs can be assigned to the dashed edges in Figure 3.</p>
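        <p>The derivation of ω_{(c,b)}^A above can be traced step by step in code. Since Table 5 is not reproduced in this excerpt, the trust values below are invented placeholders; the operator definitions follow the formulas given earlier in this section.</p>

```python
from collections import namedtuple

Opinion = namedtuple("Opinion", ["b", "d", "u"])

def discount(w_trust, w_x):  # the ":" (transitivity) operator
    return Opinion(w_trust.b * w_x.b,
                   w_trust.b * w_x.d,
                   w_trust.d + w_trust.u + w_trust.b * w_x.u)

def fuse(w1, w2):            # the consensus operator over parallel paths
    k = w1.u + w2.u - w1.u * w2.u
    return Opinion((w1.b * w2.u + w2.b * w1.u) / k,
                   (w1.d * w2.u + w2.d * w1.u) / k,
                   (w1.u * w2.u) / k)

# Hypothetical trust values (Table 5 is not reproduced in this excerpt):
w_AB, w_BD = Opinion(0.8, 0.1, 0.1), Opinion(0.7, 0.1, 0.2)
w_AC, w_CD = Opinion(0.9, 0.0, 0.1), Opinion(0.6, 0.2, 0.2)
w_DE = Opinion(0.8, 0.1, 0.1)
w_E_cb = Opinion(0.5, 0.3, 0.2)  # E's direct opinion on the attack (c, b)

# ((A:B:D) combined in parallel with (A:C:D)), then through E to attack (c, b)
w_AD = fuse(discount(w_AB, w_BD), discount(w_AC, w_CD))
w_A_cb = discount(discount(w_AD, w_DE), w_E_cb)
```

        <p>Note how the long transitive chain leaves A with a more uncertain opinion on (c, b) than E's direct one, while fusing the two parallel paths to D reduces uncertainty.</p>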
        <p>As anticipated above, we can use SL to aggregate direct and derived beliefs of the
same agent, and to aggregate beliefs of different agents. This is visually described in
two small TN-slAAF examples, respectively in Figure 4 and in Figure 5. In Figure 4,
agent A can aggregate its direct opinion with two derived ones, which are obtained by
two trust paths (not shown in the figure) as previously introduced in this section: hence,
ω_a^A = ω̄_a^A ⊕ ω̃_a^A ⊕ ω̂_a^A. In Figure 5, the reputation of argument a can be computed via
the consensus operator, i.e., ω_a = ω_a^A ⊕ ω_a^B ⊕ ω_a^C; ω_a can then be directly used in the
computational framework presented in Section 3.</p>
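        <p>The reputation computation of Figure 5 amounts to folding the consensus operator over the agents' opinions. A minimal sketch, with three hypothetical opinions on argument a:</p>

```python
from collections import namedtuple
from functools import reduce

Opinion = namedtuple("Opinion", ["b", "d", "u"])

def fuse(w1, w2):  # cumulative fusion (consensus)
    k = w1.u + w2.u - w1.u * w2.u
    return Opinion((w1.b * w2.u + w2.b * w1.u) / k,
                   (w1.d * w2.u + w2.d * w1.u) / k,
                   (w1.u * w2.u) / k)

# Hypothetical opinions of agents A, B and C on argument a (cf. Figure 5):
opinions = [Opinion(0.6, 0.2, 0.2), Opinion(0.5, 0.3, 0.2), Opinion(0.7, 0.1, 0.2)]

# The community-wide reputation of a is the consensus of all the opinions.
reputation_a = reduce(fuse, opinions)
```

        <p>Since cumulative fusion corresponds to summing Dirichlet evidence, it is commutative and associative: the order in which the agents' opinions are fused does not matter.</p>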
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Related Work</title>
      <p>
        We review the literature on probabilistic argumentation (i.e., the constellations and
epistemic approaches), and also from the point of view of trust sources and systems.
      </p>
      <p>Fig. 5. An example of multiple
opinions from different agents to be
aggregated to reach a final reputation for a.</p>
      <p>
        Constellations. In the constellations approach, uncertainty in the topology of the graph
(probabilities on arguments and attacks) is used to make probabilistic assessments on
the acceptance of arguments. The authors of [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] provided the first proposal to extend
abstract argumentation with a probability distribution over sets of arguments which
they use with a version of assumption-based argumentation in which a subset of the
rules are probabilistic rules. In [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] a probability distribution over the sub-graphs of the
argument graph is introduced, and this can then be used to give a probability assignment
for a set of arguments being an admissible set or extension of the argument graph. In [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
the authors characterise the different semantics from the approach of [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] in terms of
probabilistic logic, with the purpose of providing a uniform logical formalisation and
also paving the way for future implementations.
      </p>
      <p>
        Epistemic. In the epistemic approach, instead, the topology of the graph is fixed, but
probabilistic assessments on the acceptance of arguments are evaluated w.r.t. the
relations of the arguments in the graph. For instance, in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] the authors cast epistemic
probabilities in the context of de Finetti's theory of subjective probability, and they analyse
and revise the relevant rationality properties in relation with de Finetti's notion of
coherence. However, most of the work in this direction is authored by M. Thimm [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]
and A. Hunter [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. In the first work, a probabilistic semantics for
Abstract Argumentation is proposed in order to assign probabilities, or degrees of
belief, to individual arguments. The presented semantics generalise the classical notions of
semantics [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In the second work, the author starts by considering logic-based
argumentation with uncertain arguments, but ends by showing how this formalisation relates
to the uncertainty of abstract arguments. The two authors join their efforts in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
Trust and Argumentation. Trust and Argumentation are two closely related concepts,
as the rich literature of recent years proves. In [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] the authors investigate the
combination of trust measures on agents and the use of argumentation for reasoning about
belief, thus combining an existing system for reasoning about trust and an existing
system of argumentation. In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] the authors study how the different arguments interact and
how an agent may decide to trust another source, and thus to accept information coming
from that source. The system also deals with graded trust (e.g., agent i trusts
agent j to some extent). Trust of sources is also studied in [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], by means of a socio-cognitive
model of trust based on argumentation theory. In [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] the authors identify two types of argumentative relevance:
internal relevance, i.e. the extent to which a premise has a bearing on its purported
conclusion (thus considering structured arguments), and external relevance, i.e. a measure
of how much a whole argument is pertinent to the matter under discussion. Two more
works on Trust and Argumentation are [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>
        The aim of this paper is to encompass Trust Network Analysis [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and to propagate the
effect of direct and derived trust among the agents in a network towards a Subjective
Logic-based AAF. Hence, entities can form their opinion by considering their direct
belief, and the beliefs of parties through trust paths linking them together. In addition, all
these subjective opinions can be fused into a reputation score related to each argument
and attack. Such a score represents how much the studied community of agents
evaluates their credibility. Finally, the resulting slAAF can be studied using the constellations
approach as in related works [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
      <p>
        In the future, we would like to extend this study along two different lines. The first
one concerns the argumentation side of our proposal: for instance, we are interested
in dealing with slAAFs from the point of view of the epistemic approach (see Section 5).
The second line concerns the trust analysis of the network among agents. Future goals
are to enrich the framework by taking into consideration ageing factors: agents (and in
particular human agents) may change their behaviour over time, so it is desirable to give
greater weight to more recent ratings using longevity factors. In addition, we would like
to enrich the picture with distrust besides trust, also by exploring other computational
frameworks such as [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Amgoud</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Demolombe</surname>
            ,
            <given-names>R.:</given-names>
          </string-name>
          <article-title>An argumentation-based approach for reasoning about trust in information sources</article-title>
          .
          <source>Argument &amp; Computation</source>
          <volume>5</volume>
          (
          <issue>2-3</issue>
          ),
          <fpage>191</fpage>
          -
          <lpage>215</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Arrow</surname>
            ,
            <given-names>K.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suzumura</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Handbook of Social Choice and Welfare</article-title>
          . North-Holland (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Baroni</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giacomin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vicig</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>On rationality conditions for epistemic probabilities in abstract argumentation</article-title>
          .
          <source>In: Computational Models of Argument - Proceedings of COMMA. FAIA</source>
          , vol.
          <volume>266</volume>
          , pp.
          <fpage>121</fpage>
          -
          <lpage>132</lpage>
          . IOS Press (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bistarelli</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ross</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Santini</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Not only size, but also shape counts: abstract argumentation solvers are benchmark-sensitive</article-title>
          .
          <source>J. Log. Comput</source>
          .
          <volume>28</volume>
          (
          <issue>1</issue>
          ),
          <fpage>85</fpage>
          -
          <lpage>117</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Bistarelli</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Santini</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>On merging two trust-networks in one with bipolar preferences</article-title>
          .
          <source>Mathematical Structures in Computer Science</source>
          <volume>27</volume>
          (
          <issue>2</issue>
          ),
          <fpage>215</fpage>
          -
          <lpage>233</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Campbell-Meiklejohn</surname>
            ,
            <given-names>D.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bach</surname>
            ,
            <given-names>D.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roepstorff</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dolan</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frith</surname>
          </string-name>
          , C.D.:
          <article-title>How the opinion of others affects our valuation of objects</article-title>
          .
          <source>Current Biology</source>
          <volume>20</volume>
          (
          <issue>13</issue>
          ),
          <fpage>1165</fpage>
          -
          <lpage>1170</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Doder</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woltran</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Probabilistic argumentation frameworks - A logical approach</article-title>
          .
          <source>In: Scalable Uncertainty Management - 8th International Conference, SUM. Lecture Notes in Computer Science</source>
          , vol.
          <volume>8720</volume>
          , pp.
          <fpage>134</fpage>
          -
          <lpage>147</lpage>
          . Springer (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Dondio</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Longo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Trust-based techniques for collective intelligence in social search systems</article-title>
          . In:
          <article-title>Next Generation Data Technologies for Collective Computational Intelligence</article-title>
          ,
          <source>Studies in Computational Intelligence</source>
          , vol.
          <volume>352</volume>
          , pp.
          <fpage>113</fpage>
          -
          <lpage>135</lpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Dondio</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Longo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Computing trust as a form of presumptive reasoning</article-title>
          . In: 2014 IEEE/WIC/ACM International Joint Conferences on
          <article-title>Web Intelligence (WI) and Intelligent Agent Technologies (IAT</article-title>
          . pp.
          <fpage>274</fpage>
          -
          <lpage>281</lpage>
          . IEEE Computer Society (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Dung</surname>
            ,
            <given-names>P.M.</given-names>
          </string-name>
          :
          <article-title>On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>77</volume>
          (
          <issue>2</issue>
          ),
          <fpage>321</fpage>
          -
          <lpage>357</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Dung</surname>
            ,
            <given-names>P.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thang</surname>
            ,
            <given-names>P.M.:</given-names>
          </string-name>
          <article-title>Towards (probabilistic) argumentation for jury-based dispute resolution</article-title>
          .
          <source>In: Computational Models of Argument: Proceedings of COMMA. FAIA</source>
          , vol.
          <volume>216</volume>
          , pp.
          <fpage>171</fpage>
          -
          <lpage>182</lpage>
          . IOS Press (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Hankin</surname>
            ,
            <given-names>R.K.</given-names>
          </string-name>
          :
          <article-title>A Generalization of the Dirichlet Distribution</article-title>
          .
          <source>Journal of Statistical Software</source>
          <volume>33</volume>
          (
          <issue>11</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          (
          <year>February 2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thimm</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Probabilistic reasoning with abstract argumentation frameworks</article-title>
          .
          <source>J. Artif. Intell. Res</source>
          .
          <volume>59</volume>
          ,
          <fpage>565</fpage>
          -
          <lpage>611</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A probabilistic approach to modelling uncertain logical arguments</article-title>
          .
          <source>Int. J. Approx. Reasoning</source>
          <volume>54</volume>
          (
          <issue>1</issue>
          ),
          <fpage>47</fpage>
          -
          <lpage>81</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Jøsang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hayward</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pope</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Trust network analysis with subjective logic</article-title>
          .
          <source>In: Twenty-Ninth Australasian Computer Science Conference (ACSC)</source>
          .
          <source>CRPIT</source>
          , vol.
          <volume>48</volume>
          , pp.
          <fpage>85</fpage>
          -
          <lpage>94</lpage>
          . Australian Computer Society (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Jøsang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Subjective Logic - A Formalism for Reasoning Under Uncertainty</article-title>
          .
          <source>Artificial Intelligence: Foundations, Theory, and Algorithms</source>
          , Springer (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Jøsang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhuiyan</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Optimal trust network analysis with subjective logic</article-title>
          .
          <source>In: Proceedings of the Second International Conference on Emerging Security Information, Systems and Technologies</source>
          , SECURWARE. pp.
          <fpage>179</fpage>
          -
          <lpage>184</lpage>
          . IEEE Computer Society (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Jøsang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hankin</surname>
            ,
            <given-names>R.K.</given-names>
          </string-name>
          :
          <article-title>Interpretation and Fusion of Hyper Opinions in Subjective Logic</article-title>
          .
          <source>In: Proceedings of the 15th International Conference on Information Fusion (FUSION)</source>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oren</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norman</surname>
          </string-name>
          , T.J.:
          <article-title>Probabilistic argumentation frameworks</article-title>
          .
          <source>In: Theory and Applications of Formal Argumentation - First International Workshop, TAFA. LNCS</source>
          , vol.
          <volume>7132</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Oren</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norman</surname>
            ,
            <given-names>T.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Preece</surname>
            ,
            <given-names>A.D.</given-names>
          </string-name>
          :
          <article-title>Subjective logic and arguing with evidence</article-title>
          .
          <source>Artif. Intell</source>
          .
          <volume>171</volume>
          (
          <issue>10</issue>
          -
          <fpage>15</fpage>
          ),
          <fpage>838</fpage>
          -
          <lpage>854</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Paglieri</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castelfranchi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Trust, relevance, and arguments</article-title>
          .
          <source>Argument &amp; Computation</source>
          <volume>5</volume>
          (
          <issue>2-3</issue>
          ),
          <fpage>216</fpage>
          -
          <lpage>236</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Parsons</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sklar</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McBurney</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Argumentation-based reasoning in agents with varying degrees of trust</article-title>
          .
          <source>In: 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS)</source>
          . pp.
          <fpage>879</fpage>
          -
          <lpage>886</lpage>
          . IFAAMAS (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Prakken</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vreeswijk</surname>
          </string-name>
          , G.:
          <article-title>Logics for defeasible argumentation</article-title>
          .
          <source>In: Handbook of philosophical logic</source>
          , pp.
          <fpage>219</fpage>
          -
          <lpage>318</lpage>
          . Springer (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Santini</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jøsang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pini</surname>
            ,
            <given-names>M.S.:</given-names>
          </string-name>
          <article-title>Are my arguments trustworthy? abstract argumentation with subjective logic</article-title>
          .
          <source>In: 21st International Conference on Information Fusion, FUSION</source>
          . pp.
          <fpage>1982</fpage>
          -
          <lpage>1989</lpage>
          . IEEE (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Tajfel</surname>
          </string-name>
          , H.:
          <article-title>Social and cultural factors in perception</article-title>
          .
          <source>Handbook of social psychology 3</source>
          ,
          <fpage>315</fpage>
          -
          <lpage>394</lpage>
          (
          <year>1969</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Thimm</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>A probabilistic semantics for abstract argumentation</article-title>
          .
          <source>In: ECAI - 20th European Conference on Artificial Intelligence. FAIA</source>
          , vol.
          <volume>242</volume>
          , pp.
          <fpage>750</fpage>
          -
          <lpage>755</lpage>
          . IOS Press (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Villata</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boella</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gabbay</surname>
          </string-name>
          , D.M.,
          <string-name>
            <surname>van der Torre</surname>
            ,
            <given-names>L.W.N.:</given-names>
          </string-name>
          <article-title>A socio-cognitive model of trust using argumentation theory</article-title>
          .
          <source>Int. J. Approx. Reasoning</source>
          <volume>54</volume>
          (
          <issue>4</issue>
          ),
          <fpage>541</fpage>
          -
          <lpage>559</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>