<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>A Method to Evaluate Uncertain and Conflicting Trust and Authenticity Statements</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
<institution>Institute of Communication Networks and Computer Engineering, Universität Stuttgart</institution>
          ,
          <addr-line>70569 Stuttgart</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>62</fpage>
      <lpage>82</lpage>
      <abstract>
<p>Countless Internet applications and platforms make it easy to communicate, to collaborate and to exchange information and opinions with a huge number of individuals. However, it is getting more and more difficult to distinguish honest and reliable individuals from malicious users distributing false information or malware. Digital signatures, webs of trust and reputation systems can help to securely identify communication partners and to estimate the trustworthiness of others, but there is a lack of trust and authenticity evaluation methods that do not show counterintuitive effects in the case of conflicting opinions. This article proposes a new integrated method to evaluate uncertain and conflicting trust and authenticity statements. It introduces a set of operators and inference rules for combining and reasoning with these statements, it defines an approach to compute the resulting confidence values of derived statements and it compares different computation algorithms. The computation is based on a probability theoretical model in order to exclude inconsistencies and counter-intuitive effects.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>An increasing number of different Internet applications, platforms and social networks makes it easy to communicate with a huge number of individuals, to exchange and share information, news, photos, files and product recommendations and to socialize with people sharing similar interests. However, for users participating in a large number of social networks, discussion forums, etc. it is getting more and more difficult to find out who their new "friends" actually are and whether they can trust them.</p>
<p>With a reputation system users can share their knowledge and opinions about other users. The reputation system collects and evaluates the opinions of all users about the trustworthiness of others. On request it computes the resulting confidence value for the requested entity according to a trust model.</p>
<p>
        The authenticity of all opinion statements should be protected, e. g., with digital signatures, to prevent manipulations and to make the evaluation verifiable to the users. Digital signatures are only useful if the users can identify the signature key holder. If no global trusted public key infrastructure is available, users can share their knowledge about the ownership of keys in a so-called web of trust (e. g., the PGP/GnuPG web of trust [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]) by exchanging digitally signed identity certificates. However, these authenticity statements are only useful if the users can verify that the issuer is trustworthy to verify the ownership of public keys. Trust and authenticity evaluation are thus highly interdependent.
      </p>
<p>
        Various different computational trust models, reputation systems and applications using trust have been proposed [1–10]. To obtain an intuitive and consistent trust model one must define clearly what a confidence value represents and find a sound mathematical basis for the computation with confidence values. Confidence values have strong similarities to probability values. The most sophisticated trust models are therefore based on probability theory. Maurer [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] proposed a probabilistic model in which confidence values for trust and authenticity relations correspond to probability values. However, it does not model negative opinions (distrust) and entities cannot have more than one key. Credential Networks and related models proposed by Haenni [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], Jonczy [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and Kohlas [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] also model confidence values as probabilities. Besides degrees of support and uncertainty the confidence values can also express degrees of refutation (e. g., distrust). However, confidence values may contain either degrees of belief and ignorance (the terms uncertainty and ignorance are used interchangeably), disbelief and ignorance, or belief and disbelief, but they cannot contain degrees of belief, disbelief and ignorance at the same time. Jøsang's Subjective Logic [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] can express opinions with degrees of belief, ignorance and disbelief (at the same time), but the approach to compute the resulting confidence values is (although based on probability theory, too) quite different. In the model of Maurer and in Credential Networks the initial confidence values are uncertain and the inference rules are deterministic, whereas in Subjective Logic the uncertainty is modeled in the operators of the inference rules, i. e., the confidence value of a conclusion is computed from the confidence values of the preconditions. Unfortunately, this leads to the problem that the resulting confidence value generally depends on the order in which the operators are applied. It seems that Subjective Logic can not be used to evaluate arbitrary networks of trust and authenticity statements without using questionable workarounds [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
<p>
        If users can express both positive (supporting) and negative (refuting) opinions, then the combination of contradictory opinions can lead to conflicts. In Credential Networks and Subjective Logic the probability mass associated with conflicting combinations is eliminated and the remaining probability mass is renormalized. Zadeh [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] has shown that conflict elimination and re-normalization approaches (like Dempster's rule of combination [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]) can produce counterintuitive effects.
      </p>
<p>
        This article proposes a new integrated approach to evaluate uncertain and conflicting trust and authenticity statements without eliminating conflict. This avoids the counter-intuitive effects of re-normalizations. The trust model is based on a combination of the inference rules from [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and the calculus and operators from [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], extended by a new operator for reasoning with authenticity statements. Sect. 2 describes the representation of trust and authenticity statements, Sect. 3 the representation of confidence values and the corresponding operators. The new inference rules are formulated in Sect. 4. Sect. 5 proposes different algorithms to compute the resulting confidence value, Sect. 6 compares the computation time of these algorithms and Sect. 7 concludes the article.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Model of Trust and Authenticity Statements</title>
<p>An opinion refers to a trust or authenticity statement Hj with an associated confidence value tj. A first-hand opinion is an opinion that is based only on the experience and knowledge of a single entity (the trustor or issuer) and that is independent of other opinions. A second-hand opinion is an opinion that is derived from other opinions and that is thus not independent.</p>
      <sec id="sec-2-1">
<title>Trust Statements</title>
        <p>
          We define trust as "a unidirectional relation between a trustor and a trustee expressing the strong belief of the trustor that the trustee will behave as expected with respect to a particular capability within a particular context" [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. Therefore we represent the standard form of a trust statement as follows:
        </p>
        <sec id="sec-2-1-1">
<title>Trust(trustor, trustee, r, hmin..hmax)</title>
<p>(1) The trustor can be an entity or a key, the trustee an entity, a description or a key (see Tab. 1). An entity (EA, EB, . . . ) can be a person, an organization, a network node, etc. referred to by a local identifier. To exchange opinions with others, users have to use unique descriptions or public keys to refer to other entities. A description (DA, DB, . . . ) consists of a list of names, identifiers or attributes that uniquely identifies the described entity. Entities may have several different descriptions. A public key (KA, KB, . . . ) is the public part of an asymmetric key pair. The holder uses the key pair to sign trust or authenticity statements (certificates). An entity can use several different key pairs at the same time.</p>
<p>The capability r refers to an application specific capability (r1, r2, . . . ) or to the capability rPKI, which represents the capability to honestly and carefully verify that a description uniquely refers to the holder of a particular key pair.</p>
<p>We distinguish different types of trust identified by a different number of recommendation hops (h): Functional trust expresses the belief that the trustee has the capability r and is described by h = 0. Recommendation trust for h = 1 hop expresses the belief that the trustee can recommend someone with capability r, recommendation trust for h = 2 hops that the trustee can recommend someone who can recommend someone with capability r, etc. Each standard form trust statement can specify the desired range of recommendation hops hmin..hmax.</p>
<p>For the evaluation of trust statements we need in addition trust statements in the slightly different internal form. These trust statements refer not to a range, but to a single recommendation hop value h ≥ 0, and they have an additional parameter, the chain length l ≥ 1:</p>
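<p>The two forms, and the expansion of a standard-form range into single-hop internal statements described later in Sect. 5.1, can be sketched in Python (type and function names are illustrative, not from the article):</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustStd:
    """Standard form: Trust(trustor, trustee, r, hmin..hmax)."""
    trustor: str
    trustee: str
    r: str
    hmin: int
    hmax: int

@dataclass(frozen=True)
class TrustInt:
    """Internal form: Trust(trustor, trustee, r, h, l) with h >= 0, l >= 1."""
    trustor: str
    trustee: str
    r: str
    h: int
    l: int

def to_internal(t: TrustStd) -> list:
    # one single-hop (l = 1) internal statement per hop value in the range
    return [TrustInt(t.trustor, t.trustee, t.r, h, 1)
            for h in range(t.hmin, t.hmax + 1)]
```

<p>For example, a standard-form statement with hmin = 0 and hmax = 2 expands into three internal statements with h = 0, 1, 2, each of chain length 1.</p>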
        </sec>
        <sec id="sec-2-1-2">
<title>Trust(trustor, trustee, r, h, l)</title>
<p>Trust is not transitive in general, but trust statements can be combined in certain cases to trust chains according to the transitive trust inference rule (7) described in Sect. 4.1. The chain length l of the derived trust statement refers to the number of first-hand trust statements in the trust chain.</p>
          <p>Authenticity statements express the strong belief of the issuer that a
description belongs to an entity, that a public key belongs to an entity or that a
description belongs to the holder of a public key:</p>
        </sec>
        <sec id="sec-2-1-3">
<title>Auth(issuer, actor 1, actor 2)</title>
<p>The issuer is an entity or a public key, actor 1 and actor 2 are entities, descriptions or public keys. All four possible combinations are listed in Tab. 1 (certificates can not contain local identifiers for entities (EA, EB, . . . ) because they would be meaningless to other entities).</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
<title>Confidence Values</title>
<p>This section introduces discrete and continuous confidence values as well as operators for reasoning with discrete confidence values. Users express their opinions with continuous confidence values, while the discrete confidence values are used internally only for reasoning with opinions.</p>
      <p>
        3.1 Representation of Discrete and Continuous Confidence Values
Users can have different and possibly conflicting opinions about trust and authenticity statements. Therefore, we can not definitively decide whether a statement H is "true" or "false". We can only evaluate known indications that support or refute H. It is possible that neither supporting nor refuting or that both supporting and refuting indications for H are found. Therefore we describe knowledge of supporting and refuting indications independently. For each statement H we introduce the propositions H+ and H− to describe that the reputation system is aware of indications that imply that H must be true and that H must be false, respectively. We also introduce the four discrete confidence values belief (+), ignorance (?), disbelief (−) and conflict to represent the four possible combinations of these propositions (see Tab. 2). They can be seen as "truth values" of a paraconsistent logic [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
<p>As statements can in fact not be both true and false at the same time, we can conclude that first-hand opinions can not have the confidence value conflict. However, if we combine statements of different (disagreeing) entities, it is possible to find both H+ and H−, i. e., the confidence value of derived (second-hand) opinions can be conflict. Conflict must not be confused with partial support and partial refutation (ambivalent opinions). An entity that has for example experienced some positive and some negative interactions can express this opinion with continuous confidence values.</p>
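<p>The correspondence of Tab. 2 can be sketched as a small lookup table; the symbol "±" used here for conflict is a stand-in, since the article's glyph is not reproduced in this copy:</p>

```python
# Discrete confidence values as sets of propositions about a statement H
# (reconstruction of Tab. 2; "±" stands in for the article's conflict symbol).
PROPOSITIONS = {
    "+": {"H+"},          # belief: indications that H must be true
    "?": set(),           # ignorance: no indications either way
    "-": {"H-"},          # disbelief: indications that H must be false
    "±": {"H+", "H-"},    # conflict: supporting and refuting indications
}
```
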
<p>Continuous confidence values t = (b, i, d, c) with b, i, d, c ∈ [0, 1] and b + i + d + c = 1 express degrees of belief, ignorance, disbelief and conflict. The value b represents the issuer's subjective estimation of the probability that there are indications supporting (but no refuting) H. Similarly, d represents the subjective estimation of the probability that there are indications refuting (but no supporting) H. c represents the subjective estimation of the probability that there are both supporting and refuting indications for H at the same time, and i represents the subjective estimation of the probability that there are neither supporting nor refuting indications for H. For the same reason as before, c must be zero in all first-hand opinions, whereas second-hand opinions can contain conflict. Nevertheless, ambivalent first-hand opinions can be expressed by continuous confidence values with both b &gt; 0 and d &gt; 0. A user that has made many positive and few negative experiences can choose, for example, a first-hand confidence value with b = 0.7 and d = 0.1 (i. e., t = (0.7, 0.2, 0.1, 0)). Thus, in first-hand statements b can be seen as the lower bound and 1 − d as the upper bound for the estimated subjective probability that H must be true.</p>
<p>The degrees of ignorance and conflict in resulting confidence values have different meanings, and applications should handle high degrees of ignorance and conflict differently: A high degree of ignorance indicates that the reputation system has little information about the requested statement and suggests searching more relevant statements, if possible. A high degree of conflict, however, shows that the requested statement H is controversial. This suggests that the requester should verify whether the trust and authenticity assignments he made and that cause the conflict are correct.</p>
<p>Continuous confidence values can be condensed to a single value w, if desired:
w = b + wi·i + wd·d + wc·c  (4)
The parameters wi, wd and wc represent weights for the degrees of ignorance, disbelief and conflict, e. g., wi = 0.5, wd = −1 and wc = 0. They can be chosen according to the preferences of the application and allow for rather optimistic or rather pessimistic behavior in the cases of uncertainty and conflict.</p>
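<p>Eq. (4) is direct to implement; the defaults below use the example weights from the text (the sign of wd is garbled in this copy and is assumed here to be −1):</p>

```python
def condense(t, wi=0.5, wd=-1.0, wc=0.0):
    """Condense a continuous confidence value t = (b, i, d, c) into a
    single value w = b + wi*i + wd*d + wc*c (Eq. 4)."""
    b, i, d, c = t
    return b + wi * i + wd * d + wc * c
```

<p>With the first-hand value t = (0.7, 0.2, 0.1, 0) from Sect. 3.1 this yields w = 0.7 + 0.5·0.2 − 0.1 = 0.7.</p>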
<p>3.2 Operators to Combine Discrete Confidence Values
This section describes the recommendation and authentication operators. The operators define whether Hz+ and Hz− can be derived from a set of premises (Hx+, Hx−, Hy+, Hy−). The short notation with statements is provided for convenience and will be used to formulate the inference rules in Sect. 4.</p>
<p>Recommendation Operator The recommendation operator (⊗) is used to concatenate two trust statements or a trust with an authenticity statement. It is reasonable for a user to adopt the opinions of trustworthy entities. However, it is not reasonable (it is in fact even dangerous) to assume that untrustworthy (malicious or incompetent) entities always tell the opposite of the truth. Instead, opinions of untrustworthy entities should be ignored. Therefore, we do not draw any conclusions from Hx−. The operator is thus defined as follows:</p>
      <p>Hx ⊗ Hy ⊢ Hz  ⇔  Hx+, Hy+ ⊢ Hz+ ;  Hx+, Hy− ⊢ Hz−  (5)</p>
      <p>This reads as follows: Hz follows from a combination of Hx and Hy with the recommendation operator. If there are supporting indications for Hx and for Hy, then infer Hz+. If there are supporting indications for Hx and refuting indications for Hy, then infer Hz−. Fig. 1 (left) shows the corresponding "truth table" for the discrete confidence values (t′z = t′x ⊗ t′y).</p>
<p>Authentication Operator The authentication operator (⊙) is used to reason with two authenticity relations between entities, descriptions and public keys:</p>
      <p>Hx ⊙ Hy ⊢ Hz  ⇔  Hx+, Hy+ ⊢ Hz+ ;  Hx+, Hy− ⊢ Hz− ;  Hx−, Hy+ ⊢ Hz−  (6)</p>
      <p>The operator definition can be understood as follows: Assume Hx and Hy represent statements like "A and B belong together" and "B and C belong together", respectively. If we have supporting indications for both statements, then this supports that A and C belong together (Hz). If we have indications that A and B belong together but that B does not belong to C, then we conclude that A does not belong to C either. If neither A belongs to B nor does B belong to C, then we can draw no conclusion about A and C. Fig. 1 (right) shows the corresponding truth table.</p>
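<p>Both operators can be sketched as functions on the discrete values, following the "truth tables" of Fig. 1 as reconstructed above (the operator glyphs are rendered as plain functions; None means that nothing can be derived):</p>

```python
def recommend(tx, ty):
    """Recommendation operator (Eq. 5) on discrete values '+', '?', '-'.
    Conclusions are drawn only from a trustworthy (believed) first premise."""
    if tx == "+" and ty in ("+", "-"):
        return ty          # adopt the trusted source's opinion (or its refutation)
    return None            # opinions of untrusted or unknown sources are ignored

def authenticate(tx, ty):
    """Authentication operator (Eq. 6): two supported premises support the
    conclusion; one supported and one refuted premise refute it."""
    if tx == "+" and ty == "+":
        return "+"
    if (tx, ty) in (("+", "-"), ("-", "+")):
        return "-"
    return None
```

<p>Note that the article actually reasons on the propositions H+ and H−, so a derived conflict simply means that both rules fired for the same conclusion; the functions above cover the single-proposition cases.</p>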
    </sec>
    <sec id="sec-4">
<title>Inference Rules</title>
<p>The inference rules specify which conclusions the reputation system can draw from a set of given trust and authenticity propositions.</p>
      <p>4.1 Transitive Trust Inference Rule
This inference rule describes the transitivity property of trust statements. It defines in which cases two trust statements for the same capability r can be combined with the recommendation operator in order to derive a new trust statement from the trustor of the first statement (A) to the trustee of the second statement (C). The trustor A can be an entity (EA) or a public key (KA). The second statement can be a trust statement or a trust certificate, i. e., B can be an entity (EB) or a public key (KB). The final trustee C can be an entity (EC), a public key (KC) or a description (DC).</p>
<p>Trust(A, B, r, h + l2, l1) ⊗ Trust(B, C, r, h, l2) ⊢ Trust(A, C, r, h, l1 + l2)  (7)</p>
      <p>This inference rule differs from other proposed transitive trust inference rules in that it allows the combination of trust statements only if the number of recommendation hops matches: the number of recommendation hops of the first statement must equal the sum of the recommendation hops and the chain length of the second statement. The chain length of the resulting statement is the sum of the chain lengths of the input statements. This ensures that the recommendation hop value of the trust statements decreases by one throughout the chain of first-hand trust relations (e. g., h = 2, h = 1, h = 0).</p>
<p>The example in Fig. 2 illustrates the inference rule. The transitive trust inference rule allows to combine H1+ = Trust+(EA, EB, r, 2, 1) with H2+ = Trust+(EB, EC, r, 1, 1) to H4+ = Trust+(EA, EC, r, 1, 2) and then H4+ with H3− = Trust−(EC, ED, r, 0, 1) to H5− = Trust−(EA, ED, r, 0, 3).</p>
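<p>The hop-matching condition of rule (7) and the derivation of Fig. 2 can be checked with a small sketch (internal-form statements as plain tuples; the helper name is illustrative):</p>

```python
def chain(t1, t2):
    """Transitive trust inference (Eq. 7): combine internal-form trust
    statements (trustor, trustee, r, h, l) if hops and capability match."""
    a, b1, r1, h1, l1 = t1
    b2, c, r2, h2, l2 = t2
    if b1 != b2 or r1 != r2 or h1 != h2 + l2:
        return None  # endpoints, capability or hop counts do not match
    return (a, c, r1, h2, l1 + l2)

h1 = ("EA", "EB", "r", 2, 1)   # H1: recommendation trust, h = 2
h2 = ("EB", "EC", "r", 1, 1)   # H2: recommendation trust, h = 1
h3 = ("EC", "ED", "r", 0, 1)   # H3: functional trust, h = 0
h4 = chain(h1, h2)             # -> ("EA", "EC", "r", 1, 2)
h5 = chain(h4, h3)             # -> ("EA", "ED", "r", 0, 3)
```

<p>The signs follow from the recommendation operator: H1+ and H2+ yield H4+, while H4+ combined with the refuting H3− yields H5−.</p>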
<p>A number of simple rules allow to infer from trust assigned to an entity to trust assigned to the holder of a key and to trust assigned to an entity identified by a description, and vice versa. If an entity is trustworthy, then the holder of a key that belongs to this entity is trustworthy, too, and vice versa:</p>
<p>Auth(EA, KC, EC) ⊗ Trust(EA, EC, r, h, l) ⊢ Trust(EA, KC, r, h, l)  (8)</p>
      <p>Auth(EA, KC, EC) ⊗ Trust(EA, KC, r, h, l) ⊢ Trust(EA, EC, r, h, l)  (9)</p>
      <p>If an entity is trustworthy, then the entity identified by a description that belongs to this entity is trustworthy, too, and vice versa:</p>
      <p>Auth(EA, DC, EC) ⊗ Trust(EA, EC, r, h, l) ⊢ Trust(EA, DC, r, h, l)  (10)</p>
      <p>Auth(EA, DC, EC) ⊗ Trust(EA, DC, r, h, l) ⊢ Trust(EA, EC, r, h, l)  (11)</p>
      <p>If the holder of a key is trustworthy, then the entity identified by a description that belongs to this key holder is trustworthy, too, and vice versa. This applies to trust relations and trust certificates:</p>
      <p>Auth(EA, KC, DC) ⊗ Trust(EA, KC, r, h, l) ⊢ Trust(EA, DC, r, h, l)  (12)</p>
      <p>Auth(EA, KC, DC) ⊗ Trust(EA, DC, r, h, l) ⊢ Trust(EA, KC, r, h, l)  (13)</p>
      <p>Auth(KA, KC, DC) ⊗ Trust(KA, KC, r, h, l) ⊢ Trust(KA, DC, r, h, l)  (14)</p>
      <p>Auth(KA, KC, DC) ⊗ Trust(KA, DC, r, h, l) ⊢ Trust(KA, KC, r, h, l)  (15)</p>
<p>4.3 Local Authenticity Inference Rule
If an entity EA has partial knowledge about whether an entity EB is the holder of a key KB, whether a description DB refers to the entity EB or whether the description DB refers to the holder of the key KB, then it can draw further conclusions about the confidence values of the authenticity statements between EB, KB and DB. If the confidence values of two corresponding authenticity relations are known, then the confidence value of the third authenticity relation can be derived with the authentication operator:</p>
      <p>Auth(EA, KC, DC) ⊙ Auth(EA, DC, EC) ⊢ Auth(EA, KC, EC)  (16)</p>
      <p>Auth(EA, KC, DC) ⊙ Auth(EA, KC, EC) ⊢ Auth(EA, DC, EC)  (17)</p>
      <p>Auth(EA, KC, EC) ⊙ Auth(EA, DC, EC) ⊢ Auth(EA, KC, DC)  (18)</p>
<p>4.4 Authenticity Inference with Authenticity Confirmation
If a trustor (EA or KA) trusts a trustee (EB or KB) to issue only correct authenticity relations or identity certificates (capability rPKI), then the trustor can conclude that authenticity relations or identity certificates of the trustee are correct:</p>
      <p>Trust(EA, EB, rPKI, 0, l) ⊗ Auth(EB, KC, DC) ⊢ Auth(EA, KC, DC)  (19)</p>
      <p>Trust(EA, KB, rPKI, 0, l) ⊗ Auth(KB, KC, DC) ⊢ Auth(EA, KC, DC)  (20)</p>
      <p>Trust(KA, KB, rPKI, 0, l) ⊗ Auth(KB, KC, DC) ⊢ Auth(KA, KC, DC)  (21)</p>
<p>4.5 Uniqueness Conditions
Two further conclusions can be drawn from the condition that each public key has only one holder and that each description refers to only one entity. If A knows that EB is the holder of KB, then it can infer that all other entities Ej are not the holder of KB. Similarly, if A knows that EB has the description DB, then it can infer that all other entities do not have the description DB (A can be an entity or a key):</p>
      <p>Auth+(A, KB, EB) ⊢ Auth−(A, KB, Ej) ;  Auth+(A, DB, EB) ⊢ Auth−(A, DB, Ej)</p>
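<p>The uniqueness conditions translate directly into negative propositions; a sketch, assuming a known universe of entities (names illustrative):</p>

```python
def uniqueness_negations(issuer, key_or_desc, holder, entities):
    """If Auth+(A, K_B, E_B) (or Auth+(A, D_B, E_B)) holds, then every
    other entity E_j is not the holder: derive Auth-(A, K_B, E_j)."""
    return [("Auth-", issuer, key_or_desc, e)
            for e in entities if e != holder]
```
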
    </sec>
    <sec id="sec-5">
<title>Confidence Value Computation</title>
<p>The reputation system collects all issued first-hand trust and authenticity opinions Hj with associated continuous confidence values tj (with cj = 0). Users can then send requests in the form of a standard form trust statement or an authenticity statement to the reputation system. The reputation system then processes all collected opinions. It applies the inference rules to derive trust and authenticity statements and it computes the resulting continuous confidence value t0 of the requested statement H0 from the confidence values of the relevant first-hand statements. As the components of the continuous first-hand confidence values (b, i and d) represent probabilities, we define the resulting confidence value by a random experiment and propose different algorithms for the computation of the resulting confidence value.</p>
      <p>5.1 Probabilistic Model for the Confidence Value Computation
The components of the computed resulting confidence value t0 = (b0, i0, d0, c0) for H0 are computed from the combination of all available first-hand opinions with the inference rules under the assumption that the confidence values of the opinions of the requestor are correct. In short, b0 is the computed lower bound for the probability that the combination of the available first-hand opinions leads to the conclusion that H0 must be true (but not that H0 must be false). Similarly, d0 is the computed lower bound for the probability that the combination of the available first-hand opinions leads to the conclusion that H0 must be false (but not that H0 must be true). The degree of conflict c0 is the computed probability that the combination of the first-hand opinions leads to the contradicting conclusion that H0 must be both true and false at the same time. The degree of ignorance is the remaining probability i0 = 1 − b0 − d0 − c0.</p>
<p>The following description of a random experiment provides a more detailed definition for t0: We assume that the reputation system has collected J first-hand opinions, i. e., the statements Hj (j = 1, 2, . . . J) with associated continuous confidence values tj = (bj, ij, dj, 0). For each first-hand statement Hj choose a discrete confidence value t′j from {+, ?, −} according to the weights bj, ij and dj, i. e., choose t′j = + with probability bj, t′j = ? with probability ij and t′j = − with probability dj. Statements with the discrete confidence value ignorance don't contribute knowledge and can be discarded (this optimization does not change the result; the resulting continuous confidence value t0 nevertheless contains the correct degree of ignorance). Each remaining first-hand statement Hj with associated discrete confidence value t′j corresponds to a set of first-hand propositions according to Tab. 2.</p>
<p>The inference rules always operate on trust propositions in the internal representation. We therefore have to replace each standard-form trust statement Trust(A, B, r, hmin..hmax) by a list of single-hop trust statements in internal form with chain length l = 1: Trust(A, B, r, hmin, l), Trust(A, B, r, hmin + 1, l), . . . , Trust(A, B, r, hmax, l). The internal trust statements inherit their assigned discrete confidence value from the standard-form trust statement. Next, we apply all inference rules (see Sect. 4) to derive all (positive and negative) deducible propositions from the set of all known first-hand propositions and all already derived propositions. To get back to trust statements in standard form we conclude H0+ = Trust+(A, B, r, hmin..hmax) if we have been able to derive a proposition H0+,h = Trust+(A, B, r, h, l) with hmin ≤ h ≤ hmax. Similarly, we conclude H0− if we have been able to derive a proposition H0−,h.</p>
<p>To obtain the resulting continuous confidence value of a requested trust or authenticity statement we compute the probability that the random experiment leads to a set of first-hand propositions from which we can derive positive and negative propositions for the requested statement H0. The components of the resulting confidence value t0 = (b0, i0, d0, c0) are defined as follows: b0 is the probability that H0+ (but not H0−) can be derived and d0 is the probability that H0− (but not H0+) can be derived. The probability that neither H0+ nor H0− can be derived is i0, and c0 is the probability that both H0+ and H0− can be derived.</p>
<p>
        In contrast to other trust models (e. g., [
        <xref ref-type="bibr" rid="ref10 ref11 ref5 ref6">5, 6, 10, 11</xref>
        ]) we propose not to eliminate the degree of conflict, not only to avoid counter-intuitive effects of renormalizations but also because it provides valuable information to the requesting user or application (see Sect. 3.1).
      </p>
      <p>5.2 Approximation and Exact Computation Algorithms
This section presents different possibilities to implement the computation of an approximation or of the exact value of the resulting continuous confidence value t0 according to Sect. 5.1. All exact algorithms return the same resulting confidence value t0, but differ in computation time. The result of the approximation gets arbitrarily close to the exact result if the number of iterations is sufficiently large.</p>
<p>To keep the computation time small we recommend for all algorithms to precompute all possible paths: We first set up a "superposition" of possible first-hand propositions. For each statement Hj with continuous confidence value tj = (bj, ij, dj, 0) we select Hj+ if bj &gt; 0 and we select (possibly in addition) Hj− if dj &gt; 0. Then we translate all trust propositions into the internal form, apply all inference rules and record the dependencies, i. e., we trace which sets of first-hand propositions (premises) allow to derive which conclusions. Each set of first-hand propositions that allows to (directly or indirectly) derive the positive requested proposition H0+ is called a positive path for H0, each set that allows to derive the negative proposition H0− a negative path for H0. Next, we select the set of positive paths and the set of negative paths for H0 and minimize these paths, i. e., we remove all paths that contain at least one other path in the set. We finally obtain the set of minimal positive paths A+ = {a+1, a+2, . . . , a+k+} and the set of minimal negative paths A− = {a−1, a−2, . . . , a−k−}.</p>
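<p>The minimization step ("remove all paths that contain at least one other path") is a superset filter; a sketch with paths represented as sets of propositions:</p>

```python
def minimize(paths):
    """Keep only minimal paths: drop every path that strictly contains
    another path in the set, and drop duplicates."""
    minimal = []
    for p in paths:
        if any(q < p for q in paths):   # a strictly smaller path exists
            continue
        if p not in minimal:            # also drop duplicates
            minimal.append(p)
    return minimal
```
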
<p>Approximation with Monte-Carlo Simulation An obvious approach to determine an approximation for the resulting confidence value is to run the described random experiment N times and to count in how many experiments the selected set of first-hand propositions contains at least one positive (but no negative) path (nb), no paths (ni), at least one negative (but no positive) path (nd) or both positive and negative paths (nc). The approximation for the confidence value is t0 = (1/N)(nb, ni, nd, nc). The choice of N allows to adjust the trade-off between precision and computation time.</p>
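<p>A Monte-Carlo sketch of this approximation (statement names and input values are illustrative; paths are sets of (statement, sign) pairs as in the minimization sketch above):</p>

```python
import random

def monte_carlo(confidences, pos_paths, neg_paths, n=100_000, seed=1):
    """Approximate t0 = (b0, i0, d0, c0) by running the random
    experiment of Sect. 5.1 n times."""
    rng = random.Random(seed)
    nb = ni = nd = nc = 0
    for _ in range(n):
        props = set()
        for name, (b, i, d) in confidences.items():
            u = rng.random()
            if u < b:              # belief with probability b
                props.add((name, "+"))
            elif u >= b + i:       # disbelief with probability d
                props.add((name, "-"))
            # otherwise ignorance: the statement is discarded
        pos = any(p <= props for p in pos_paths)
        neg = any(p <= props for p in neg_paths)
        if pos and neg:
            nc += 1
        elif pos:
            nb += 1
        elif neg:
            nd += 1
        else:
            ni += 1
    return (nb / n, ni / n, nd / n, nc / n)
```

<p>With illustrative first-hand values b1 = 0.5 for H1 and (b2, i2, d2) = (0.7, 0.1, 0.2) for H2, and the paths of the simple trust chain example, the result approaches t0 = (0.35, 0.55, 0.1, 0).</p>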
<p>Possible Worlds Algorithm A simple algorithm to compute the exact value is to go through the list of all possible combinations of first-hand propositions (so-called possible worlds), to compute the probability of each of those possible worlds and to check for each world whether the set of first-hand propositions of this world contains the minimal paths. The sum of the probabilities of all worlds that contain at least one positive and at least one negative path is c0, b0 is the sum of probabilities of all worlds that contain at least one positive, but no negative path, and d0 the sum of probabilities of all worlds that contain at least one negative, but no positive path. The degree of ignorance is i0 = 1 − b0 − d0 − c0.</p>
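<p>An exact sketch of the possible worlds algorithm (same path representation and illustrative inputs as the Monte-Carlo sketch):</p>

```python
from itertools import product

def possible_worlds(confidences, pos_paths, neg_paths):
    """Exact t0 = (b0, i0, d0, c0) by enumerating every possible world,
    i.e. every assignment of '+', '?' or '-' to the first-hand statements."""
    names = list(confidences)
    b0 = d0 = c0 = 0.0
    for world in product("+?-", repeat=len(names)):
        prob = 1.0
        for name, sign in zip(names, world):
            b, i, d = confidences[name]
            prob *= {"+": b, "?": i, "-": d}[sign]
        props = {(n, s) for n, s in zip(names, world) if s != "?"}
        pos = any(p <= props for p in pos_paths)
        neg = any(p <= props for p in neg_paths)
        if pos and neg:
            c0 += prob
        elif pos:
            b0 += prob
        elif neg:
            d0 += prob
    return (b0, 1.0 - b0 - d0 - c0, d0, c0)
```

<p>With the same illustrative inputs as above this returns t0 = (0.35, 0.55, 0.1, 0) exactly (up to floating point), since enumeration replaces sampling. The enumeration cost grows as 3^J, which is what the grouped variant below reduces.</p>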
<p>Example 1: Simple Trust Chain with Possible Worlds Algorithm The example scenario in Fig. 3 (left) consists of two trust statements between EA, EB and EC: H1 = Trust(EA, EB, r1, 1) is a recommendation trust statement for one recommendation hop (h = 1), and H2 = Trust(EB, EC, r1, 0) is a functional trust statement (h = 0). EA wants to compute the resulting functional trustworthiness of EC (H0 = Trust(EA, EC, r1, 0)).</p>
<p>First, the trust statements in standard form have to be replaced by corresponding trust statements in internal form: H1 by H1,1 = Trust(EA, EB, r1, 1, 1) and H2 by H2,0 = Trust(EB, EC, r1, 0, 1). Both refer to the same capability r1; it is therefore possible to combine H1,1 and H2,0 with the transitive trust inference rule (7) to the new functional trust statement H0,0 = Trust(EA, EC, r1, 0, 2): H1,1+, H2,0+ ⊢ H0,0+ (H1,1+ and H2,0+ allow to derive H0,0+) and H1,1+, H2,0− ⊢ H0,0−. Thus, there is only one positive path a1+ = {H1,1+, H2,0+} and one negative path a1− = {H1,1+, H2,0−} for H0,0 and thus for H0.</p>
<p>Fig. 3 (right) shows all possible combinations of the first-hand propositions,
the probability that each world occurs and the propositions that can be derived
in this world. There are no worlds in which both H0+ and H0− can be derived,
thus c0 = 0. H0+ can be derived only in the first world, therefore b0 = b1b2.
Similarly, H0− can be derived only in the third world, therefore d0 = b1d2. The
degree of ignorance is the remaining probability mass i0 = 1 − b0 − d0 − c0. With
the values in Fig. 3 we obtain t0 = (0.35, 0.55, 0.1, 0).</p>
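<p>The nine possible worlds of this example can be enumerated directly. The following is a sketch in our own encoding; the concrete confidence values are assumptions chosen so that the result reproduces t0 = (0.35, 0.55, 0.1, 0), since Fig. 3 itself is not reproduced here.</p>

```python
from itertools import product

# First-hand statements as (b_j, i_j, d_j); assumed values that yield
# the result of Example 1, t0 = (0.35, 0.55, 0.1, 0).
statements = {1: (0.5, 0.5, 0.0), 2: (0.7, 0.1, 0.2)}
pos_paths = [{1: '+', 2: '+'}]   # a1+ = {H1+, H2+}
neg_paths = [{1: '+', 2: '-'}]   # a1- = {H1+, H2-}

# Each statement contributes '+' (prob. b), '-' (prob. d) or nothing (prob. i).
options = [[('+', b), ('-', d), (None, i)] for b, i, d in statements.values()]

b0 = i0 = d0 = c0 = 0.0
for world in product(*options):            # 3^2 = 9 possible worlds
    assign = dict(zip(statements, (s for s, _ in world)))
    p = 1.0
    for _, q in world:                     # probability of this world
        p *= q
    pos = any(all(assign[j] == s for j, s in path.items()) for path in pos_paths)
    neg = any(all(assign[j] == s for j, s in path.items()) for path in neg_paths)
    if pos and neg:
        c0 += p
    elif pos:
        b0 += p
    elif neg:
        d0 += p
    else:
        i0 += p

print(round(b0, 6), round(i0, 6), round(d0, 6), round(c0, 6))  # 0.35 0.55 0.1 0.0
```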
<p>Grouped Possible Worlds Algorithm The possible worlds algorithm can
be improved by subdividing the set of relevant first-hand statements into as few
groups g1, . . . , gu as possible. Two statements, Hj and Hm, belong to the same
group if and only if the following condition holds for each positive, negative and
conflicting path: If the path contains a proposition for Hj (Hj+ or Hj−), then it
must also contain a proposition for Hm (Hm+ or Hm−).</p>
<p>In the preparation step we construct for each group a list of all relevant
combinations of propositions of the statements in the group. This list contains
all combinations that contain exactly one proposition (i. e., either Hj+ or Hj−)4 for
each statement and that are identical to the corresponding section of at least one
(positive or negative) path. An additional element of this list consists of an empty
set. It represents all remaining possible combinations of propositions, i. e., all
combinations that contain neither Hj+ nor Hj− for at least one statement Hj of
the group. We can subsume these combinations because they have the same effect
on the derivability of propositions of H0. For each element of the list we compute
the probability that this combination will occur (within the group) from the
continuous confidence values of the statements. The probability associated with
the empty set is the sum of the probabilities of the contained combinations of
propositions (i. e., the remaining probability). Thus, the sum of all probabilities
is one.</p>
<p>Next, in the main step, we go through all possible worlds. Each world consists
of one possible combination of these prepared proposition-combinations of the
groups, i. e., for each group we select one proposition-combination from the
prepared list of the group. We multiply the precomputed probabilities of the
chosen proposition-combinations to obtain the resulting probability of the world.
Finally, we compute b0, i0, d0 and c0 just as in the possible worlds algorithm.
Example 2: Parallel Trust Chain with Grouped Possible Worlds
Algorithm The scenario in Fig. 4 consists of two parallel trust chains from EA to ED.
EA requests the confidence value for the resulting functional trustworthiness of
ED (H0 = Trust(EA; ED; r1; 0)). The trust statements in standard form are
replaced by statements in internal form: H1 by H1;1 = Trust(EA; EB; r1; 1; 1),
H2 by H2;0 = Trust(EB; ED; r1; 0; 1), H3 by H3;1 = Trust(EA; EC ; r1; 1; 1)
and H4 by H4;0 = Trust(EC ; ED; r1; 0; 1). We can combine H1;1 with H2;0
and H3;1 with H4;0 with the transitive trust inference rule (7). We obtain
4 no combination can contain both Hj+ and Hj− because cj = 0</p>
<p>Fig. 4: two parallel trust chains, EA → EB → ED (statements H1 = Trust(EA; EB; r1; 1) and H2 = Trust(EB; ED; r1; 0)) and EA → EC → ED (statements H3 = Trust(EA; EC ; r1; 1) and H4 = Trust(EC ; ED; r1; 0)).</p>
      <p>
two positive paths A+ = {{H1+;1, H2+;0}, {H3+;1, H4+;0}} and two negative paths
A− = {{H1+;1, H2−;0}, {H3+;1, H4−;0}}. Propositions for H1;1 and H2;0 always
appear together in paths; the same holds for H3;1 and H4;0. Therefore we can
divide the statements into two groups g1 = {H1;1, H2;0} and g2 = {H3;1, H4;0}.</p>
<p>In the preparation step we set up a list for each group that contains all
relevant possible combinations of the propositions and their probabilities (see
Tab. 3). For each group we find three relevant combinations: one combination
supports a positive path and one a negative path. The third entry with the
empty set represents the remaining combinations.</p>
<p>To compute the resulting confidence value t0 for H0 we set up Tab. 5 with
all 3 · 3 = 9 possible combinations of the entries in the lists (possible worlds), the
probabilities of each world and the derivable propositions for H0. Then we add
the probabilities and obtain b0 = b1b2b3b4 + b1b2(1 − b3b4 − b3d4) + (1 − b1b2 −
b1d2)b3b4, i0 = (1 − b1b2 − b1d2)(1 − b3b4 − b3d4), d0 = b1d2b3d4 + b1d2(1 − b3b4 −
b3d4) + (1 − b1b2 − b1d2)b3d4 and c0 = b1b2b3d4 + b1d2b3b4. With the values in
Fig. 4 we obtain t0 = (0.7112, 0.0532, 0.07, 0.1656).</p>
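<p>The grouped computation for this example reduces to nine worlds built from two three-entry group lists. A sketch follows; the individual confidence values are assumptions chosen so that the result reproduces t0 = (0.7112, 0.0532, 0.07, 0.1656), since Fig. 4 itself is not reproduced here.</p>

```python
from itertools import product

# Assumed first-hand confidence values (b_j, d_j) for the two chains.
b1, b2, d2 = 0.9, 0.8, 0.1   # chain EA -> EB -> ED
b3, b4, d4 = 0.8, 0.7, 0.2   # chain EA -> EC -> ED

# Preparation step: per-group list of relevant proposition-combinations.
g1 = [('+', b1 * b2),                  # supports the positive path
      ('-', b1 * d2),                  # supports the negative path
      (None, 1 - b1 * b2 - b1 * d2)]   # empty set: remaining combinations
g2 = [('+', b3 * b4),
      ('-', b3 * d4),
      (None, 1 - b3 * b4 - b3 * d4)]

# Main step: 3 x 3 = 9 possible worlds instead of 3^4 = 81.
b0 = i0 = d0 = c0 = 0.0
for (s1, p1), (s2, p2) in product(g1, g2):
    p = p1 * p2
    pos = '+' in (s1, s2)   # some positive path is contained
    neg = '-' in (s1, s2)   # some negative path is contained
    if pos and neg:
        c0 += p
    elif pos:
        b0 += p
    elif neg:
        d0 += p
    else:
        i0 += p

print(round(b0, 4), round(i0, 4), round(d0, 4), round(c0, 4))  # 0.7112 0.0532 0.07 0.1656
```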
<p>Computation with Inclusion-exclusion Formula This algorithm computes
the exact resulting confidence value directly from the minimal positive and
negative paths for H0. In addition, we need the set of minimal conflicting paths.
Therefore we set up all possible combinations consisting of one positive and one
negative path (ax± = ay+ ∪ az− with y = 1, . . . , k+, z = 1, . . . , k−), minimize the set
and obtain A± = {a1±, a2±, . . . , ak±±} (with k± ≤ k+k−). A useful optimization is
to eliminate all paths that contain both Hj+ and Hj− (because cj = 0).</p>
<p>First, we compute the degree of conflict c0 from the confidence values of the
first-hand statements in the set of minimal paths with the inclusion-exclusion
formula I(A±): c0 is the probability that a possible world chosen according to
Sect. 5.1 will contain at least one conflicting path. Thus, we add the probabilities
of all minimal paths, subtract the probabilities of all unions of two minimal paths,
add the probabilities of all unions of three minimal paths, etc.:</p>
<p>c0 = I(A±) = Σn=1..k± (−1)^(n+1) Σ(1 ≤ j1 &lt; · · · &lt; jn ≤ k±) P(aj1 ∪ · · · ∪ ajn)
= Σj1=1..k± P(aj1) − Σ(1 ≤ j1 &lt; j2 ≤ k±) P(aj1 ∪ aj2) + · · · + (−1)^(k±+1) P(a1 ∪ · · · ∪ ak±)   (23)</p>
      <p>
P(a) denotes the probability that path a is contained in the set of first-hand
propositions of a chosen possible world:</p>
<p>P(a) = Πj: Hj+∈a or Hj−∈a pj,  with pj = 0 if Hj+ ∈ a and Hj− ∈ a; pj = bj if Hj+ ∈ a and Hj− ∉ a; pj = dj if Hj+ ∉ a and Hj− ∈ a.   (24)</p>
We obtain b0 + c0 with the inclusion-exclusion formula applied to the minimal
positive paths, thus b0 = I(A+) − c0. Similarly, the degree of disbelief is d0 =
I(A−) − c0 and finally we obtain i0 = 1 − b0 − d0 − c0.</p>
<p>Example 3: Authenticity Verification with Inclusion-Exclusion
Formula Fig. 5 shows an example scenario consisting of the first-hand statements
H1, . . . , H6 with associated confidence values. Entity EA wants to know whether
entity ED is the holder of the key KD and therefore requests the resulting
confidence value for H0 = Auth(EA; KD; ED).</p>
<p>Fig. 5: example scenario with entity EA, keys KB, KC and KD, description DD and entity ED. The first-hand statements Hj and their confidence values (bj, ij, dj):
1 Trust(EA; KB; rPKI; 0::1): (0.95, 0.05, 0)
2 Trust(EA; KC ; rPKI; 0): (0.85, 0.15, 0)
3 Trust(KB; KC ; rPKI; 0): (0.9, 0.1, 0)
4 Auth(KB; KD; DD): (0.7, 0.2, 0.1)
5 Auth(KC ; KD; DD): (0.8, 0.2, 0)
6 Auth(EA; DD; ED): (1, 0, 0)</p>
<p>First, we transform the trust statements from standard form into the
internal form: H1 is transformed into H1;0 = Trust(EA; KB; rPKI; 0; 1) and H1;1 =
Trust(EA; KB; rPKI; 1; 1), H2 into H2;0 = Trust(EA; KC ; rPKI; 0; 1), etc. Then
we create the set of propositions that represents a superposition of all possible
worlds according to the confidence values of the statements (see Tab. 6, H1+;0,
. . . H6+). Next, we apply the inference rules to the propositions of this set
(including the already derived propositions). The remaining rows of Tab. 6 list the
derived propositions as well as the used inference rules and the premises. The
transitive trust inference rule (7) allows for example to derive the new
proposition H7+ = Trust+(EA; KC ; rPKI; 0; 2) from H1+;1 and H3+;0. Then the minimal
positive and negative paths can be determined. We find the three positive paths
{H1+, H4+, H6+}, {H2+, H5+, H6+} and {H1+, H3+, H5+, H6+} and one negative path
{H1+, H4−, H6+}. We can thus construct the set of minimal conflicting paths:
{H1+, H2+, H4−, H5+, H6+}, {H1+, H3+, H4−, H5+, H6+} and {H1+, H4+, H4−, H6+}. The
last path can be eliminated since c4 = 0.</p>
<p>Next we compute the degrees of conflict, belief and disbelief with the
inclusion-exclusion formula: c0 = b1b2d4b5b6 + b1b3d4b5b6 − b1b2b3d4b5b6 = 0.07486, b0 =
b1b4b6 + b2b5b6 + b1b3b5b6 − b1b2b4b5b6 − b1b3b4b5b6 − b1b2b3b5b6 + b1b2b3b4b5b6 − c0 =
0.92358 − c0 = 0.84872 and d0 = b1d4b6 − c0 = 0.095 − c0 = 0.02014. The degree
of ignorance is i0 = 1 − b0 − d0 − c0 = 0.05628. Thus, the resulting confidence
value for H0 is t0 = (0.84872, 0.05628, 0.02014, 0.07486).</p>
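<p>Example 3 can be reproduced with a short sketch of this computation. The confidence values and paths are taken from the example above; the encoding of a path as a set of (j, sign) pairs is our own.</p>

```python
from itertools import combinations

# First-hand confidence values from Fig. 5 (b_j; d_4 = 0.1, all other d_j = 0).
b = {1: 0.95, 2: 0.85, 3: 0.9, 4: 0.7, 5: 0.8, 6: 1.0}
d = {4: 0.1}

def P(path):
    """Probability that path a is contained in a chosen possible world."""
    p = 1.0
    for j in b:
        plus = (j, '+') in path
        minus = (j, '-') in path
        if plus and minus:
            return 0.0            # c_j = 0: Hj+ and Hj- exclude each other
        if plus:
            p *= b[j]
        if minus:
            p *= d.get(j, 0.0)
    return p

def I(paths):
    """Inclusion-exclusion formula I(A) over a set of minimal paths."""
    return sum((-1) ** (n + 1) * P(frozenset().union(*comb))
               for n in range(1, len(paths) + 1)
               for comb in combinations(paths, n))

pos = [frozenset({(1, '+'), (4, '+'), (6, '+')}),
       frozenset({(2, '+'), (5, '+'), (6, '+')}),
       frozenset({(1, '+'), (3, '+'), (5, '+'), (6, '+')})]
neg = [frozenset({(1, '+'), (4, '-'), (6, '+')})]
# Conflicting paths: unions of one positive and one negative path; the
# impossible one (containing both H4+ and H4-) drops out via P = 0.
conf = [p | n for p in pos for n in neg if P(p | n) > 0]

c0 = I(conf)
b0 = I(pos) - c0
d0 = I(neg) - c0
i0 = 1 - b0 - d0 - c0
print(round(b0, 5), round(i0, 5), round(d0, 5), round(c0, 5))  # 0.84872 0.05628 0.02014 0.07486
```

<p>Here P and I implement the probability and inclusion-exclusion formulas from the text; the minimization of the conflicting-path set is trivial in this example because neither of the two remaining paths contains the other.</p>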
      <p>
        Comparison with Other Trust Models
The model of Maurer [
        <xref ref-type="bibr" rid="ref4">4</xref>
] does not allow to express degrees of disbelief. Therefore,
conflict can never occur. In all scenarios in which Maurer's model can be
applied it produces the same resulting confidence values as our model. Subjective
Logic [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] can only be applied if the network is a directed series-parallel graph
(e. g., Examples 1 and 2, but not Example 3). Credential Networks [
        <xref ref-type="bibr" rid="ref11">11</xref>
] can be
applied only if at least one component of the confidence value (bj, ij or dj) of
each first-hand confidence value is zero. Subjective Logic and Credential
Networks produce the same results as our model in all cases in which the models and
their computation approaches can be applied and in which no conflict occurs
(e. g., in Example 1). If conflicts are possible (e. g., in Examples 2 and 3), then
the results generally differ from the results of our model because these models
eliminate the probability mass associated with conflict.
      </p>
<p>Our model can thus be seen as an extension of Maurer's model, Subjective
Logic and Credential Networks that overcomes the mentioned restrictions (b &gt; 0,
i &gt; 0 and d &gt; 0 is possible, no restriction to directed series-parallel graphs).
However, we do not eliminate the degree of conflict, because this can cause
counter-intuitive effects: Consider Example 2 (Sect. 5.2). If we choose t1 = t2 =
t3 = t4 = (1, 0, 0, 0) (full trust), then the resulting confidence value is t0 =
(1, 0, 0, 0), too (in all models). If t4 changes to t4 = (0.01, 0, 0.99, 0) (almost
complete distrust), then the resulting confidence value in our model changes
to t0 = (0.01, 0, 0, 0.99), which shows EA that the trustworthiness of ED is
now highly disputed. However, in Subjective Logic and Credential Networks the
resulting confidence value does not change. This gives EA the wrong impression
that there are no trustworthy entities who distrust ED.</p>
    </sec>
    <sec id="sec-6">
      <title>Computation Time</title>
<p>Computation time is a very (although not the most) important issue for
reputation systems. The computation time to find the minimal paths appears to be
uncritical because it is possible to check the inference rules efficiently and
because the paths can be computed incrementally and in advance. Furthermore, the
paths usually remain unchanged when the confidence values of existing opinions
are updated.</p>
      <p>
The number of possible worlds to consider in the possible worlds algorithm
increases exponentially with the number of relevant first-hand statements. It is
therefore applicable if the number of relevant statements is small. It is important
to emphasize that the computation time depends only on the number of relevant
statements or paths, not on the total number. It is sufficient to consider only
statements that are issued by the requester or by authentic entities or keys that
have been found to be trustworthy. Moreover, we can ignore all statements that
are not part of a valid path, i. e., that do not contribute to answering the trust or
authenticity request. Furthermore, most trust chains will not be longer than two
or three statements. Therefore, the number of relevant statements or paths will
usually be reasonably small. Although a trust and authenticity network similar
to the PGP/GnuPG web of trust [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] can contain more than 100 000 trust and
authenticity statements, the number of statements that are directly or indirectly
(via valid paths) related to the requester will probably be below 100, and the
number of statements that are part of valid paths from the requester to the
requested statement is likely to be not higher than 10 or 20 in typical scenarios.
      </p>
<p>The number of possible worlds in the grouped possible worlds algorithm
increases exponentially with the number of groups. Thus, the computation time can
be reduced significantly if the statements can be grouped. Even large scenarios
can be evaluated efficiently as long as the relevant statements can be subdivided
into a small number of groups. In the inclusion-exclusion algorithm the number
of summands increases exponentially with the number of relevant paths. This
algorithm is therefore well suited for all scenarios with a small number of paths,
even if the number of statements is large.</p>
<p>We illustrate the influence of the scenario (i. e., the number of relevant
statements, paths and groups) on the computation time of the algorithms with two
examples5. The scenarios are constructed to emphasize the large influence
on the computation time and are not meant to be representative examples.
For simplicity the scenarios consist only of trust statements between entities and
refer to the same capability r. All confidence values contain degrees of belief,
ignorance and disbelief (b &gt; 0; i &gt; 0; d &gt; 0).</p>
<p>The scenario with e entities in Fig. 6 consists of a simple chain of e − 1
trust statements with h = 0::e recommendation hops. The possible worlds
algorithm has to evaluate 3^(e−1) worlds. The scenario contains one positive and one
negative path, therefore the inclusion-exclusion algorithm has to compute only
two summands. The grouped possible worlds algorithm creates one group with
three possible proposition-combinations: the positive path leads to belief, the
negative path to disbelief, all other combinations lead to ignorance. It thus has
to evaluate only three worlds. The diagram in Fig. 7 shows that the computation
time of the possible worlds algorithm increases exponentially with the number of
trust statements, whereas the computation time of the other algorithms increases
linearly and thus remains insignificant even for long trust chains.
5 implementation in Java 1.6; measurements on Intel Pentium M with 1.73 GHz</p>
<p>The scenario in Fig. 8 is a full mesh. All entities trust each other for h = 0::1
hops. The number of relevant statements is 2e − 3, the number of positive and
negative paths is e − 1 and the number of conflicting paths is (e − 1)(e − 2).
Thus, the computation time of the possible worlds algorithm increases more slowly
than that of the inclusion-exclusion algorithm because the number of conflicting paths
increases faster than the number of relevant statements (Fig. 9). The grouped
possible worlds algorithm subdivides the statements into e − 1 groups, which
reduces the number of possible worlds from 3^(2e−3) to 3^(e−1) worlds. Therefore the
computation time remains acceptable for a larger number of entities than with
the other algorithms.</p>
<p>The computation time depends heavily on the scenario. It is therefore difficult
to give a general recommendation for one of the algorithms. It is possible that
one algorithm outperforms another by orders of magnitude in one scenario, but
is much slower in another scenario. The presented results and measurements in
other scenarios suggest that the grouped possible worlds algorithm is in most
scenarios the fastest (or at least close to the fastest) algorithm. However, further
investigations are necessary.</p>
<p>An estimate of the computation time of the algorithms can be computed
from the number of relevant statements, paths and groups. If the expected
computation time of all exact algorithms exceeds an acceptable limit, then a Monte-Carlo
simulation can be used to compute an approximation. The number of iterations
can be chosen according to the required accuracy and the acceptable
computation time.</p>
    </sec>
    <sec id="sec-7">
      <title>Summary and Conclusions</title>
<p>We presented a detailed model to represent trust and authenticity statements as
well as confidence values, and we proposed an integrated approach to reason with
these statements and to compute resulting confidence values. The model
distinguishes clearly between entities, descriptions and keys, allows multiple keys and
descriptions per entity, distinguishes between functional and recommendation
trust and allows to specify ranges of recommendation hops in trust statements.
Confidence values allow to express degrees of belief, ignorance and disbelief. The
system is able to reason with conflicting opinions because the presented
inference rules are based on a paraconsistent logic. The computation of the resulting
confidence values is based on a probability theoretical model in order to
produce consistent results. In conflict-free scenarios our model is consistent with
the model of Maurer, Subjective Logic and Credential Networks, but overcomes
several restrictions of these models. In conflicting scenarios we do not
eliminate the degree of conflict in order to avoid counter-intuitive effects caused by
re-normalizations.</p>
<p>We proposed different algorithms to implement the confidence value
computation. Although the computation time increases exponentially with the number
of relevant statements, groups or paths, it can be expected that an acceptable
computation time can be reached in the majority of realistic scenarios. In the
other cases, we propose to compute an approximation with Monte-Carlo
simulations.</p>
      <p>Acknowledgements This work was funded by the German Research
Foundation (DFG) through the Collaborative Research Center (SFB) 627.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Ashley</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Copeland</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grahn</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wheeler</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          :
          <article-title>The GNU Privacy Handbook</article-title>
          .
          <source>The Free Software Foundation</source>
          . (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Grandison</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sloman</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>A Survey of Trust in Internet Application</article-title>
          .
          <source>IEEE Communications Surveys &amp; Tutorials</source>
          <volume>3</volume>
          (
          <issue>4</issue>
          ) (
          <year>2000</year>
          )
          <volume>2</volume>
–
          <fpage>16</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Marsh</surname>
            ,
            <given-names>S.P.</given-names>
          </string-name>
          :
          <article-title>Formalising Trust as a Computational Concept</article-title>
          .
          <source>PhD thesis</source>
          , Department of Mathematics and Computer Science, University of Stirling (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Maurer</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Modelling a Public-Key Infrastructure</article-title>
          .
          <source>In: Proc. 1996 European Symposium on Research in Computer Security (ESORICS' 96)</source>
          . Volume
          <volume>1146</volume>
          of Lecture Notes in Computer Science., Springer-Verlag (
          <year>1996</year>
          )
          <volume>325</volume>
–
          <fpage>350</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
5. Jøsang, A.:
          <article-title>Artificial Reasoning with Subjective Logic</article-title>
          .
          <source>In: Proceedings of the Second Australian Workshop on Commonsense Reasoning</source>
          . (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Haenni</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kohlas</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <article-title>In: Probabilistic Argumentation Systems</article-title>
          . Volume 5
          <article-title>(Algorithms for Uncertainty and Defeasible Reasoning) of Handbook of Defeasible Reasoning and Uncertainty Management Systems</article-title>
          . Springer (
          <year>2000</year>
          )
          <volume>221</volume>
–
          <fpage>288</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Kamvar</surname>
            ,
            <given-names>S.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schlosser</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia-Molina</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>The EigenTrust Algorithm for Reputation Management in P2P Networks</article-title>
          .
          <source>In: Proceedings of the 12th International Conference on World Wide Web</source>
          . (
          <year>2003</year>
          )
          <volume>640</volume>
–
          <fpage>651</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Demolombe</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Reasoning about trust: A formal logical framework</article-title>
          .
          <source>In: Proceedings of the Second International Conference of Trust Management (iTrust</source>
          <year>2004</year>
          ).
          <article-title>(</article-title>
          <year>2004</year>
          )
          <volume>291</volume>
–
          <fpage>303</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
9. Jøsang, A.,
          <string-name>
            <surname>Ismail</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boyd</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>A survey of trust and reputation systems for online service provision</article-title>
          .
          <source>In: Decision Support Systems</source>
          . (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Kohlas</surname>
          </string-name>
          , R.:
          <article-title>Decentralized Trust Evaluation and Public-Key Authentication</article-title>
          .
          <source>PhD thesis</source>
          , Universitat Bern (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Jonczy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haenni</surname>
          </string-name>
          , R.:
          <article-title>Credential Networks: a General Model for Distributed Trust and Authenticity Management</article-title>
          . In: PST. (
          <year>2005</year>
          )
          <volume>101</volume>
–
          <fpage>112</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
12. Jøsang, A.,
          <string-name>
            <surname>Gray</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kinateder</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
<article-title>Simplification and Analysis of Transitive Trust Networks</article-title>
          .
          <source>Web Intelligence and Agent Systems Journal</source>
          (
          <year>2006</year>
          )
          <volume>139</volume>
–
          <fpage>161</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Zadeh</surname>
            ,
            <given-names>L.A.</given-names>
          </string-name>
          :
          <article-title>Review of Books: A Mathematical Theory of Evidence</article-title>
          .
          <source>The AI Magazine</source>
          <volume>5</volume>
          (
          <issue>3</issue>
          ) (
          <year>1984</year>
          )
          <volume>81</volume>
–
          <fpage>83</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Shafer</surname>
          </string-name>
          , G.:
          <source>A Mathematical Theory of Evidence</source>
          . Princeton Univ. Press (
          <year>1976</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Gutscher</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A Trust Model for an Open, Decentralized Reputation System</article-title>
          .
          <source>In: Proceedings of the Joint iTrust and PST Conferences on Privacy Trust Management and Security (IFIPTM</source>
          <year>2007</year>
          ).
          <article-title>(</article-title>
          <year>2007</year>
          )
          <volume>285</volume>
–
          <fpage>300</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Gutscher</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
<article-title>Reasoning with Uncertain and Conflicting Opinions in Open Reputation Systems</article-title>
          .
          <source>In: Proceedings of the Fourth International Workshop on Security and Trust Management (STM</source>
          <year>2008</year>
          ).
          <article-title>(</article-title>
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>