<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Method to Evaluate Uncertain and Conflicting Trust and Authenticity Statements</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Andreas</forename><surname>Gutscher</surname></persName>
							<email>andreas.gutscher@ikr.uni-stuttgart.de</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Communication Networks and Computer Engineering</orgName>
								<orgName type="institution">Universität Stuttgart</orgName>
								<address>
									<postCode>70569</postCode>
									<settlement>Stuttgart</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Method to Evaluate Uncertain and Conflicting Trust and Authenticity Statements</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">DC4C880C989159ECBA2D822D3D1EE84B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T23:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Countless Internet applications and platforms make it easy to communicate, to collaborate and to exchange information and opinions with a huge number of individuals. However, it is getting more and more difficult to distinguish honest and reliable individuals from malicious users distributing false information or malware. Digital signatures, webs of trust and reputation systems can help to securely identify communication partners and to estimate the trustworthiness of others, but there is a lack of trust and authenticity evaluation methods that do not show counterintuitive effects in the case of conflicting opinions. This article proposes a new integrated method to evaluate uncertain and conflicting trust and authenticity statements. It introduces a set of operators and inference rules for combining and reasoning with these statements, it defines an approach to compute the resulting confidence values of derived statements and it compares different computation algorithms. The computation is based on a probability theoretical model in order to exclude inconsistencies and counter-intuitive effects.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>An increasing number of Internet applications, platforms and social networks make it easy to communicate with a huge number of individuals, to exchange and share information, news, photos, files and product recommendations and to socialize with people sharing similar interests. However, for users participating in a large number of social networks, discussion forums, etc., it is getting more and more difficult to find out who their new "friends" actually are and whether they can trust them.</p><p>With a reputation system users can share their knowledge and opinions about other users. The reputation system collects and evaluates the opinions of all users about the trustworthiness of others. On request it computes the resulting confidence value for the requested entity according to a trust model.</p><p>The authenticity of all opinion statements should be protected, e. g., with digital signatures, to prevent manipulations and to make the evaluation verifiable to the users. Digital signatures are only useful if the users can identify the signature key holder. If no globally trusted public key infrastructure is available, users can share their knowledge about the ownership of keys in a so-called web of trust (e. g., the PGP/GnuPG web of trust <ref type="bibr" target="#b0">[1]</ref>) by exchanging digitally signed identity certificates. However, these authenticity statements are only useful if the users can verify that the issuer is trusted to verify the ownership of public keys. 
Trust and authenticity evaluation are thus highly interdependent.</p><p>Various computational trust models, reputation systems and applications using trust have been proposed <ref type="bibr" target="#b0">[1]</ref><ref type="bibr" target="#b1">[2]</ref><ref type="bibr" target="#b2">[3]</ref><ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref>. To obtain an intuitive and consistent trust model one must clearly define what a confidence value represents and find a sound mathematical basis for computation with confidence values. Confidence values have strong similarities to probability values. The most sophisticated trust models are therefore based on probability theory. Maurer <ref type="bibr" target="#b3">[4]</ref> proposed a probabilistic model in which confidence values for trust and authenticity relations correspond to probability values. However, it does not model negative opinions (distrust), and entities cannot have more than one key. Credential Networks and related models proposed by Haenni <ref type="bibr" target="#b5">[6]</ref>, Jonczy <ref type="bibr" target="#b10">[11]</ref> and Kohlas <ref type="bibr" target="#b9">[10]</ref> also model confidence values as probabilities. Besides degrees of support and uncertainty, these confidence values can also express degrees of refutation (e. g., distrust). However, confidence values may contain either degrees of belief and ignorance <ref type="foot" target="#foot_0">1</ref> , disbelief and ignorance, or belief and disbelief, but they cannot contain degrees of belief, disbelief and ignorance at the same time. 
Jøsang's Subjective Logic <ref type="bibr" target="#b4">[5]</ref> can express opinions with degrees of belief, ignorance and disbelief (at the same time), but its approach to computing the resulting confidence values, although also based on probability theory, is quite different. In the model of Maurer and in Credential Networks the initial confidence values are uncertain and the inference rules are deterministic, whereas in Subjective Logic the uncertainty is modeled in the operators of the inference rules, i. e., the confidence value of a conclusion is computed from the confidence values of the preconditions. Unfortunately, this leads to the problem that the resulting confidence value generally depends on the order in which the operators are applied. It seems that Subjective Logic cannot be used to evaluate arbitrary networks of trust and authenticity statements without questionable workarounds <ref type="bibr" target="#b11">[12]</ref>.</p><p>If users can express both positive (supporting) and negative (refuting) opinions, then the combination of contradictory opinions can lead to conflicts. In Credential Networks and Subjective Logic the probability mass associated with conflicting combinations is eliminated and the remaining probability mass is renormalized. Zadeh <ref type="bibr" target="#b12">[13]</ref> has shown that conflict elimination and re-normalization approaches (like Dempster's rule of combination <ref type="bibr" target="#b13">[14]</ref>) can produce counterintuitive effects.</p><p>This article proposes a new integrated approach to evaluate uncertain and conflicting trust and authenticity statements without eliminating conflict. This avoids the counter-intuitive effects of re-normalizations. 
The trust model is based on a combination of the inference rules from <ref type="bibr" target="#b14">[15]</ref> and the calculus and operators from <ref type="bibr" target="#b15">[16]</ref>, extended by a new operator for reasoning with authenticity statements. Sect. 2 describes the representation of trust and authenticity statements, Sect. 3 the representation of confidence values and the corresponding operators. The new inference rules are formulated in Sect. 4. Sect. 5 proposes different algorithms to compute the resulting confidence value, Sect. 6 compares the computation time of these algorithms and Sect. 7 concludes the article.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Model of Trust and Authenticity Statements</head><p>An opinion refers to a trust or authenticity statement H j with an associated confidence value t j . A first-hand opinion is an opinion that is based only on the experience and knowledge of a single entity (the trustor or issuer) and that is independent of other opinions. A second-hand opinion is an opinion that is derived from other opinions and that is thus not independent.</p><p>We define trust as "a unidirectional relation between a trustor and a trustee expressing the strong belief of the trustor that the trustee will behave as expected with respect to a particular capability within a particular context" <ref type="bibr" target="#b14">[15]</ref>. Therefore we represent the standard form of a trust statement as follows:</p><formula xml:id="formula_0">Trust(trustor , trustee, r, h min ..h max )<label>(1)</label></formula><p>The trustor can be an entity or a key, the trustee an entity, a description or a key (see Tab. <ref type="figure" target="#fig_0">1</ref>). An entity (E A , E B , . . . ) can be a person, an organization, a network node, etc. referred to by a local identifier. To exchange opinions with others, users have to use unique descriptions or public keys to refer to other entities. A description (D A , D B , . . . ) consists of a list of names, identifiers or attributes that uniquely identifies the described entity. Entities may have several different descriptions. A public key (K A , K B , . . . ) is the public part of an asymmetric key pair. The holder uses the key pair to sign trust or authenticity statements (certificates). An entity can use several different key pairs at the same time. The capability r refers to an application-specific capability (r 1 , r 2 , . . . 
) or to the capability r PKI , which represents the capability to honestly and carefully verify that a description uniquely refers to the holder of a particular key pair.</p><p>We distinguish different types of trust identified by a different number of recommendation hops (h): Functional trust expresses the belief that the trustee has the capability r and is described by h = 0. Recommendation trust for h = 1 hop expresses the belief that the trustee can recommend someone with capability r, recommendation trust for h = 2 hops that the trustee can recommend someone who can recommend someone with capability r, etc. Each standard form trust statement can specify the desired range of recommendation hops h min ..h max .</p><p>For the evaluation of trust statements we need in addition trust statements in the slightly different internal form. These trust statements refer not to a range, but to a single recommendation hop value h ≥ 0 and they have an additional parameter, the chain length l ≥ 1:</p><formula xml:id="formula_1">Trust(trustor , trustee, r, h, l)<label>(2)</label></formula><p>Trust is not transitive in general, but trust statements can be combined in certain cases to trust chains according to the transitive trust inference rule (7) described in Sect. 4.1. The chain length l of the derived trust statement refers to the number of first-hand trust statements in the trust chain.</p><p>Authenticity statements express the strong belief of the issuer that a description belongs to an entity, that a public key belongs to an entity or that a description belongs to the holder of a public key:</p><formula xml:id="formula_2">Auth(issuer , actor 1 , actor 2 ) (<label>3</label></formula><formula xml:id="formula_3">)</formula><p>The issuer is an entity or a public key, actor 1 and actor 2 are entities, descriptions or public keys. All four possible combinations are listed in Tab. 1<ref type="foot" target="#foot_1">2</ref> .</p></div>
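The statement model above can be sketched as plain data types. The following Python sketch is only an illustration under our own naming conventions (the article prescribes no implementation); the expansion of a standard-form statement into internal-form statements with chain length l = 1 follows the procedure described later in Sect. 5.1.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustStd:
    """Standard-form trust statement Trust(trustor, trustee, r, h_min..h_max)."""
    trustor: str   # entity or public key, e.g. "E_A" or "K_A"
    trustee: str   # entity, description or public key
    r: str         # capability, e.g. "r1" or "r_PKI"
    h_min: int     # desired range of recommendation hops
    h_max: int

    def internal_forms(self):
        """Expand into internal-form statements, one per hop value, l = 1."""
        return [TrustInt(self.trustor, self.trustee, self.r, h, 1)
                for h in range(self.h_min, self.h_max + 1)]

@dataclass(frozen=True)
class TrustInt:
    """Internal-form trust statement Trust(trustor, trustee, r, h, l)."""
    trustor: str
    trustee: str
    r: str
    h: int   # single recommendation-hop value, h >= 0
    l: int   # chain length, l >= 1

@dataclass(frozen=True)
class Auth:
    """Authenticity statement Auth(issuer, actor1, actor2)."""
    issuer: str   # entity or public key
    actor1: str   # entity, description or public key
    actor2: str
```

For example, `TrustStd("E_A", "E_B", "r1", 0, 2).internal_forms()` yields the three internal statements with h = 0, 1, 2, each with chain length 1.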
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Confidence Values</head><p>This section introduces discrete and continuous confidence values as well as operators for reasoning with discrete confidence values. Users express their opinions with continuous confidence values while the discrete confidence values are used internally only for reasoning with opinions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Representation of Discrete and Continuous Confidence Values</head><p>Users can have different and possibly conflicting opinions about trust and authenticity statements. Therefore, we cannot definitively decide whether a statement H is "true" or "false". We can only evaluate known indications that support or refute H. It is possible that neither supporting nor refuting indications for H are found, or that both are found. Therefore we describe knowledge of supporting and refuting indications independently. For each statement H we introduce the propositions H + and H − to describe that the reputation system is aware of indications that imply that H must be true and that H must be false, respectively. We also introduce the four discrete confidence values belief (+), ignorance (∅), disbelief (−) and conflict (±) to represent the four possible combinations of these propositions (see Tab. 2). They can be seen as "truth values" of a paraconsistent logic <ref type="bibr" target="#b15">[16]</ref>. As statements can in fact not be both true and false at the same time, we can conclude that first-hand opinions cannot have the confidence value conflict. However, if we combine statements of different (disagreeing) entities, it is possible to find both H + and H − , i. e., the confidence value of derived (second-hand) opinions can be conflict. Conflict must not be confused with partial support and partial refutation (ambivalent opinions). An entity that has, for example, experienced some positive and some negative interactions can express this opinion with continuous confidence values.</p><p>Continuous confidence values t = (b, i, d, c) with b, i, d, c ∈ [0, 1] and b + i + d + c = 1 express degrees of belief, ignorance, disbelief and conflict. The value b represents the issuer's subjective estimation of the probability that there are indications supporting (but none refuting) H. 
Similarly, d represents the subjective estimation of the probability that there are indications refuting (but none supporting) H. c represents the subjective estimation of the probability that there are both supporting and refuting indications for H at the same time, and i represents the subjective estimation of the probability that there are neither supporting nor refuting indications for H. For the same reason as before, c must be zero in all first-hand opinions, whereas second-hand opinions can contain conflict. Nevertheless, ambivalent first-hand opinions can be expressed by continuous confidence values with both b &gt; 0 and d &gt; 0. A user that has had many positive and few negative experiences can choose, for example, a first-hand confidence value with b = 0.7 and d = 0.1 (i. e., t = (0.7, 0.2, 0.1, 0)). Thus, in first-hand statements b can be seen as the lower bound and 1 − d as the upper bound for the estimated subjective probability that H must be true.</p><p>The degrees of ignorance and conflict in resulting confidence values have different meanings, and applications should handle high degrees of ignorance and conflict differently: A high degree of ignorance indicates that the reputation system has little information about the requested statement and suggests searching for more relevant statements, if possible. A high degree of conflict, however, shows that the requested statement H is controversial. This suggests that the requester should verify whether the trust and authenticity assignments he made and that cause the conflict are correct.</p><p>Continuous confidence values can be condensed to a single value w, if desired:</p><formula xml:id="formula_4">w = b + w i i + w d d + w c c<label>(4)</label></formula><p>The parameters w i , w d and w c represent weights for the degrees of ignorance, disbelief and conflict, e. g., w i = 0.5, w d = −1 and w c = 0. 
They can be chosen according to the preferences of the application and allow for rather optimistic or rather pessimistic behavior in the cases of uncertainty and conflict.</p></div>
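Formula (4) is straightforward to implement. A minimal sketch in Python, using the example weights from the text (w i = 0.5, w d = −1, w c = 0) as defaults; the function name is our own:

```python
def condense(t, w_i=0.5, w_d=-1.0, w_c=0.0):
    """Condense a continuous confidence value t = (b, i, d, c) into a
    single scalar w = b + w_i*i + w_d*d + w_c*c, as in Eq. (4).
    Default weights are the example values from the text."""
    b, i, d, c = t
    assert abs(b + i + d + c - 1.0) < 1e-9, "components must sum to 1"
    return b + w_i * i + w_d * d + w_c * c
```

For the ambivalent first-hand opinion t = (0.7, 0.2, 0.1, 0) from Sect. 3.1 this yields w = 0.7 + 0.5·0.2 − 0.1 = 0.7; a purely negative opinion (0, 0, 1, 0) yields w = −1 under the pessimistic default weight w d = −1.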
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Operators to Combine Discrete Confidence Values</head><p>This section describes the recommendation and authentication operators. The operators define whether H + z and H − z can be derived from a set of premises (H + x , H − x , H + y , H − y ). The short notation with statements is provided for convenience and will be used to formulate the inference rules in Sect. 4.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Recommendation Operator</head><p>The recommendation operator (⊗) is used to concatenate two trust statements or a trust statement with an authenticity statement. It is reasonable for a user to adopt the opinions of trustworthy entities. However, it is not reasonable (it is in fact even dangerous) to assume that untrustworthy (malicious or incompetent) entities always tell the opposite of the truth. Instead, opinions of untrustworthy entities should be ignored. Therefore, we do not draw any conclusions from H − x . The operator is thus defined as follows:</p><formula xml:id="formula_5">H x ⊗ H y ⊢ H z ⇔ H + x H + y ⊢ H + z , H + x H − y ⊢ H − z<label>(5)</label></formula><p>This reads as follows: H z follows from a combination of H x and H y with the recommendation operator. If there are supporting indications for H x and for H y , then infer H + z . If there are supporting indications for H x and refuting indications for H y , then infer H − z . Fig. <ref type="figure" target="#fig_0">1</ref> shows the corresponding "truth tables" (left: recommendation operator, right: authentication operator):</p><formula xml:id="formula_6">t z = t x ⊗ t y :
t y \ t x :  +  ∅  −  ±
   +         +  ∅  ∅  +
   ∅         ∅  ∅  ∅  ∅
   −         −  ∅  ∅  −
   ±         ±  ∅  ∅  ±
t z for the authentication operator:
t y \ t x :  +  ∅  −  ±
   +         +  ∅  −  ±
   ∅         ∅  ∅  ∅  ∅
   −         −  ∅  ∅  −
   ±         ±  ∅  −  ±</formula><p>Authentication Operator The authentication operator is used to reason with two authenticity relations between entities, descriptions and public keys:</p><formula xml:id="formula_7">H x H y ⊢ H z ⇔ H + x H + y ⊢ H + z , H + x H − y ⊢ H − z , H − x H + y ⊢ H − z<label>(6)</label></formula><p>The operator definition can be understood analogously to the recommendation operator: supporting indications for both premises support the conclusion (H + z ), whereas a refuting indication for one premise combined with a supporting indication for the other refutes the conclusion (H − z ).</p></div>
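Definitions (5) and (6) can be checked mechanically. A small Python sketch under one assumption of ours: a discrete confidence value is encoded as the pair (H + known, H − known) from Tab. 2, so belief is (True, False), ignorance (False, False), disbelief (False, True) and conflict (True, True).

```python
# Discrete confidence values of Tab. 2, encoded as (H+ known, H- known):
BELIEF, IGNORANCE, DISBELIEF, CONFLICT = \
    (True, False), (False, False), (False, True), (True, True)

def recommend(x, y):
    """Recommendation operator (5): only supporting indications for the
    first premise (H_x^+) are used; H_x^- is ignored, never inverted."""
    return (x[0] and y[0], x[0] and y[1])

def authenticate(x, y):
    """Authentication operator (6): support of both premises supports the
    conclusion; refutation of either premise, combined with support of
    the other, refutes it."""
    return (x[0] and y[0], (x[0] and y[1]) or (x[1] and y[0]))
```

Iterating both functions over all sixteen input pairs reproduces the truth tables of Fig. 1; for instance, `recommend(DISBELIEF, BELIEF)` yields ignorance, while `authenticate(DISBELIEF, BELIEF)` yields disbelief.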
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Inference Rules</head><p>The inference rules specify which conclusions the reputation system can draw from a set of given trust and authenticity propositions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Transitive Trust Inference Rule</head><p>This inference rule describes the transitivity property of trust statements. It defines in which cases two trust statements for the same capability r can be combined with the recommendation operator in order to derive a new trust statement from the trustor of the first statement (A) to the trustee of the second statement (C). The trustor A can be an entity (E A ) or a public key (K A ). The second statement can be a trust statement or a trust certificate, i. e., B can be an entity (E B ) or a public key (K B ). The final trustee C can be an entity (E C ), a public key (K C ) or a description (D C ).</p><formula xml:id="formula_8">Trust(A, B, r, h + l 2 , l 1 ) ⊗ Trust(B, C, r, h, l 2 ) ⊢ Trust(A, C, r, h, l 1 + l 2 )<label>(7)</label></formula><p>This inference rule differs from other proposed transitive trust inference rules in that it allows the combination of trust statements only if the number of recommendation hops matches: The number of recommendation hops of the first statement must equal the sum of the recommendation hops and the chain length of the second statement. The chain length of the resulting statement is the sum of the chain lengths of the input statements. This ensures that the recommendation hop value of the trust statements decreases by one throughout the chain of first-hand trust relations (e. g., h = 2, h = 1, h = 0). The example in Fig. <ref type="figure">2</ref> illustrates the inference rule. The transitive trust inference rule allows one to combine</p><formula xml:id="formula_9">H + 1 = Trust + (E A , E B , r, 2, 1) with H + 2 = Trust + (E B , E C , r, 1, 1) to H + 4 = Trust + (E A , E C , r, 1, 2) and then H + 4 with H − 3 = Trust − (E C , E D , r, 0, 1) to H − 5 = Trust − (E A , E D , r, 0, 3).</formula><p>Fig. 
<ref type="figure">2</ref>. Example for application of the transitive trust inference rule</p></div>
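The hop-matching condition of rule (7) is easy to get wrong, so a small executable check may help. In this sketch an internal-form trust statement is simply a tuple (trustor, trustee, r, h, l); the function name and encoding are our own, not the article's:

```python
def combine_transitive(t1, t2):
    """Apply the transitive trust inference rule (7) to two internal-form
    trust statements, each encoded as (trustor, trustee, r, h, l).
    Returns the derived statement, or None if the statements do not chain
    (different capability, trustee/trustor mismatch, or h1 != h2 + l2)."""
    a, b1, r1, h1, l1 = t1
    b2, c, r2, h2, l2 = t2
    if b1 != b2 or r1 != r2 or h1 != h2 + l2:
        return None
    # hop value of the result comes from the second statement,
    # chain length is the sum of both chain lengths
    return (a, c, r1, h2, l1 + l2)
```

Replaying the example of Fig. 2: combining ("E_A", "E_B", "r", 2, 1) with ("E_B", "E_C", "r", 1, 1) yields ("E_A", "E_C", "r", 1, 2), which then combines with ("E_C", "E_D", "r", 0, 1) to ("E_A", "E_D", "r", 0, 3), exactly as in the text.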
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Trust in Entities, Keys and Descriptions</head><p>A number of simple rules allow to infer from trust assigned to an entity to trust assigned to the holder of a key and to trust assigned to an entity identified by a description, and vice versa. If an entity is trustworthy, then the holder of a key that belongs to this entity is trustworthy, too, and vice versa:</p><formula xml:id="formula_10">Auth(E A , K C , E C ) ⊗ Trust(E A , E C , r, h, l) Trust(E A , K C , r, h, l)<label>(8)</label></formula><p>Auth</p><formula xml:id="formula_11">(E A , K C , E C ) ⊗ Trust(E A , K C , r, h, l) Trust(E A , E C , r, h, l)<label>(9)</label></formula><p>If an entity is trustworthy, then the entity identified by a description that belongs to this entity is trustworthy, too, and vice versa:</p><formula xml:id="formula_12">Auth(E A , D C , E C ) ⊗ Trust(E A , E C , r, h, l) Trust(E A , D C , r, h, l)<label>(10)</label></formula><p>Auth</p><formula xml:id="formula_13">(E A , D C , E C ) ⊗ Trust(E A , D C , r, h, l) Trust(E A , E C , r, h, l)<label>(11)</label></formula><p>If the holder of a key is trustworthy, then the entity identified by a description that belongs to this key holder is trustworthy, too, and vice versa. This applies to trust relations and trust certificates:</p><formula xml:id="formula_14">Auth(E A , K C , D C ) ⊗ Trust(E A , K C , r, h, l) Trust(E A , D C , r, h, l)<label>(12)</label></formula><p>Auth</p><formula xml:id="formula_15">(E A , K C , D C ) ⊗ Trust(E A , D C , r, h, l) Trust(E A , K C , r, h, l)<label>(13)</label></formula><formula xml:id="formula_16">Auth(K A , K C , D C ) ⊗ Trust(K A , K C , r, h, l) Trust(K A , D C , r, h, l)<label>(14)</label></formula><formula xml:id="formula_17">Auth(K A , K C , D C ) ⊗ Trust(K A , D C , r, h, l) Trust(K A , K C , r, h, l)<label>(15)</label></formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Local Authenticity Inference Rule</head><p>If an entity E A has partial knowledge about whether an entity E B is the holder of a key K B , whether a description D B refers to the entity E B or whether the description D B refers to the holder of the key K B , then it can draw further conclusions about the confidence values of the authenticity statements between E B , K B and D B . If the confidence values of two corresponding authenticity relations are known, then the confidence value of the third authenticity relation can be derived with the authentication operator:</p><formula xml:id="formula_18">Auth(E A , K C , D C ) Auth(E A , K C , E C ) Auth(E A , D C , E C ) (16) Auth(E A , K C , D C ) Auth(E A , D C , E C ) Auth(E A , K C , E C ) (17) Auth(E A , K C , E C ) Auth(E A , D C , E C ) Auth(E A , K C , D C ) (18)</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">Authenticity Inference with Authenticity Confirmation</head><p>If a trustor (E A or K A ) trusts a trustee (E B or K B ) to issue only correct authenticity relations or identity certificates (property r PKI ), then the trustor can conclude that authenticity relations or identity certificates of the trustee are correct:</p><formula xml:id="formula_19">Trust(E A , E B , r PKI , 0, l) ⊗ Auth(E B , K C , D C ) Auth(E A , K C , D C ) (19) Trust(E A , K B , r PKI , 0, l) ⊗ Auth(K B , K C , D C ) Auth(E A , K C , D C ) (20) Trust(K A , K B , r PKI , 0, l) ⊗ Auth(K B , K C , D C ) Auth(K A , K C , D C ) (21)</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5">Uniqueness Conditions</head><p>Two further conclusions can be drawn from the condition that each public key has only one holder and that each description refers to only one entity. If A knows that E B is the holder of K B , then it can infer that all other entities are not the holder of K B . Similarly, if A knows that E B has the description D B , then it can infer that all other entities do not have the description D B (A can be an entity or a key).</p><formula xml:id="formula_20">Auth + (A, K B , E B ) ⊢ Auth − (A, K B , E j ) , Auth + (A, D B , E B ) ⊢ Auth − (A, D B , E j ) ∀E j ≠ E B<label>(22)</label></formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Confidence Value Computation</head><p>The reputation system collects all issued first-hand trust and authenticity opinions H j with associated continuous confidence value t j (with c j = 0). Users can then send requests in the form of a standard-form trust statement or an authenticity statement to the reputation system. The reputation system then processes all collected opinions. It applies the inference rules to derive trust and authenticity statements and it computes the resulting continuous confidence value t 0 of the requested statement H 0 from the confidence values of the relevant first-hand statements. As the components of the continuous first-hand confidence values (b, i and d) represent probabilities, we define the resulting confidence value by a random experiment and propose different algorithms for the computation of the resulting confidence value.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Probabilistic Model for the Confidence Value Computation</head><p>The components of the resulting confidence value t 0 = (b 0 , i 0 , d 0 , c 0 ) for H 0 are computed from the combination of all available first-hand opinions with the inference rules under the assumption that the confidence values of the opinions of the requester are correct. In short, b 0 is the computed lower bound for the probability that the combination of the available first-hand opinions leads to the conclusion that H 0 must be true (but not that H 0 must be false). Similarly, d 0 is the computed lower bound for the probability that the combination of the available first-hand opinions leads to the conclusion that H 0 must be false (but not that H 0 must be true). The degree of conflict c 0 is the computed probability that the combination of the first-hand opinions leads to the contradicting conclusion that H 0 must be both true and false at the same time. The degree of ignorance is the remaining probability</p><formula xml:id="formula_22">i 0 = 1 − b 0 − d 0 − c 0 .</formula><p>The following description of a random experiment provides a more detailed definition for t 0 : We assume that the reputation system has collected J first-hand opinions, i. e., the statements H j (j = 1, 2, . . . J) with associated continuous confidence values t j = (b j , i j , d j , 0). For each first-hand statement H j choose a discrete confidence value t j from {+, ∅, −} according to the weights b j , i j and d j , i. e., choose t j = + with probability b j , t j = ∅ with probability i j and t j = − with probability d j . Statements with the discrete confidence value ignorance do not contribute knowledge and can be discarded <ref type="foot" target="#foot_2">3</ref> . Each remaining first-hand statement H j with associated discrete confidence value t j corresponds to a set of first-hand propositions according to Tab. 
2.</p><p>The inference rules always operate on trust propositions in the internal representation. We therefore have to replace each standard-form trust statement Trust(A, B, r, h min ..h max ) by a list of single-hop trust statements in internal form with chain length l = 1: Trust(A, B, r, h min , l), Trust(A, B, r, h min + 1, l), . . . , Trust(A, B, r, h max , l). The internal trust statements inherit their assigned discrete confidence value from the standard-form trust statement. Next, we apply all inference rules (see Sect. 4) to derive all (positive and negative) deducible propositions from the set of all known first-hand propositions and all already derived propositions. To get back to trust statements in standard form we conclude H + 0 = Trust + (A, B, r, h min ..h max ) if we have been able to derive a proposition H + 0,h = Trust + (A, B, r, h, l) with h min ≤ h ≤ h max . Similarly, we conclude H − 0 if we have been able to derive a proposition H − 0,h . To obtain the resulting continuous confidence value of a requested trust or authenticity statement we compute the probability that the random experiment leads to a set of first-hand propositions from which we can derive positive and negative propositions for the requested statement H 0 . The components of the resulting confidence value t 0 = (b 0 , i 0 , d 0 , c 0 ) are defined as follows: b 0 is the probability that H + 0 (but not H − 0 ) can be derived and d 0 is the probability that H − 0 (but not H + 0 ) can be derived. The probability that neither H + 0 nor H − 0 can be derived is i 0 , and c 0 is the probability that both H + 0 and H − 0 can be derived. In contrast to other trust models (e. 
g., <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>) we propose not to eliminate the degree of conflict, not only to avoid counter-intuitive effects of renormalizations but also because it provides valuable information to the requesting user or application (see Sect. 3.1).</p></div>
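The elementary step of the random experiment in Sect. 5.1, drawing a discrete confidence value for one first-hand opinion, can be sketched as follows. The symbols '+', 'o' and '-' stand for belief, ignorance and disbelief; the function name and encoding are illustrative, not part of the article:

```python
import random

def sample_discrete(t, rng=random):
    """Draw one discrete confidence value for a first-hand opinion with
    continuous confidence value t = (b, i, d, c): '+' (belief) with
    probability b, 'o' (ignorance) with probability i and '-' (disbelief)
    with probability d, as in the random experiment of Sect. 5.1."""
    b, i, d, c = t
    assert c == 0, "first-hand opinions must have c = 0"
    u = rng.random()  # uniform in [0, 1)
    if u < b:
        return '+'
    if u < b + i:
        return 'o'
    return '-'
```

Repeating this draw for every first-hand statement produces one "possible world"; the Monte-Carlo algorithm of Sect. 5.2 simply repeats the whole experiment N times and counts outcomes.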
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Approximation and Exact Computation Algorithms</head><p>This section presents different ways to compute an approximation or the exact value of the resulting continuous confidence value t 0 according to Sect. 5.1. All exact algorithms return the same resulting confidence value t 0 , but differ in computation time. The result of the approximation gets arbitrarily close to the exact result if the number of iterations is sufficiently large.</p><p>To keep the computation time small we recommend for all algorithms precomputing all possible paths: We first set up a "superposition" of possible first-hand propositions. For each statement H j with continuous confidence value t j = (b j , i j , d j , 0) we select H + j if b j &gt; 0 and we select (possibly in addition) H − j if d j &gt; 0. Then we translate all trust propositions into the internal form, apply all inference rules and record the dependencies, i. e., we trace which sets of first-hand propositions (premises) allow us to derive which conclusions. Each set of first-hand propositions that allows us to derive (directly or indirectly) the positive requested proposition H + 0 is called a positive path for H 0 , and each set that allows us to derive the negative proposition H − 0 a negative path for H 0 . Next, we select the set of positive paths and the set of negative paths for H 0 and minimize these paths, i. e., we remove all paths that contain at least one other path in the set. We finally obtain the set of minimal positive paths A + = {a + 1 , a + 2 , . . . a + k + } and the set of minimal negative paths</p><formula xml:id="formula_23">A − = {a − 1 , a − 2 , . . . 
a − k − }.</formula><p>Approximation with Monte-Carlo Simulation An obvious approach to determine an approximation of the resulting confidence value is to run the described random experiment N times and to count in how many experiments the selected set of first-hand propositions contains at least one positive (but no negative) path (n b ), no paths (n i ), at least one negative (but no positive) path (n d ) or both positive and negative paths (n c ). The approximation of the confidence value is t0 = 1 N (n b , n i , n d , n c ). The choice of N adjusts the trade-off between precision and computation time.</p><p>Possible Worlds Algorithm A simple algorithm to compute the exact value is to go through the list of all possible combinations of first-hand propositions (so-called possible worlds), to compute the probability of each of these possible worlds and to check for each world whether the set of first-hand propositions of this world contains the minimal paths. The sum of the probabilities of all worlds that contain at least one positive and at least one negative path is c 0 , b 0 is the sum of the probabilities of all worlds that contain at least one positive, but no negative path, and d 0 the sum of the probabilities of all worlds that contain at least one negative, but no positive path. The degree of ignorance is </p><formula xml:id="formula_24">i 0 = 1−b 0 −d 0 −c 0 .</formula><formula xml:id="formula_25">H1,1 H2,0 | Probability | H0
H+1,1, H+2,0 | b1b2 | H+0
H+1,1 | b1i2 |
H+1,1, H−2,0 | b1d2 | H−0
H+2,0 | i1b2 |
(none) | i1i2 |
H−2,0 | i1d2 |
H−1,1, H+2,0 | d1b2 |
H−1,1 | d1i2 |
H−1,1, H−2,0 | d1d2 |</formula><p>Fig.</p></div>
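The Monte-Carlo approximation can be sketched in a few lines of Python. The minimal paths and confidence values below are those of Example 1 (one positive path {H+1,1, H+2,0}, one negative path {H+1,1, H−2,0}); encoding a proposition H+j / H−j as the pair (j, '+') / (j, '-') is an illustrative choice, not notation from the paper:

```python
import random

# Example 1 (Sect. 5.2): one positive and one negative minimal path for H_0.
POS_PATHS = [{(1, '+'), (2, '+')}]
NEG_PATHS = [{(1, '+'), (2, '-')}]
# Continuous confidence values t_j = (b_j, i_j, d_j) of the first-hand statements.
CONF = {1: (0.5, 0.15, 0.35), 2: (0.7, 0.1, 0.2)}

def monte_carlo(conf, pos_paths, neg_paths, n_runs=100_000, seed=7):
    """Approximate t_0 = 1/N * (n_b, n_i, n_d, n_c) by running the
    described random experiment N times."""
    rng = random.Random(seed)
    n_b = n_i = n_d = n_c = 0
    for _ in range(n_runs):
        world = set()
        for j, (b, i, d) in conf.items():
            u = rng.random()      # select H+_j with prob. b_j, H-_j with prob. d_j
            if u < b:
                world.add((j, '+'))
            elif u >= b + i:
                world.add((j, '-'))
        pos = any(a <= world for a in pos_paths)
        neg = any(a <= world for a in neg_paths)
        if pos and neg:
            n_c += 1
        elif pos:
            n_b += 1
        elif neg:
            n_d += 1
        else:
            n_i += 1
    return tuple(n / n_runs for n in (n_b, n_i, n_d, n_c))
```

With 100 000 runs the result is typically within a few thousandths of the exact value t0 = (0.35, 0.55, 0.1, 0) computed in Example 1.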
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Scenario and possible worlds table of Example 1</head><p>Example 1: Simple Trust Chain with Possible Worlds Algorithm The example scenario in Fig. <ref type="figure">3</ref> (left) consists of two trust statements: H 1 is a recommendation trust statement for one recommendation hop (h = 1), and H 2 is a functional trust statement (h = 0). E A wants to compute the resulting functional trustworthiness of</p><formula xml:id="formula_26">E C (H 0 = Trust(E A , E C , r 1 , 0)).</formula><p>First, the trust statements in standard form have to be replaced by corresponding trust statements in internal form:</p><formula xml:id="formula_27">H 1 by H 1,1 = Trust(E A , E B , r 1 , 1, 1)</formula><p>and H 2 by H 2,0 = Trust(E B , E C , r 1 , 0, 1). Both refer to the same property r 1 ; it is therefore possible to combine H 1,1 and H 2,0 with the transitive trust inference rule <ref type="bibr" target="#b6">(7)</ref> to the new functional trust statement H 0,0 = Trust(E A , E C , r 1 , 0, 2):</p><formula xml:id="formula_28">H + 1,1 , H + 2,0</formula><p>H + 0,0 (H + 1,1 and H + 2,0 allow deriving H + 0,0 ) and</p><formula xml:id="formula_29">H + 1,1 , H − 2,0</formula><p>H − 0,0 . Thus, there is only one positive path a + 1 = {H + 1,1 , H + 2,0 } and one negative path</p><formula xml:id="formula_30">a − 1 = {H + 1,1 , H − 2,0</formula><p>} for H 0,0 and thus for H 0 .</p><p>Fig. <ref type="figure">3</ref> (right) shows all possible combinations of the first-hand propositions, the probability of each world and the propositions that can be derived in it. There are no worlds in which both H + 0 and H − 0 can be derived, thus c 0 = 0. H + 0 can be derived only in the first world, therefore b 0 = b 1 b 2 . Similarly, H − 0 can be derived only in the third world, therefore</p><formula xml:id="formula_31">d 0 = b 1 d 2 .</formula><p>The degree of ignorance is the remaining probability mass i 0 = 1 − b 0 − d 0 − c 0 . 
With the values in Fig. <ref type="figure">3</ref> we obtain t 0 = (0.35, 0.55, 0.1, 0).</p></div>
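The possible worlds algorithm of Example 1 is easy to reproduce programmatically by enumerating all 3^n combinations of H+j / no proposition / H−j. The (statement, sign) pair encoding is an illustrative convention; the numbers are those of Fig. 3:

```python
from itertools import product

def possible_worlds(conf, pos_paths, neg_paths):
    """Exact t_0 = (b_0, i_0, d_0, c_0) by enumerating all 3^n possible
    worlds; conf maps statement j to (b_j, i_j, d_j)."""
    ids = sorted(conf)
    t = [0.0, 0.0, 0.0, 0.0]                  # b_0, i_0, d_0, c_0
    for choice in product((0, 1, 2), repeat=len(ids)):
        p = 1.0
        world = set()
        for j, k in zip(ids, choice):
            p *= conf[j][k]                   # 0: H+_j, 1: nothing, 2: H-_j
            if k == 0:
                world.add((j, '+'))
            elif k == 2:
                world.add((j, '-'))
        pos = any(a <= world for a in pos_paths)
        neg = any(a <= world for a in neg_paths)
        if pos and neg:
            t[3] += p
        elif pos:
            t[0] += p
        elif neg:
            t[2] += p
        else:
            t[1] += p
    return tuple(t)

# Example 1: b_0 = b1*b2, d_0 = b1*d2, c_0 = 0, i_0 the remaining mass.
conf = {1: (0.5, 0.15, 0.35), 2: (0.7, 0.1, 0.2)}
pos = [{(1, '+'), (2, '+')}]
neg = [{(1, '+'), (2, '-')}]
```

Calling `possible_worlds(conf, pos, neg)` reproduces t0 = (0.35, 0.55, 0.1, 0) up to floating-point rounding.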
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Grouped Possible Worlds Algorithm</head><p>The possible worlds algorithm can be improved by subdividing the set of relevant first-hand statements into as few groups g 1 , . . . g u as possible. Two statements, H j and H m , belong to the same group if and only if the following condition holds for each positive, negative and conflicting path: If the path contains a proposition for H j (H + j or H − j ), then it must also contain a proposition for H m (H + m or H − m ). In the preparation step we construct for each group a list of all relevant combinations of propositions of the statements in the group. This list contains all combinations that contain exactly one proposition (i. e., either H + j or H − j ) <ref type="foot" target="#foot_3">4</ref> for each statement and that are identical to the corresponding section of at least one (positive or negative) path. An additional element of this list consists of an empty set. It represents all remaining possible combinations of propositions, i. e., all combinations that contain neither H + j nor H − j for at least one statement H j of the group. We can subsume these combinations because they have the same effect on the derivability of propositions of H 0 . For each element of the list we compute the probability that this combination will occur (within the group) from the continuous confidence values of the statements. The probability associated with the empty set is the sum of the probabilities of the subsumed combinations of propositions (i. e., the remaining probability). Thus, the sum of all probabilities is one.</p><p>Next, in the main step, we go through all possible worlds. Each world consists of one possible combination of these prepared proposition-combinations of the groups, i. e., for each group we select one proposition-combination from the prepared list of the group. 
We multiply the precomputed probabilities of the chosen proposition-combinations to obtain the resulting probability of the world. Finally, we compute b 0 , i 0 , d 0 and c 0 just as in the possible worlds algorithm.</p><p>Example 2: Parallel Trust Chain with Grouped Possible Worlds Algorithm The scenario in Fig. <ref type="figure">4</ref> consists of two parallel trust chains from E A to E D . E A requests the confidence value for the resulting functional trustworthiness of E D (H 0 = Trust(E A , E D , r 1 , 0)). The trust statements in standard form are replaced by statements in internal form:</p><formula xml:id="formula_32">H 1 by H 1,1 = Trust(E A , E B , r 1 , 1, 1), H 2 by H 2,0 = Trust(E B , E D , r 1 , 0, 1), H 3 by H 3,1 = Trust(E A , E C , r 1 , 1, 1)</formula><p>and H 4 by H 4,0 = Trust(E C , E D , r 1 , 0, 1). We can combine H 1,1 with H 2,0 and H 3,1 with H 4,0 with the transitive trust inference rule <ref type="bibr" target="#b6">(7)</ref>. We obtain two positive paths</p><formula xml:id="formula_33">A + = {{H + 1,1 , H + 2,0 }, {H + 3,1 , H + 4</formula><p>,0 }} and two negative paths</p><formula xml:id="formula_34">A − = {{H + 1,1 , H − 2,0 }, {H + 3,1 , H − 4,0 }}.</formula><p>Propositions for H 1,1 and H 2,0 always appear together in paths; the same holds for H 3,1 and H 4,0 . Therefore we can divide the statements into two groups</p><formula xml:id="formula_35">g 1 = {H 1,1 , H 2,0 } and g 2 = {H 3,1 , H 4,0 }.</formula><p>In the preparation step we set up a list for each group that contains all relevant possible combinations of the propositions and their probabilities (see Tab. 3). For each group we find three relevant combinations: one combination supports a positive path and one a negative path. The third entry with the empty set represents the remaining combinations.  
</p><formula xml:id="formula_36">Propositions g1 | Probability
{H + 1,1 , H + 2,0 } | b1b2
{H + 1,1 , H − 2,0 } | b1d2
{} | 1 − b1b2 − b1d2
Propositions g2 | Probability
{H + 3,1 , H + 4,0 } | b3b4
{H + 3,1 , H − 4,0 } | b3d4
{} | 1 − b3b4 − b3d4</formula><formula xml:id="formula_37">g1 | g2 | Probability | H0
{H + 1,1 , H + 2,0 } | {H + 3,1 , H + 4,0 } | b1b2b3b4 | H + 0
{H + 1,1 , H + 2,0 } | {H + 3,1 , H − 4,0 } | b1b2b3d4 | H + 0 , H − 0
{H + 1,1 , H + 2,0 } | {} | b1b2(1 − b3b4 − b3d4) | H + 0
{H + 1,1 , H − 2,0 } | {H + 3,1 , H + 4,0 } | b1d2b3b4 | H + 0 , H − 0
{H + 1,1 , H − 2,0 } | {H + 3,1 , H − 4,0 } | b1d2b3d4 | H − 0
{H + 1,1 , H − 2,0 } | {} | b1d2(1 − b3b4 − b3d4) | H − 0
{} | {H + 3,1 , H + 4,0 } | (1 − b1b2 − b1d2)b3b4 | H + 0
{} | {H + 3,1 , H − 4,0 } | (1 − b1b2 − b1d2)b3d4 | H − 0
{} | {} | (1 − b1b2 − b1d2)(1 − b3b4 − b3d4) |</formula><p>To compute the resulting confidence value t 0 for H 0 we set up Tab. 5 with all 3 • 3 = 9 possible combinations of the entries in the lists (possible worlds), the probabilities of each world and the derivable propositions for H 0 . Then we add the probabilities and obtain</p><formula xml:id="formula_38">b 0 = b 1 b 2 b 3 b 4 + b 1 b 2 (1 − b 3 b 4 − b 3 d 4 ) + (1 − b 1 b 2 − b 1 d 2 )b 3 b 4 , i 0 = (1 − b 1 b 2 − b 1 d 2 )(1 − b 3 b 4 − b 3 d 4 ), d 0 = b 1 d 2 b 3 d 4 + b 1 d 2 (1 − b 3 b 4 − b 3 d 4 ) + (1 − b 1 b 2 − b 1 d 2 )b 3 d 4 and c 0 = b 1 b 2 b 3 d 4 + b 1 d 2 b 3 b 4 .</formula><p>With the values in Fig. <ref type="figure">4</ref> we obtain t 0 = (0.7112, 0.0532, 0.07, 0.1656).</p></div>
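A sketch of the grouped possible worlds algorithm for Example 2. Since the concrete confidence values of Fig. 4 are not reproduced in the text, the values b_j and d_j below are hypothetical placeholders; the point is that the 3·3 enumeration over the prepared group lists matches the closed-form sums for b_0, i_0, d_0 and c_0 derived in Example 2:

```python
from itertools import product

# Example 2: two positive and two negative minimal paths; propositions are
# encoded as (statement, sign) pairs (an illustrative convention).
POS = [{(1, '+'), (2, '+')}, {(3, '+'), (4, '+')}]
NEG = [{(1, '+'), (2, '-')}, {(3, '+'), (4, '-')}]

def grouped_worlds(group_lists, pos_paths, neg_paths):
    """Exact t_0; each group list holds (proposition-combination, probability)
    entries prepared as in Tab. 3, including the empty-set remainder entry."""
    t = [0.0, 0.0, 0.0, 0.0]                  # b_0, i_0, d_0, c_0
    for world in product(*group_lists):
        p = 1.0
        props = set()
        for combo, prob in world:
            p *= prob
            props |= combo
        pos = any(a <= props for a in pos_paths)
        neg = any(a <= props for a in neg_paths)
        idx = 3 if (pos and neg) else 0 if pos else 2 if neg else 1
        t[idx] += p
    return tuple(t)

# Hypothetical confidence values (Fig. 4's actual numbers are not given here):
b1, b2, d2 = 0.8, 0.7, 0.15
b3, b4, d4 = 0.9, 0.6, 0.25
g1 = [({(1, '+'), (2, '+')}, b1 * b2),    # supports a positive path
      ({(1, '+'), (2, '-')}, b1 * d2),    # supports a negative path
      (set(), 1 - b1 * b2 - b1 * d2)]     # empty-set remainder
g2 = [({(3, '+'), (4, '+')}, b3 * b4),
      ({(3, '+'), (4, '-')}, b3 * d4),
      (set(), 1 - b3 * b4 - b3 * d4)]
```

Only 3 · 3 = 9 worlds are evaluated instead of the 3^4 = 81 worlds of the ungrouped algorithm.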
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Computation with Inclusion-exclusion Formula</head><p>This algorithm computes the exact resulting confidence value directly from the minimal positive and negative paths for H 0 . In addition, we need the set of minimal conflicting paths. Therefore we set up all possible combinations consisting of one positive and one negative path (a ± x = a + y ∪ a − z with y = 1, . . . k + , z = 1, . . . k − ), minimize the set and obtain</p><formula xml:id="formula_39">A ± = {a ± 1 , a ± 2 , . . . a ± k ± } (with k ± ≤ k + k − ).</formula><p>A useful optimization is to eliminate all paths that contain both H + j and H − j (because c j = 0). First, we compute the degree of conflict c 0 from the confidence values of the first-hand statements in the set of minimal paths with the inclusion-exclusion formula I(A): c 0 is the probability that a possible world chosen according to Sect. 5.1 will contain at least one conflicting path. Thus, we add the probabilities of all minimal paths, subtract the probabilities of all unions of two minimal paths, add the probabilities of all unions of three minimal paths, etc.:</p><formula xml:id="formula_40">c 0 = I(A ± ) = Σ_{n=1..k ± } (−1)^{n+1} Σ_{1≤j1&lt;···&lt;jn≤k ± } P (a ± j1 ∪ · · · ∪ a ± jn ) = Σ_{j1=1..k ± } P (a ± j1 ) − Σ_{1≤j1&lt;j2≤k ± } P (a ± j1 ∪ a ± j2 ) + · · · + (−1)^{k ± +1} P (a ± 1 ∪ · · · ∪ a ± k ± )<label>(23)</label></formula><p>P (a) denotes the probability that path a is contained in the set of first-hand propositions of a chosen possible world:</p><formula xml:id="formula_41">P (a) = ∏_{j: H + j ∈ a or H − j ∈ a} p j with p j = 0 if H + j ∈ a and H − j ∈ a, b j if H + j ∈ a and H − j ∉ a, d j if H + j ∉ a and H − j ∈ a<label>(24)</label></formula><p>We obtain b 0 + c 0 with the inclusion-exclusion formula applied to the minimal positive paths, thus b 0 = I(A + ) − c 0 . 
Similarly, the degree of disbelief is d 0 = I(A − ) − c 0 and finally we obtain</p><formula xml:id="formula_42">i 0 = 1 − b 0 − d 0 − c 0 .</formula><p>Example 3: Authenticity Verification with Inclusion-Exclusion Formula Fig. <ref type="figure">5</ref> shows an example scenario consisting of the first-hand statements H 1 , . . . </p></div>
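The inclusion-exclusion computation can be sketched directly from Eqs. (23) and (24); the minimization of A± is omitted for brevity. The check at the end uses the paths and values of Example 1, where the only conflicting candidate path contains both H+2,0 and H−2,0 and therefore has probability zero (c0 = 0):

```python
from itertools import combinations

def path_prob(path, conf):
    """P(a) from Eq. (24): probability that path a is contained in the set
    of first-hand propositions of a chosen possible world."""
    p = 1.0
    for j in {j for j, _ in path}:
        plus, minus = (j, '+') in path, (j, '-') in path
        if plus and minus:
            return 0.0            # no world contains H+_j and H-_j together
        p *= conf[j][0] if plus else conf[j][2]   # b_j or d_j
    return p

def incl_excl(paths, conf):
    """I(A) from Eq. (23): alternating sum over all unions of minimal paths."""
    total = 0.0
    for n in range(1, len(paths) + 1):
        for subset in combinations(paths, n):
            total += (-1) ** (n + 1) * path_prob(set().union(*subset), conf)
    return total

# Example 1: conf maps j to (b_j, i_j, d_j); one positive, one negative path.
conf = {1: (0.5, 0.15, 0.35), 2: (0.7, 0.1, 0.2)}
a_pos = [{(1, '+'), (2, '+')}]
a_neg = [{(1, '+'), (2, '-')}]
a_conf = [p | n for p in a_pos for n in a_neg]   # conflicting candidate paths
c0 = incl_excl(a_conf, conf)                     # 0.0: H+2 and H-2 clash
b0 = incl_excl(a_pos, conf) - c0                 # b1*b2 = 0.35
d0 = incl_excl(a_neg, conf) - c0                 # b1*d2 = 0.1
```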
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">Comparison with Other Trust Models</head><p>The model of Maurer <ref type="bibr" target="#b3">[4]</ref> cannot express degrees of disbelief. Therefore, conflict can never occur. In all scenarios in which Maurer's model can be applied it produces the same resulting confidence values as our model. Subjective Logic <ref type="bibr" target="#b4">[5]</ref> can only be applied if the network is a directed series-parallel graph (e. g., Examples 1 and 2, but not Example 3). Credential Networks <ref type="bibr" target="#b10">[11]</ref> can be applied only if at least one component (b j , i j or d j ) of each first-hand confidence value is zero. Subjective Logic and Credential Networks produce the same results as our model in all cases in which the models and their computation approaches can be applied and in which no conflict occurs (e. g., in Example 1). If conflicts are possible (e. g., in Examples 2 and 3), then the results generally differ from the results of our model because these models eliminate the probability mass associated with conflict.</p><p>Our model can thus be seen as an extension of Maurer's model, Subjective Logic and Credential Networks that overcomes the mentioned restrictions (b &gt; 0, i &gt; 0 and d &gt; 0 is possible, no restriction to directed series-parallel graphs). However, we do not eliminate the degree of conflict, because eliminating it can cause counter-intuitive effects: Consider Example 2 (Sect. 5.2). If we choose t 1 = t 2 = t 3 = t 4 = (1, 0, 0, 0) (full trust), then the resulting confidence value is t 0 = (1, 0, 0, 0), too (in all models). If t 4 changes to t 4 = (0.01, 0, 0.99, 0) (almost complete distrust), then the resulting confidence value in our model changes to t 0 = (0.01, 0, 0, 0.99), which shows E A that the trustworthiness of E D is now highly disputed. 
However, in Subjective Logic and Credential Networks the resulting confidence value does not change. This gives E A the wrong impression that there are no trustworthy entities who distrust E D .</p></div>
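This effect can be checked numerically with the closed-form sums derived for Example 2 in Sect. 5.2 (a small sketch; the function name and parameter order are ours):

```python
def parallel_chain_t0(b1, b2, d2, b3, b4, d4):
    """t_0 = (b_0, i_0, d_0, c_0) for the parallel trust chain of Example 2,
    using the sums for b_0, i_0, d_0 and c_0 derived in Sect. 5.2."""
    q1, r1 = b1 * b2, b1 * d2    # positive / negative section of group g1
    q2, r2 = b3 * b4, b3 * d4    # positive / negative section of group g2
    b0 = q1 * q2 + q1 * (1 - q2 - r2) + (1 - q1 - r1) * q2
    d0 = r1 * r2 + r1 * (1 - q2 - r2) + (1 - q1 - r1) * r2
    c0 = q1 * r2 + r1 * q2
    i0 = (1 - q1 - r1) * (1 - q2 - r2)
    return b0, i0, d0, c0

# Full trust everywhere: t_0 = (1, 0, 0, 0).
assert parallel_chain_t0(1, 1, 0, 1, 1, 0) == (1, 0, 0, 0)
# t_4 drops to (0.01, 0, 0.99, 0): almost all mass moves to conflict,
# t_0 = (0.01, 0, 0, 0.99), while renormalizing models still report full belief.
```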
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Computation Time</head><p>Computation time is a very (although not the most) important issue for reputation systems. The computation time to find the minimal paths appears to be uncritical because it is possible to check the inference rules efficiently and because the paths can be computed incrementally and in advance. Furthermore, the paths usually remain unchanged when the confidence values of existing opinions are updated.</p><p>The number of possible worlds to consider in the possible worlds algorithm increases exponentially with the number of relevant first-hand statements. It is therefore applicable if the number of relevant statements is small. It is important to emphasize that the computation time depends only on the number of relevant statements or paths, not on the total number. It is sufficient to consider only statements that are issued by the requester or by authentic entities or keys that have been found to be trustworthy. Moreover, we can ignore all statements that are not part of a valid path, i. e., that do not contribute to answering the trust or authenticity request. Furthermore, most trust chains will not be longer than two or three statements. Therefore, the number of relevant statements or paths will usually be reasonably small. Although a trust and authenticity network similar to the PGP/GnuPG web of trust <ref type="bibr" target="#b0">[1]</ref> can contain more than 100 000 trust and authenticity statements, the number of statements that are directly or indirectly (via valid paths) related to the requester will probably be below 100, and the number of statements that are part of valid paths from the requester to the requested statement is unlikely to exceed 10 or 20 in typical scenarios.</p><p>The number of possible worlds in the grouped possible worlds algorithm increases exponentially with the number of groups. 
Thus, the computation time can be reduced significantly if the statements can be grouped. Even large scenarios can be evaluated efficiently as long as the relevant statements can be subdivided into a small number of groups. In the inclusion-exclusion algorithm the number of summands increases exponentially with the number of relevant paths. This algorithm is therefore well suited for all scenarios with a small number of paths, even if the number of statements is large.</p><p>We illustrate the influence of the scenario (i. e., the number of relevant statements, paths and groups) on the computation time of the algorithms on two examples <ref type="foot" target="#foot_4">5</ref> . The scenarios are constructed in order to emphasize the large influence on the computation time and are not meant to be representative examples. For simplicity the scenarios consist only of trust statements between entities and refer to the same capability r. All confidence values contain degrees of belief, ignorance and disbelief (b &gt; 0, i &gt; 0, d &gt; 0). Fig. <ref type="figure">6</ref>. Scenario of the simple chain example</p><p>The scenario with e entities in Fig. <ref type="figure">6</ref> consists of a simple chain of e − 1 trust statements with h = 0..e recommendation hops. The possible worlds algorithm has to evaluate 3 e−1 worlds. The scenario contains one positive and one negative path; therefore the inclusion-exclusion algorithm has to compute only two summands. The grouped possible worlds algorithm creates one group with three possible proposition-combinations: the positive path leads to belief, the negative path to disbelief, all other combinations lead to ignorance. It thus has to evaluate only three worlds. The diagram in Fig. 
<ref type="figure">7</ref> shows that the computation time of the possible worlds algorithm increases exponentially with the number of trust statements, whereas the computation time of the other algorithms increases linearly and thus remains insignificant even for long trust chains.  The scenario in Fig. <ref type="figure">8</ref> is a full mesh. All entities trust each other for h = 0..1 hops. The number of relevant statements is 2e − 3, the number of positive and negative paths is e − 1 and the number of conflicting paths is (e − 1)(e − 2). Thus, the computation time of the possible worlds algorithm increases more slowly than that of the inclusion-exclusion algorithm because the number of conflicting paths increases faster than the number of relevant statements (Fig. <ref type="figure" target="#fig_7">9</ref>). The grouped possible worlds algorithm subdivides the statements into e − 1 groups, which reduces the number of possible worlds from 3 2e−3 to 3 e−1 worlds. Therefore the computation time remains acceptable for a larger number of entities than with the other algorithms.</p><p>The computation time heavily depends on the scenario. It is therefore difficult to give a general recommendation for one of the algorithms. It is possible that one algorithm outperforms another by orders of magnitude in one scenario, but is much slower in another. The presented results and measurements in other scenarios suggest that the grouped possible worlds algorithm is in most scenarios the fastest (or at least close to the fastest) algorithm. However, further investigations are necessary.  An estimate of the computation time of the algorithms can be derived from the number of relevant statements, paths and groups. If the expected computation time of all exact algorithms exceeds an acceptable limit, then a Monte-Carlo simulation can be used to compute an approximation. 
The number of iterations can be chosen according to the required accuracy and the acceptable computation time.</p></div>
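The rough operation counts that such an estimate rests on can be written down directly (a sketch; the 2^k − 1 subset count per path set follows from the structure of the inclusion-exclusion formula and is our reading, not a figure from the measurements):

```python
def work_estimates(n_stmts, group_sizes, n_pos, n_neg, n_conf):
    """Rough work counts: worlds for the possible worlds algorithm, worlds
    for the grouped variant, and summands for inclusion-exclusion."""
    worlds = 3 ** n_stmts
    grouped = 1
    for size in group_sizes:          # entries in each prepared group list
        grouped *= size
    summands = sum(2 ** k - 1 for k in (n_pos, n_neg, n_conf))
    return worlds, grouped, summands

# Simple chain with e = 10 entities (Fig. 6): 9 statements, one group of
# three proposition-combinations, one positive and one negative path.
assert work_estimates(9, [3], 1, 1, 0) == (19683, 3, 2)
# Full mesh (Fig. 8): 2e-3 statements, e-1 groups, e-1 positive and negative
# paths, (e-1)(e-2) conflicting paths: the summand count explodes first.
```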
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Summary and Conclusions</head><p>We presented a detailed model to represent trust and authenticity statements as well as confidence values and we proposed an integrated approach to reason with these statements and to compute resulting confidence values. The model distinguishes clearly between entities, descriptions and keys, allows multiple keys and descriptions per entity, distinguishes between functional and recommendation trust and allows specifying ranges of recommendation hops in trust statements. Confidence values express degrees of belief, ignorance and disbelief. The system is able to reason with conflicting opinions because the presented inference rules are based on a paraconsistent logic. The computation of the resulting confidence values is based on a probability theoretical model in order to produce consistent results. In conflict-free scenarios our model is consistent with the model of Maurer, Subjective Logic and Credential Networks, but overcomes several restrictions of these models. In conflicting scenarios we do not eliminate the degree of conflict in order to avoid counter-intuitive effects caused by re-normalizations.</p><p>We proposed different algorithms to implement the confidence value computation. Although the computation time increases exponentially with the number of relevant statements, groups or paths, it can be expected that an acceptable computation time can be reached in the majority of realistic scenarios. In the other cases, we propose to compute an approximation with Monte-Carlo simulations.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Recommendation and authentication operator truth tables</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>1 Trust(EA, EB, r1, 1): 0.5 0.15 0.35; 2 Trust(EB, EC, r1, 0): 0.7 0.1 0.2</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>1 Fig. 4 .</head><label>14</label><figDesc>Fig. 4. Scenario of Example 2</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>H 6 with associated confidence values. Entity E A wants to know whether entity E D is the holder of the key K D and therefore requests the resulting confidence value for H 0 = Auth(E A , K D , E D ). b 0 = 0.92358 − c 0 = 0.84872 and d 0 = b 1 d 4 b 6 − c 0 = 0.095 − c 0 = 0.02014. The degree of ignorance is i 0 = 1 − b 0 − d 0 − c 0 = 0.05628. Thus, the resulting confidence value for H 0 is t 0 = (0.84872, 0.05628, 0.02014, 0.07486).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 7 .Fig. 8 .</head><label>78</label><figDesc>Fig. 7. Computation time in the simple chain example</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Fig. 9 .</head><label>9</label><figDesc>Fig. 9. Computation time in the full mesh example</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Trust and authenticity statements (relations and certificates)</figDesc><table><row><cell></cell><cell>Trust statements</cell><cell>Authenticity</cell></row><row><cell>Standard form</cell><cell>Internal form</cell><cell>statements</cell></row></table><note>RelationTrust(EA, EB, r, hmin..hmax) Trust(EA, EB, r, h, l) Auth(EA, KB, EB) Trust(EA, KB, r, hmin..hmax) Trust(EA, KB, r, h, l) Auth(EA, DB, EB) Trust(EA, DB, r, hmin..hmax) Trust(EA, DB, r, h, l) Auth(EA, KB, DB)CertificateTrust(KA, KB, r, hmin..hmax) Trust(KA, KB, r, h, l) Auth(KA, KB, DB) Trust(KA, DB, r, hmin..hmax) Trust(KA, DB, r, h, l)</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 .</head><label>2</label><figDesc>Discrete confidence values</figDesc><table><row><cell cols="3">Propositions Discrete confidence value Semantics</cell></row><row><cell>{H + }</cell><cell>t = + (belief )</cell><cell>"the indications imply that H must be true"</cell></row><row><cell>{}</cell><cell>t = ∅ (ignorance)</cell><cell>"there are no relevant indications about H"</cell></row><row><cell>{H − }</cell><cell>t = − (disbelief )</cell><cell>"the indications imply that H must be false"</cell></row><row><cell cols="2">{H + , H − } t = ± (conflict)</cell><cell>"the indications imply that H must be true</cell></row><row><cell></cell><cell></cell><cell>and that H must be false at the same time"</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head></head><label></label><figDesc>Assume H x and H y represent statements like "A and B belong together" and "B and C belong together", respectively. If we have supporting indications for both statements, then this supports that A and C belong together (H z ). If we have indications that A and B belong together but that B does not belong to C, then we conclude that A does not belong to C either. If neither A belongs to B nor does B belong to C, then we can draw no conclusion about A and C. Fig.1(right) shows the corresponding truth table.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 .</head><label>3</label><figDesc>Preparation step for the groups in Example 2</figDesc><table><row><cell>Propositions g1 Probability</cell></row><row><cell>{H + 1,1 , H + 2,0 } b1b2</cell></row><row><cell>{H + 1,1 , H − 2,0 } b1d2</cell></row><row><cell>{} 1 − b1b2 − b1d2</cell></row><row><cell>Propositions g2 Probability</cell></row><row><cell>{H + 3,1 , H + 4,0 } b3b4</cell></row><row><cell>{H + 3,1 , H − 4,0 } b3d4</cell></row><row><cell>{} 1 − b3b4 − b3d4</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4 .</head><label>4</label><figDesc>Confidence value computation in the parallel trust chain example</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 5 .</head><label>5</label><figDesc>Confidence value computation in Example 2</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">the terms uncertainty and ignorance are used interchangeably</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">certificates cannot contain local identifiers for entities (EA, EB, . . .) because they would be meaningless to other entities</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">this optimization does not change the resulting confidence value; the resulting continuous confidence value t0 nevertheless contains the correct degree of ignorance</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">no combination can contain both H + j and H − j because cj = 0</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">implementation in Java 1.6; measurements on Intel Pentium M with 1.73 GHz</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgements This work was funded by the German Research Foundation (DFG) through the Collaborative Research Center (SFB) 627.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>First, we transform the trust statements from standard form into the internal form:</p><p>r PKI , 0, 1), etc. Then we create the set of propositions that represents a superposition of all possible worlds according to the confidence values of the statements (see Tab. 6, H + 1,0 , . . . H + 6 ). Next, we apply the inference rules to the propositions of this set (including the already derived propositions). The remaining rows of Tab. 6 list the derived propositions as well as the used inference rules and the premises. The transitive trust inference rule <ref type="bibr" target="#b6">(7)</ref> allows, for example, to derive the new proposition  </p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">The GNU Privacy Handbook</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Ashley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Copeland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Grahn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Wheeler</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>The Free Software Foundation</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A Survey of Trust in Internet Applications</title>
		<author>
			<persName><forename type="first">T</forename><surname>Grandison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sloman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Communications Surveys &amp; Tutorials</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="2" to="16" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Formalising Trust as a Computational Concept</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">P</forename><surname>Marsh</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1994">1994</date>
		</imprint>
		<respStmt>
			<orgName>Department of Mathematics and Computer Science, University of Stirling</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Modelling a Public-Key Infrastructure</title>
		<author>
			<persName><forename type="first">U</forename><surname>Maurer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 1996 European Symposium on Research in Computer Security (ESORICS&apos; 96)</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting>1996 European Symposium on Research in Computer Security (ESORICS&apos; 96)</meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="1996">1996</date>
			<biblScope unit="volume">1146</biblScope>
			<biblScope unit="page" from="325" to="350" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Artificial Reasoning with Subjective Logic</title>
		<author>
			<persName><forename type="first">A</forename><surname>Jøsang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Second Australian Workshop on Commonsense Reasoning</title>
				<meeting>the Second Australian Workshop on Commonsense Reasoning</meeting>
		<imprint>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Probabilistic Argumentation Systems. Volume 5 (Algorithms for Uncertainty and Defeasible Reasoning) of Handbook of Defeasible Reasoning and Uncertainty Management Systems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Haenni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kohlas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Lehmann</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2000">2000</date>
			<publisher>Springer</publisher>
			<biblScope unit="page" from="221" to="288" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The EigenTrust Algorithm for Reputation Management in P2P Networks</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">D</forename><surname>Kamvar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Schlosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Garcia-Molina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th International Conference on World Wide Web</title>
				<meeting>the 12th International Conference on World Wide Web</meeting>
		<imprint>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="640" to="651" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Reasoning about trust: A formal logical framework</title>
		<author>
			<persName><forename type="first">R</forename><surname>Demolombe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Second International Conference on Trust Management (iTrust 2004)</title>
				<meeting>the Second International Conference on Trust Management (iTrust 2004)</meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="291" to="303" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A survey of trust and reputation systems for online service provision</title>
		<author>
			<persName><forename type="first">A</forename><surname>Jøsang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ismail</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Boyd</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Decision Support Systems</title>
				<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Decentralized Trust Evaluation and Public-Key Authentication</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kohlas</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
		<respStmt>
			<orgName>Universität Bern</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Credential Networks: a General Model for Distributed Trust and Authenticity Management</title>
		<author>
			<persName><forename type="first">J</forename><surname>Jonczy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Haenni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third Annual Conference on Privacy, Security and Trust (PST 2005)</title>
		<imprint>
			<biblScope unit="page" from="101" to="112" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Simplification and Analysis of Transitive Trust Networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Jøsang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kinateder</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Web Intelligence and Agent Systems Journal</title>
		<imprint>
			<biblScope unit="page" from="139" to="161" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Review of A Mathematical Theory of Evidence</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Zadeh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The AI Magazine</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="81" to="83" />
			<date type="published" when="1984">1984</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">A Mathematical Theory of Evidence</title>
		<author>
			<persName><forename type="first">G</forename><surname>Shafer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1976">1976</date>
			<publisher>Princeton Univ. Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A Trust Model for an Open, Decentralized Reputation System</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gutscher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Joint iTrust and PST Conferences on Privacy, Trust Management and Security (IFIPTM 2007)</title>
				<meeting>the Joint iTrust and PST Conferences on Privacy, Trust Management and Security (IFIPTM 2007)</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="285" to="300" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Reasoning with Uncertain and Conflicting Opinions in Open Reputation Systems</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gutscher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourth International Workshop on Security and Trust Management (STM 2008)</title>
				<meeting>the Fourth International Workshop on Security and Trust Management (STM 2008)</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
