<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Measurable Belief of Trust in Social Networks∗</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sanguk Noh</string-name>
          <email>sunoh@catholic.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science and Information Engineering, The Catholic University of Korea</institution>
          ,
          <addr-line>Bucheon</addr-line>
          ,
          <country country="KR">Korea</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>As Web-based online communities rapidly grow, the agents in social groups need a measurable belief of trust for safe and successful interactions. In this paper, we propose a formal model of reputation resulting from the available feedback in online communities. The notion of trust can be defined as an aggregation of consensus given a set of reputations. The expected trust of an agent further represents the center of gravity of the distribution of its trustworthiness and untrustworthiness. We then precisely describe the relationship between reputation, trust, and expected trust through a concrete example of their computations. We apply our trust model to online Internet settings to show how trust is involved in the rational decision-making of agents.</p>
      </abstract>
      <kwd-group>
        <kwd>Trust within social networks</kwd>
        <kwd>consensus aggregation</kwd>
        <kwd>Dempster-Shafer theory</kwd>
        <kwd>adaptive multi-agent systems</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>The traditional notion of trust [3] refers to an agent’s belief that other agents intend to be honest and positive towards it, and is usually built up through direct interactions in person. As online communities on the Internet rapidly grow, agents are exposed to virtual interactions as well as face-to-face interactions. The agents in online social networks communicate anonymously and can make only limited inspections. These features have made it hard for agents to decide whether or not other agents are positive or benevolent towards them. Thus, it is essential that they have a tangible model of trust for safe and successful interactions, even when they have no prior, direct interactions. This paper addresses how to assess trust in social networks, particularly as applicable to online communities. We build up a computational model of trust as a measurable concept.</p>
      <p>Our approach to the computational model of trust starts with the lesson of the “Tit for Tat” strategy in game theory for the iterated Prisoner’s Dilemma [1], which encourages social cooperation among agents. As a result of mutual behaviors in online multi-agent settings, agents will get more positive feedback from other agents if they are willing to cooperate with others; otherwise, they will receive more negative feedback. We translate the feedback resulting from social activities into the agent’s reputation as a quantitative concept. The next steps for our trust model are to apply aggregation rules to given reputation values to reach a consensus, and to calculate the expected trust, interpreted as the center of gravity of the distributions of trustworthiness and untrustworthiness. The notion of trust in our framework then represents positive expectations about others’ future behaviors.</p>
      <p>∗ This work has been supported by the Catholic University of Korea research fund and department specialization fund, respectively, granted in the program year of 2006.</p>
      <p>In the following section of this paper, we briefly compare our approach to related
research. Section 3 is devoted to our trust model that defines reputation, trust, and
expected trust. We precisely describe the relationship among them through a concrete
example of their computations. In Section 4, we apply our trust model to online
Internet transactions showing how trust affects a rational decision-making of buyers and
sellers. In the concluding Section 5, we summarize our work and mention further
research issues.</p>
    </sec>
    <sec id="sec-2">
      <title>2 Related Work</title>
      <p>Our work builds on efforts by several other researchers who have made the social concept of trust computable in a society of multi-agents. In the multi-agent community, there have been several approaches to supporting a computational model of trust. Marsh [10] introduces a simple computational model of trust, a subjective real number ranging from -1 to 1. His model has trouble handling negative values of trust and their propagation. Mui et al. [11] describe trust in a pseudo-mathematical expression and represent it as posteriors using expected utility notation. Their scheme only counts the number of cooperations (or positive events). In a distributed reputation system [8], an aging factor, a distance factor, and new experience are used to update trust. However, the assumptions behind these components of trust are not likely to be realistic and, as the authors point out, their scheme does not correctly handle negative experiences. Our model of trust represents an aggregation of consensus without any fusion problem, and effectively deals with the agent’s trustworthiness and untrustworthiness, each in the range of 0 to 1, based on actual positive and negative feedback in social networks.</p>
      <p>Other rigorous efforts have also focused on formulating a measurable belief representing trust. One of them uses a subjective probability [4, 7] to quantify trust as a social belief. In subjective logic, an agent’s opinion is represented by degrees of belief, disbelief, and uncertainty; however, its handling of uncertainty in the various operations is intuitive rather than clearly defined, and it provides not a specific value of trust but a probability certainty density function. Our trust model, in contrast, provides a specific trust value as an expected trust, considering the agent’s trustworthiness and untrustworthiness together. In another approach, the simple eBay feedback system [13] uses a feedback summary, computed by arithmetically subtracting the number of negative feedbacks from the number of positive feedbacks. The contribution of our work is to precisely define the notion of trust as a measurable social belief, and to clearly describe the relationship between reputation, trust, and
expected trust in social multi-agent settings.</p>
    </sec>
    <sec id="sec-3a">
      <title>3 Trust Model</title>
      <p>We propose a formal model of reputation resulting from available feedback in social networks. The notion of trust can then be defined as an aggregation of consensus given a set of reputations. The calculation of expected trust, in turn, yields a precise trust value as a metric. In this section, we describe the relationship between reputation, trust, and expected trust through a concrete example of their computations.</p>
      <sec id="sec-2-1">
        <title>3.1 Modeling Reputation</title>
        <p>
          Feedbacks in social networks [6, 13] represent reputation associated with the society of multiple agents. The cumulative positive and negative events or feedbacks for an agent thus constitute the agent’s reputation [8, 11]. The reputation can be described by a binary proposition1 p, for example, “A seller deals with only qualified products and delivers them on time.” in the field of online Internet transactions. Given a binary proposition p and an agent-group i judging an agent in p, the reputation of the agent in p, ωip, can be defined as follows:
ωip = {Ti, Ui}
          (
          <xref ref-type="bibr" rid="ref1">1</xref>
          )
where
        </p>
        <p>• Ti = PFi/Ni and 0 ≤ Ti ≤ 1;
• PFi is the number of positive feedbacks for p within an agent-group i;
• Ui = NFi/Ni and 0 ≤ Ui ≤ 1;
• NFi is the number of negative feedbacks for p within an agent-group i;
• ZFi is the number of neutral feedbacks for p within an agent-group i;
• Ni is the total number of feedbacks for p within an agent-group i and Ni = PFi + NFi + ZFi.</p>
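<p>As an illustrative sketch (the Python names here are ours, not the paper’s), the reputation in (1) can be computed directly from the feedback counts; note that neutral feedback enlarges Ni, so Ti + Ui need not sum to 1:</p>

```python
from dataclasses import dataclass

@dataclass
class Reputation:
    """Reputation of an agent in proposition p within one agent-group, eq. (1)."""
    T: float  # trustworthiness   Ti = PFi / Ni
    U: float  # untrustworthiness Ui = NFi / Ni

def reputation(pf: int, nf: int, zf: int) -> Reputation:
    """Build a reputation from positive (pf), negative (nf), and neutral (zf) counts."""
    n = pf + nf + zf  # Ni = PFi + NFi + ZFi
    if n == 0:
        raise ValueError("no feedback available for proposition p")
    return Reputation(T=pf / n, U=nf / n)

# 80 positive, 10 negative, 10 neutral feedbacks -> T = 0.80, U = 0.10
r = reputation(80, 10, 10)
```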
        <p>
          In the definition of reputation, as described in (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ), we assume that the feedbacks given by agents within an agent-group evaluating p are independent; further, the opinions supporting p may be only loosely related to those supporting ¬p, since there could be neutral feedback from the agent-group. The notion of reputation is thus based on independent opinions, and the sum of Ti and Ui does not necessarily equal 1.
        </p>
        <p>The cumulative positive feedbacks in social networks result from cooperativeness, i.e., trusting interactions, and establish the trustworthiness of an agent in p, while the possible number of negative feedbacks from the society affects the untrustworthiness of the agent. The trustworthiness and untrustworthiness together constitute a reputation function as a quantitative concept. The reputation of an agent varies with time and the size of the society, and clearly influences its trust. Given a set of reputations, collected at different times and from various interactions made by other agent-groups, the trust as a representative reputation will be derived.</p>
        <p>1 Any reputation in the form of a proposition can be expressed according to the context, as follows: “A buyer has an intention and capability to pay,” “The network system could be safe from any intrusions,” “A car could be reliable for ten years,” and so on.</p>
      </sec>
      <sec id="sec-2-2">
        <title>3.2 Calculating Trust Using Aggregation Rules</title>
        <p>
          We define trust as a consensus from an aggregation of reputations. The trust2 ωp for an agent in a proposition p is defined as
ωp = ωip ⊗ ωjp = {T, U}
          (
          <xref ref-type="bibr" rid="ref2">2</xref>
          )
where
        </p>
        <p>• ωip and ωjp represent reputations accumulated from an agent-group i and an agent-group j, respectively;
• T is the trustworthiness of the agent in a proposition p and 0 ≤ T ≤ 1;
• U is the untrustworthiness of the agent in a proposition p and 0 ≤ U ≤ 1.</p>
        <p>
          The trust, as described in (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ), consists of trustworthiness and untrustworthiness. These two components are determined by a set of reputations, as previously defined in (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ). To formulate the agent’s trust from reputations, expressed in degrees of trustworthiness and untrustworthiness which may or may not have the mathematical properties of probabilities, we propose a set of aggregation rules [9]. Given the reputations ωip and ωjp, the aggregation operators ⊗ = {Ψ1, ..., Ψn} used in this paper are as follows:
        </p>
        <p>1. Minimum (Ψ1): T = min(Ti, Tj), U = min(Ui, Uj);
2. Maximum (Ψ2): T = max(Ti, Tj), U = max(Ui, Uj);
3. Average (Ψ3): T = (Ti + Tj)/2, U = (Ui + Uj)/2;
4. Product (Ψ4): T = TiTj, U = UiUj;
5. Dempster-Shafer theory [5, 14, 15] (Ψ5): T = TiTj / (1 − (TiUj + TjUi)), U = UiUj / (1 − (TiUj + TjUi)).</p>
        <p>The trust, representing the degrees of belief in the agent’s truthfulness, can be obtained by applying aggregation rules to a set of reputations. The goal of aggregation is to combine reputations, each of which estimates the probability of trustworthiness and untrustworthiness for an agent, and to produce a single probability distribution that summarizes the various reputations.</p>
        <p>2 For the sake of simplicity, we explain our trust model in the simpler case of two agent-groups i and j. Our model of trust can be extended to more complicated settings involving multiple agent-groups without loss of generality.</p>
        <p>The minimum and maximum aggregation rules provide a single minimum and maximum value for T and U, respectively. The average aggregation operator simply extends a statistical summary and provides an average of the Tk’s and Uk’s coming from different agent-groups. The product rule summarizes the probabilities that coincide in T and U, respectively, given a set of reputations. Dempster’s rule3 for combining degrees of belief produces a new belief distribution that represents the consensus of the original opinions [15]. Using Dempster’s rule, the resulting values of T and U indicate the degrees of agreement on the trustworthiness and untrustworthiness of the original reputations, respectively, but completely exclude the degrees of disagreement or conflict. The advantage of using Dempster’s rule in the context of trust is that no priors and conditionals are needed.</p>
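<p>The five aggregation rules above can be sketched as follows (an illustrative Python rendering under our own naming; each rule maps two (T, U) reputation pairs to a combined pair, and Dempster’s rule divides out the conflict mass):</p>

```python
def agg_min(ri, rj):   # Ψ1: minimum
    return (min(ri[0], rj[0]), min(ri[1], rj[1]))

def agg_max(ri, rj):   # Ψ2: maximum
    return (max(ri[0], rj[0]), max(ri[1], rj[1]))

def agg_avg(ri, rj):   # Ψ3: average
    return ((ri[0] + rj[0]) / 2, (ri[1] + rj[1]) / 2)

def agg_prod(ri, rj):  # Ψ4: product
    return (ri[0] * rj[0], ri[1] * rj[1])

def agg_ds(ri, rj):    # Ψ5: Dempster's rule; conflict (TiUj + TjUi) is normalized away
    (ti, ui), (tj, uj) = ri, rj
    k = 1 - (ti * uj + tj * ui)
    return (ti * tj / k, ui * uj / k)

# Reputations of Example 1 below: ω1p = {0.80, 0.10}, ω2p = {0.70, 0.20}
w1, w2 = (0.80, 0.10), (0.70, 0.20)
t, u = agg_ds(w1, w2)  # ≈ (0.73, 0.03), the Ψ5 entry of Table 1
```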
        <p>
          Among the possible outputs of trust, we denote the trust as the consensus output using a specific aggregator, which is defined as
Ψ̂(t, u) = Ψ(Ψ1(t, u), ..., Ψn(t, u))
          (
          <xref ref-type="bibr" rid="ref3">3</xref>
          )
where
• Ψ is a function determining a specific aggregation rule;
• Ψ̂(t, u) is the aggregation rule selected with the inputs of t ∈ Tk and u ∈ Uk.
        </p>
        <p>
          Example 1. Let ω1p = {0.80, 0.10} and ω2p = {0.70, 0.20}. This is interpreted as two agent-groups evaluating p where, in each group, the number of positive feedbacks is much greater than that of negative feedbacks. Given these reputations, the aggregation rules can be applied to obtain the trust, as defined in (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ), denoting a consensus of the agent-groups’ opinions. The possible outputs of trust using the aggregation rules are summarized in Table 1.
        </p>
        <p>3 In this paper, the set of original reputations embedded in social networks is assumed to be consistent. This assumption avoids the counterintuitive results obtained using Dempster’s rule in the presence of significantly conflicting evidence, as originally pointed out by Lotfi Zadeh [16].</p>
        <p>T = (0.8)(0.7) / (1 − ((0.8)(0.2) + (0.7)(0.1))) = 0.56/0.77 ≈ 0.73;
U = (0.1)(0.2) / (1 − ((0.8)(0.2) + (0.7)(0.1))) = 0.02/0.77 ≈ 0.03.</p>
        <p>Among the possible outputs of trust, the trust can be denoted as ωp = {0.70, 0.10} when Ψ̂(t, u) = Ψ1. When the minimum, maximum, and average aggregators are used, the resulting distribution of the trust closely reflects the distributions of the reputations. In the cases of the product rule and Dempster-Shafer theory, however, the T values of the trusts (0.56 and 0.73) are much bigger than their U values (0.02 and 0.03), compared with the original distributions of the reputations. The resulting T value in Ψ5 is interpreted as a 0.73 chance that the agent in p has the trustworthiness, while the resulting U value indicates only a 0.03 chance that the agent is negatively estimated. As mentioned above, normalizing the original values of trustworthiness and untrustworthiness, which corresponds to the denominator in the above equation, keeps the opinions associated with conflict away from the trust as a consensus.</p>
        <p>To show how the aggregation rules adapt to various distributions of reputation, we consider additional sets of reputations. The possible outputs of trust with two different sets of reputations are displayed in the second and the third columns of Table 2, respectively. The example in the second column shows the case where the number of positive feedbacks is much less than that of negative feedbacks, and the third column is an example where the numbers of both feedbacks are identical. Note that the resulting distributions of trustworthiness and untrustworthiness, as displayed in Table 2, mirror their distributions in the original set of reputations.</p>
        <p>Since the available feedbacks from multiple agent-groups in social networks are classified into positive, negative, and neutral ones, the positive and negative feedbacks among them are adopted as the components of our trust model. However, these two mutually contradicting values are still not enough to represent the trust itself as degrees of belief in the agent’s truthfulness. From a practical perspective, the trust is required to be a precise value as a metric.</p>
      </sec>
      <sec id="sec-2-3">
        <title>3.3 Expected Trust</title>
        <p>
          We define expected trust as the center of gravity of the distribution of beliefs, i.e., the degrees of trustworthiness and untrustworthiness for an agent. The expected trust ω̂p is given as
ω̂p = T / (T + U)
          (
          <xref ref-type="bibr" rid="ref4">4</xref>
          )
taking into account both the trustworthiness and untrustworthiness of an agent. The expected trust thus represents the average belief in the agent’s truthfulness or cooperativeness, and translates the agent’s trust into a specific value, where 0 ≤ ω̂p ≤ 1. The higher the expected trust level for the agent, the stronger the expectation that the agent will be truthful or cooperative in future interactions. The calculation of expected trust using equation (
          <xref ref-type="bibr" rid="ref4">4</xref>
          ) gives social insight into the agent’s trust.
        </p>
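<p>A minimal Python sketch of equation (4), with names of our choosing:</p>

```python
def expected_trust(t: float, u: float) -> float:
    """Center of gravity of the (T, U) belief distribution, eq. (4)."""
    if t + u == 0:
        raise ValueError("no belief mass on trustworthiness or untrustworthiness")
    return t / (t + u)

# Trust values from Example 1 under two aggregators:
et_min = expected_trust(0.70, 0.10)   # minimum rule: 0.875
et_prod = expected_trust(0.56, 0.02)  # product rule: ≈ 0.97, cf. Table 3
```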
        <p>Example 1 (cont’d). Given a set of reputations in the three agent-groups above, the
expected trusts are shown in Table 3.</p>
        <p>This example illustrates that the expected trust provides a metric for the agent’s overall truthfulness, which consists of trustworthiness and untrustworthiness. The simple aggregation rules, i.e., minimum, maximum, average, and product, give a fairly representative trust value considering both trustworthiness and untrustworthiness, even though it is not clear which one is best for a particular setting. This may be the reason that these simple but surprisingly widely applicable rules remain popular in many contexts [9]. The product rule and Dempster-Shafer theory rate the agent’s expected trust more highly than the other simple rules. We attribute this sharp contrast between trustworthiness (0.97 and 0.96 in Table 3) and untrustworthiness (0.10 and 0.10, respectively, in Table 3) to their purely conjunctive operation, which completely ignores the degrees of disagreement or conflict.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4 Applying Trust Model to Online Internet Transactions</title>
      <p>We apply our trust model to online Internet transactions. Given the actual feedback of agent-groups in online multi-agent settings, we can convert the feedback into the agent’s reputation, denote its trust as an aggregation of reputations, and compute the expected trust as a measurable belief in the agent’s truthfulness. In this section, we examine how trust is involved in the rational decision-making of buyers and sellers.</p>
      <p>
        Suppose that there are sellers and buyers in online Internet settings. Let R be a contract price, s be the quantitative size of the contract, V(s) be the buyer’s benefit (or value) function, which reflects his/her satisfaction acquired by purchasing a number of commodities, and C(s) be the seller’s cost function, which indicates the cost to produce that amount of commodities. Given the expected trust of the buyer ω̂M, the expected utility of the buyer is given by4
EU M(s) = V(s) − ω̂M R.
        (
        <xref ref-type="bibr" rid="ref5">5</xref>
        )
Similarly, given the expected trust of the seller ω̂N, the expected utility of the seller is defined as
EU N(s) = ω̂N R − C(s).
        (
        <xref ref-type="bibr" rid="ref6">6</xref>
        )
      </p>
      <p>
        In equations (
        <xref ref-type="bibr" rid="ref5">5</xref>
        ) and (
        <xref ref-type="bibr" rid="ref6">6</xref>
        ), the expected trust is interpreted as the average belief in the buyer’s and the seller’s truthfulness or cooperativeness, respectively. The Nash equilibrium [2, 12] in online transactions then provides a solution concept in which the buyer and the seller have no incentive to choose other alternatives. The Nash bargaining solution is
      </p>
      <p>
        arg maxR (V(s) − ω̂M R)(ω̂N R − C(s))
        (
        <xref ref-type="bibr" rid="ref7">7</xref>
        )
so that the buyer and the seller both benefit if they agree on their bargaining behavior. Note that equation (7) has a unique Nash equilibrium, since an R can be determined given the expected trusts of the buyer and the seller, V(s), and C(s).
      </p>
      <p>Example 2. To derive R from the Nash bargaining solution, as defined in (7), let us take the first derivative of equation (7) with respect to R:
d/dR (V(s) − ω̂M R)(ω̂N R − C(s)) = 0;
∴ R = (ω̂N V(s) + ω̂M C(s)) / (2 ω̂M ω̂N).
      </p>
      <p>4 Our notation follows [2].</p>
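<p>The closed form for R can be checked numerically; the following sketch (with helper names of our own) confirms that the derived price maximizes the Nash product for fixed V(s) and C(s):</p>

```python
def nash_price(v, c, wm, wn):
    """R maximizing (V(s) - wm*R)(wn*R - C(s)) for fixed s, per the derivation above."""
    return (wn * v + wm * c) / (2 * wm * wn)

def nash_product(r, v, c, wm, wn):
    return (v - wm * r) * (wn * r - c)

# Illustrative values (not from the paper): V(s) = 50, C(s) = 11, both trusts 0.8.
v, c, wm, wn = 50.0, 11.0, 0.8, 0.8
r_star = nash_price(v, c, wm, wn)
# The Nash product is concave in R, so r_star should beat any nearby price.
for dr in (-1.0, -0.1, 0.1, 1.0):
    assert nash_product(r_star, v, c, wm, wn) >= nash_product(r_star + dr, v, c, wm, wn)
```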
      <p>
        Thus, the contract price R that they agree on can be determined in a Nash equilibrium. Substituting the above into (
        <xref ref-type="bibr" rid="ref5">5</xref>
        ) and rearranging terms, we get
EU M(s) = (ω̂N V(s) − ω̂M C(s)) / (2 ω̂N).
In a similar way, the expected utility of the seller is
EU N(s) = (ω̂N V(s) − ω̂M C(s)) / (2 ω̂M).
      </p>
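<p>As a sanity check on the substitution (a sketch with variable names of our own), plugging the equilibrium R back into (5) and (6) reproduces the two closed forms:</p>

```python
def eu_closed_buyer(v, c, wm, wn):   # (wn*V(s) - wm*C(s)) / (2*wn), per eq. (5) rearranged
    return (wn * v - wm * c) / (2 * wn)

def eu_closed_seller(v, c, wm, wn):  # (wn*V(s) - wm*C(s)) / (2*wm), per eq. (6) rearranged
    return (wn * v - wm * c) / (2 * wm)

# Illustrative values (not from the paper):
v, c, wm, wn = 36.0, 4.0, 0.8, 0.2
r = (wn * v + wm * c) / (2 * wm * wn)  # equilibrium price from (7)
assert abs((v - wm * r) - eu_closed_buyer(v, c, wm, wn)) < 1e-9   # matches eq. (5)
assert abs((wn * r - c) - eu_closed_seller(v, c, wm, wn)) < 1e-9  # matches eq. (6)
```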
      <p>Suppose that the buyer’s benefit function V(s) is 24 ln(2s) and the seller’s cost function C(s) is s² − 2s + 3, as usual.5 When ω̂M = ω̂N = 0.8, the quantitative size of the contract s can be determined by
d/ds (ω̂N V(s) − ω̂M C(s)) = d/ds (0.8 × 24 ln(2s) − 0.8 × (s² − 2s + 3)) = 0.</p>
      <p>That is, they both maximize their expected utilities: once the buyer’s benefit function and the seller’s cost function are decided, the quantitative size of the contract is computed as above. Thus, s = 4. The expected utilities of the buyer and the seller can also be calculated; in this case, both are 19.45. Consider now that the seller’s expected trust is low, say, ω̂N = 0.2. Then s = 2.31, and their expected utilities are EU M(s) = 10.94 and EU N(s) = 2.73. As calculated above, both the overall quantitative size of the contract and the expected utilities of the buyer and the seller are larger when the expected trust values of the agents are higher.</p>
      <p>5 We assume that the buyer’s benefit does not necessarily increase in proportion to the quantitative size of commodities, while the seller’s cost increases proportionally to produce a certain amount of commodities.</p>
    </sec>
    <sec id="sec-4">
      <title>5 Conclusions</title>
      <p>The model of trust in social networks has been continuously studied for safe and successful interactions. Our work contributes a computational model of trust as an aggregation of consensus associated with multiple agent-groups. We formulated reputation based on available feedback resulting from social interactions, calculated trust from a set of reputations using aggregation rules, and represented expected trust as a metric for the agent’s truthfulness or cooperativeness. We have shown how our trust model can be calculated in a detailed example. To show how trust is involved in the rational decision-making of interactive agents, our trust model has been applied to online Internet transactions. We believe the trust model should be applicable to real societies of multi-agent environments.</p>
      <p>As part of our ongoing work, we are applying our trust model to online Internet transactions. Given the actual feedback of customers in online multi-agent settings, for example, eBay and Auction, we will convert the feedback into the agent’s reputation, denote its trust as an aggregation of reputations, and examine how trust affects the rational decision-making of buyers and sellers. Toward this end, we will benchmark the amount of interaction between buyers and sellers when they have higher and/or lower trust values. The experiments that we are performing will also measure the global profits in a set of agent-groups employing different trust values.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>R.</given-names>
            <surname>Axelrod</surname>
          </string-name>
          ,
          <source>The Evolution of Cooperation</source>
          , Basic Books, New York (
          <year>1984</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>S.</given-names>
            <surname>Braynov</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Sandholm</surname>
          </string-name>
          ,
          <article-title>Contracting with Uncertain Level of Trust</article-title>
          ,
          <source>Computational Intelligence</source>
          , Vol.
          <volume>18</volume>
          , No.
          <volume>4</volume>
          (
          <year>2002</year>
          )
          <fpage>501</fpage>
          -
          <lpage>514</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>J.</given-names>
            <surname>Coleman</surname>
          </string-name>
          , Foundations of Social Theory, Harvard University Press, Cambridge, MA (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>A.</given-names>
            <surname>Daskalopulu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Dimitrakos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Maibaum</surname>
          </string-name>
          ,
          <article-title>Evidence-Based Electronic Contract Performance Monitoring</article-title>
          ,
          <source>INFORMS Journal of Group Decision and Negotiation</source>
          , Vol.
          <volume>11</volume>
          (
          <year>2002</year>
          )
          <fpage>469</fpage>
          -
          <lpage>485</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>A.P.</given-names>
            <surname>Dempster</surname>
          </string-name>
          ,
          <article-title>A Generalization of Bayesian Inference</article-title>
          ,
          <source>Journal of the Royal Statistical Society, Series B</source>
          , Vol.
          <volume>30</volume>
          (
          <year>1968</year>
          )
          <fpage>205</fpage>
          -
          <lpage>247</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>J.</given-names>
            <surname>Golbeck</surname>
          </string-name>
          ,
          <article-title>Generating Predictive Movie Recommendations from Trust in Social Networks</article-title>
          ,
          <source>Proceedings of the Fourth International Conference on Trust Management</source>
          , Pisa, Italy (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>A.</given-names>
            <surname>Josang</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.J.</given-names>
            <surname>Knapskog</surname>
          </string-name>
          ,
          <article-title>A Metric for Trusted Systems</article-title>
          ,
          <source>Proceedings of the 21st National Information Systems Security Conference</source>
          , Virginia, USA (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>M.</given-names>
            <surname>Kinateder</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Rothermel</surname>
          </string-name>
          ,
          <article-title>Architecture and Algorithms for a Distributed Reputation System</article-title>
          , LNCS Vol.
          <volume>2692</volume>
          , Springer-Verlag (
          <year>2003</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>L.I.</given-names>
            <surname>Kuncheva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.C.</given-names>
            <surname>Bezdek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Duin</surname>
          </string-name>
          ,
          <article-title>Decision Templates for Multiple Classifier Fusion: An Experimental Comparison</article-title>
          ,
          <source>Pattern Recognition</source>
          , Vol.
          <volume>34</volume>
          (
          <year>2001</year>
          )
          <fpage>299</fpage>
          -
          <lpage>314</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. S. Marsh,
          <article-title>Formalizing Trust as a Computational Concept</article-title>
          ,
          <source>Ph.D. thesis</source>
          , University of Stirling, UK (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>L.</given-names>
            <surname>Mui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mohtashemi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Halberstadt</surname>
          </string-name>
          ,
          <article-title>A Computational Model of Trust and Reputation</article-title>
          ,
          <source>Proceedings of the 35th Hawaii International Conference on System Sciences</source>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. J. Nash, The Bargaining Problem, Econometrica, Vol. 18 (1950) 155-162</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>13. P. Resnick and R. Zeckhauser, Trust Among Strangers in Internet Transactions: Empirical Analysis of eBay's Reputation System, The Economics of the Internet and E-Commerce, Vol. 11, Advances in Applied Microeconomics, Elsevier (2002)</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. G. Shafer, Perspectives on the Theory and Practice of Belief Functions, International Journal of Approximate Reasoning, Vol. 3 (1990) 1-40</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>15. G. Shafer and J. Pearl, eds., Readings in Uncertain Reasoning, Chapter 3 Decision Making, Chapter 7 Belief Functions, Morgan Kaufmann Publishers (1990)</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>16. L.A. Zadeh, Review of Books: A Mathematical Theory of Evidence, AI Magazine, Vol. 5, No. 3 (1984) 81-83</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>