<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a Provenance-Preserving Trust Model in Agent Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Patricia Victor</string-name>
          <email>Patricia.Victor@UGent.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chris Cornelis</string-name>
          <email>Chris.Cornelis@UGent.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martine De Cock</string-name>
          <email>Martine.DeCock@UGent.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paulo Pinheiro da Silva</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ghent University, Dept. of Applied Mathematics and CS</institution>
          ,
          <addr-line>Krijgslaan 281 (S9), 9000 Gent</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>The University of Texas at El Paso, Dept. of Computer Science</institution>
          ,
          <addr-line>El Paso, TX 79968</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2006</year>
      </pub-date>
      <fpage>22</fpage>
      <lpage>26</lpage>
      <abstract>
        <p>Social networks in which users or agents are connected to other agents and sources by trust relations are an important part of many web applications where information may come from multiple sources. Trust recommendations derived from these social networks are supposed to help agents develop their own opinions about how much they may trust other agents and sources. Despite the recent developments in the area, most of the trust models and metrics proposed so far tend to lose trust-related knowledge. We propose a new model in which trust values are derived from a bilattice that preserves valuable trust provenance information including partial trust, partial distrust, ignorance and inconsistency. We outline the problems that need to be addressed to construct a corresponding trust learning mechanism. We present initial results on the first learning step, namely trust propagation through trusted third parties (TTPs).</p>
      </abstract>
      <kwd-group>
        <kwd>Trust provenance</kwd>
        <kwd>web of trust</kwd>
        <kwd>distrust</kwd>
        <kwd>bilattice</kwd>
        <kwd>trust propagation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <sec id="sec-2-2">
        <p>As intelligent agents in the semantic web take over more
and more human tasks, they require an automated way of
trusting each other. One of the key problems in establishing
this is related to the dynamicity of trust: to grasp how trust
emerges and vanishes. Once an understanding is reached,
a new problem arises: how can the cyberinfrastructure be
used to manage trust among users? To this end, it is very
important to find techniques that capture the human notions
of trust as precisely as possible. Quoting [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]:
        </p>
        <p>If people can use their everyday trust building
methods for the cyberinfrastructure and through
it reach out to fellow human beings in far-away
places, then that would be the dawn of the real
Information Society for all.</p>
        <p>
          In the near future, more and more applications and
systems will need solid trust mechanisms. In fact, effective trust
models already play an important role in many intelligent
web applications, such as peer-to-peer (P2P) networks [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ],
recommender systems [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] and question answering systems
[
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. All these applications use, in one way or another, a web
of trust that allows agents to express trust in other agents.
Using such a web of trust, an agent can develop an opinion
about another, unknown agent.
        </p>
        <p>Existing trust models can be classified in several ways,
for instance as probabilistic vs. gradual approaches, and as
representations of trust only vs. representations of both trust and
distrust. This classification is shown in Table 1, along with
some representative references for each class.</p>
        <p>
          Many models deal with trust in a binary way — an agent
(or source) can either be trusted or not — and compute the
probability or belief that the agent can be trusted [
          <xref ref-type="bibr" rid="ref11 ref12 ref13 ref21">11, 12,
13, 21</xref>
          ]. In such a setting, a higher trust score corresponds to
a higher probability or belief that an agent can be trusted.
        </p>
        <p>
          Apart from complete trust or no trust at all, however, in
real life we also encounter partial trust. For instance, we
often say “I trust this person very much”, or “My trust in this
person is rather low”. More recent models like [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] take this
into account: they make a distinction between “very
trustworthy”, “trustworthy”, “untrustworthy” and “very
untrustworthy”. Other examples of a gradual approach can be
found in [
          <xref ref-type="bibr" rid="ref14 ref19 ref2 ref7 ref9">2, 7, 9, 14, 19</xref>
          ]. In this case, a trust score is not
a probability: a higher trust score corresponds to a higher
trust. The ordering of the trust scores is very important,
with “very reliable” representing a higher trust than
“reliable”, which in turn is higher than “rather unreliable”.
This approach lends itself better to the computation of trust
scores when the outcome of an action can be positive to
some extent, e.g., when provided information can be right
or wrong to some degree, as opposed to being either right
or wrong. It is this kind of application that we are keeping
in mind throughout this paper.
        </p>
        <p>
          Large agent networks without a central authority typically
face ignorance as well as inconsistency problems. Indeed, it
is likely that not all agents know each other, and different
agents might provide contradictory information. Both
ignorance and inconsistency can have an important impact on
the trust score computation. Models that only take into
account trust (e.g. [
          <xref ref-type="bibr" rid="ref1 ref13 ref14 ref16">1, 13, 14, 16</xref>
          ]), either with a probabilistic
or a gradual interpretation, are not fully equipped to deal
with trust issues in large networks where many agents do
not know each other, because, as we explain in the next
section, most of these models provide limited support for trust
provenance.
        </p>
        <p>
          Recent publications [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] show an emerging interest in
modeling the notion of distrust, but models that take into
account both trust and distrust are still scarce [
          <xref ref-type="bibr" rid="ref12 ref6 ref9">6, 9, 12</xref>
          ]. To
the best of our knowledge, there is only one probabilistic
approach considering trust and distrust simultaneously: in
subjective logic (SL) [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] an opinion includes a belief b that
an agent is to be trusted, a disbelief d corresponding to a
belief that an agent is not to be trusted, and an uncertainty u.
The uncertainty factor clearly indicates that there is room
for ignorance in this model. However, the requirement that
the belief b, the disbelief d and the uncertainty u should
sum up to 1, rules out options for inconsistency although
this might arise quite naturally in large networks with
contradictory sources.
        </p>
        <p>
          SL is an example of a probabilistic approach, whereas in
this paper we will outline a trust model that uses a
gradual approach, meaning that agents can be trusted to some
degree. Furthermore, to preserve provenance information,
our model deals with distrust in addition to trust.
Consequently, we can represent partial trust and partial distrust.
Our intended approach is situated in the bottom right
corner of Table 1. As far as we know, besides our own earlier
work [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], there is only one other existing model in this
category: Guha et al. [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] use a couple (t, d) with a trust degree
t and a distrust degree t, both in [0, 1]. To obtain the final
trust score, they subtract d from t. As we explain in the
next section, potentially important information is lost when
the trust and distrust scales are merged into one.
        </p>
        <p>
          Our long term goal is to develop a model of trust that
preserves trust provenance as much as possible. A previous
model we introduced in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], based on intuitionistic fuzzy set
theory [
          <xref ref-type="bibr" rid="ref15 ref4">4, 15</xref>
          ], attempts this for partial trust, partial
distrust and ignorance. In this paper, we will introduce an
approach for preserving trust provenance about inconsistencies
as well. Our model is based on a trust score space,
consisting of the set [0, 1]<sup>2</sup> of trust scores equipped with a trust
ordering, going from complete distrust to complete trust, as
well as a knowledge ordering, going from a shortage of
evidence (incomplete information) to an excess of evidence (in
other words inconsistent information).
        </p>
        <p>First of all, in Section 2, we point out the importance of a
provenance-preserving trust model by means of some
examples. In Section 3, we introduce the bilattice-based concept
of a trust score space, i.e. a set of trust scores equipped with
both a trust ordering and a knowledge ordering, and we
provide a definition for a trust network. In developing a trust
learning mechanism that is able to compute trust scores we
will need to solve many challenging problems, such as how
to propagate, aggregate, and update trust scores. In
Section 4, we reflect upon our initial tinkering on candidate
operators for trust score propagation through trusted third
parties (TTPs). As these trust propagation operators are
currently shaped according to our own intuitions, we will
set up an experiment in the near future to gather the
necessary data that provides insight into the propagation of trust
scores through TTPs. We briefly comment on this in Section
5. Finally, subsequent problems that need to be addressed
are sketched.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>TRUST PROVENANCE</title>
      <p>The main aim in using trust networks is to allow users or
agents to form trust opinions on unknown agents or sources
by asking for a trust recommendation from a TTP who, in
turn, might consult its own TTP etc. This process is called
trust propagation. In large networks, it often happens that
an agent does not ask one TTP’s opinion, but several.
Combining trust information received from more than one TTP is
called aggregation (see fig. 1). Existing trust network
models usually apply suitable trust propagation and aggregation
operators to compute a resulting trust value. In passing on
this trust value to the inquiring agent, valuable information
on how this value has been obtained is lost.</p>
      <p>User opinions, however, may be affected by provenance
information exposing how trust values have been computed.
For example, a trust recommendation in a source from a
fully informed TTP is quite different from a trust
recommendation from a TTP who does not know the source too
well but has no evidence to distrust it. Unfortunately, in
current models, users cannot really exercise their right to
interpret how trust is computed since most models do not
preserve trust provenance.</p>
      <p>Trust networks are typically challenged by two
important problems influencing trust recommendations. Firstly,
in large networks it is likely that many agents do not know
each other, hence there is an abundance of ignorance.
Secondly, because of the lack of a central authority, different
agents might provide different and even contradictory
information, hence inconsistency may occur. Below we illustrate
how ignorance and inconsistency may affect trust
recommendations.</p>
      <p>
        Example 1 (Ignorance). Agent a needs to establish
an opinion about agent c in order to complete an important
bank transaction. Agent a may ask agent b for a
recommendation of c because agent a does not know anything about c.
Agent b, in this case, is a recommender that knows how to
compute a trust value of c from a web of trust. Assume that
b has evidence for both trusting and distrusting c. For
instance, let us say that b trusts c to degree 0.5 in the range [0,1], where
0 is full absence of trust and 1 is full presence of trust; and
that b distrusts c to degree 0.2 in the range [0,1], where 0 is full absence
of distrust and 1 is full presence of distrust. Another way
of saying this is that b trusts c at least to the extent 0.5, but
also not more than 0.8. The length of the interval [0.5,0.8]
indicates how much b lacks information about c.
      </p>
      <p>
        In this scenario, by getting the trust value 0.5 from b,
a is losing valuable information indicating that b has some
evidence to distrust c too. A similar problem occurs using
the approach of Guha et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In this case, b will pass on
a value of 0.5 − 0.2 = 0.3 to a. Again, a is losing valuable trust
provenance information indicating, for example, how much
b lacks information about c.
      </p>
      <p>Example 2 (Ignorance). Agent a needs to establish
an opinion about both agents c and d in order to find an
efficient web service. To this end, agent a calls upon agent
b for trust recommendations on agents c and d. Agent b
completely distrusts agent c, hence agent b trusts agent c to
degree 0. On the other hand agent b does not know agent
d, hence agent b trusts agent d to degree 0. As a result,
agent b returns the same trust recommendation to agent a
for both agents c and d, namely 0, but the meaning of this
value is clearly different in both cases. With agent c, the lack
of trust is caused by a presence of distrust, while with agent
d, the absence of trust is caused by a lack of knowledge. This
provenance information is vital for agent a to make a well
informed decision. For example, if agent a has a high trust
in TTP b, agent a will not consider agent c anymore, but
agent a might ask for other opinions on agent d.</p>
      <p>Example 3 (Contradictory Information). One of
your friends tells you to trust a dentist, and another one
of your friends tells you to distrust that same dentist. In
this case, there are two TTPs, they are equally trusted, and
they tell you exactly opposite things. In other words, you
have to deal with inconsistent information. What would be
your aggregated trust score in the dentist? Models that work
with only one scale cannot represent this: taking e.g. 0.5 as
trust score (i.e. the average) is not a solution, because then
we cannot differentiate this from a situation in which both of
your friends trust the dentist to the extent 0.5.</p>
      <p>Furthermore, what would you answer if someone asks you
if the dentist can be trusted? A possible answer is: “I don’t
really know, because I have contradictory information about
this dentist”. Note that this is fundamentally different from
“I don’t know, because I have no information about him”.
In other words, a trust score of 0 is not a suitable option
either, as it could imply both inconsistency and ignorance.</p>
      <p>The examples above indicate the need for a model that
preserves information on whether a “trust problem” is caused
by presence of distrust or rather by lack of knowledge, as well
as whether a “knowledge problem” is caused by having too
little or rather too much, i.e. contradictory, information.
</p>
    </sec>
    <sec id="sec-4">
      <title>TRUST SCORE SPACE</title>
      <p>We need a model that, on the one hand, is able to represent
the trust an agent may have in another agent in a given
domain, and on the other hand, can evaluate the
contribution of each aspect of trust to the overall trust score. As a
result, such a model will be able to distinguish between
different cases of trust provenance. To this end, we introduce
a new structure, called the trust score space BL.</p>
      <p>Definition 1 (Trust Score Space). The trust score
space
BL = ([0, 1]<sup>2</sup>, ≤t, ≤k, ¬)
consists of the set [0, 1]<sup>2</sup> of trust scores and two orderings
defined by
(x1, x2) ≤t (y1, y2) iff x1 ≤ y1 and x2 ≥ y2
(x1, x2) ≤k (y1, y2) iff x1 ≤ y1 and x2 ≤ y2
for all (x1, x2) and (y1, y2) in [0, 1]<sup>2</sup>. Furthermore,
¬(x1, x2) = (x2, x1).</p>
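      <p>As an illustrative sketch (ours, not part of the original paper), the two orderings and the negation can be coded directly from the definition; the assertions check the boundary scores discussed in the text:</p>
      <preformat>
```python
# Sketch of the trust score space BL = ([0,1]^2, trust order, knowledge order, neg).
# A trust score is a pair (t, d): a trust degree t and a distrust degree d.

def leq_t(x, y):
    """Trust ordering: y carries at least as much trust and at most as much distrust."""
    return y[0] >= x[0] and x[1] >= y[1]

def leq_k(x, y):
    """Knowledge ordering: y carries at least as much trust AND distrust evidence."""
    return y[0] >= x[0] and y[1] >= x[1]

def neg(x):
    """Involution swapping the trust and distrust degrees."""
    return (x[1], x[0])

# Complete distrust (0,1) is below complete trust (1,0) in the trust order:
assert leq_t((0, 1), (1, 0))
# Ignorance (0,0) is below inconsistency (1,1) in the knowledge order:
assert leq_k((0, 0), (1, 1))
# neg reverses the trust order and preserves the knowledge order:
assert leq_t(neg((1, 0)), neg((0, 1))) and leq_k(neg((0, 0)), neg((1, 1)))
```
      </preformat>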
      <p>
        The negation ¬ serves to impose a relationship between the
lattices ([0, 1]<sup>2</sup>, ≤t) and ([0, 1]<sup>2</sup>, ≤k):
(x1, x2) ≤t (y1, y2) ⇒ ¬(x1, x2) ≥t ¬(y1, y2)
(x1, x2) ≤k (y1, y2) ⇒ ¬(x1, x2) ≤k ¬(y1, y2),
and ¬¬(x1, x2) = (x1, x2). In other words, ¬ is an involution
that reverses the ≤t-order and preserves the ≤k-order. One
can easily verify that the structure BL is a bilattice [
        <xref ref-type="bibr" rid="ref3 ref8">3, 8</xref>
        ].
      </p>
      <p>
        Figure 2 shows the bilattice BL, along with some
examples of trust scores. The first lattice ([0, 1]<sup>2</sup>, ≤t) orders
the trust scores going from complete distrust (0, 1) to
complete trust (1, 0). The other lattice ([0, 1]<sup>2</sup>, ≤k) evaluates
the amount of available trust evidence, going from a
“shortage of evidence”, x1 + x2 &lt; 1 (incomplete information), to
an “excess of evidence”, namely x1 + x2 &gt; 1 (inconsistent
information). In the extreme cases, there is no information
available (0, 0), or there is evidence that says that b is to be
trusted fully as well as evidence that states that b is
completely unreliable: (1, 1).
      </p>
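      <p>A minimal sketch (ours) of the evidence criterion just described: the sum x1 + x2 separates incomplete from inconsistent scores.</p>
      <preformat>
```python
# Classify a trust score (x1, x2) by the amount of evidence it carries,
# following the x1 + x2 criterion of the knowledge ordering.

def evidence_kind(score):
    x1, x2 = score
    if x1 + x2 > 1:
        return "excess of evidence (inconsistent)"
    if 1 > x1 + x2:
        return "shortage of evidence (incomplete)"
    return "exactly consistent evidence"

assert evidence_kind((0, 0)) == "shortage of evidence (incomplete)"  # ignorance
assert evidence_kind((1, 1)) == "excess of evidence (inconsistent)"  # inconsistency
assert evidence_kind((0.5, 0.5)) == "exactly consistent evidence"
```
      </preformat>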
      <p>
        The trust score space allows our model to preserve trust
provenance by simultaneously representing partial trust,
partial distrust, partial ignorance and partial inconsistency, and
treating them as different, related concepts. Moreover, by
using a bilattice model the aforementioned problems
disappear:
1. By using trust scores we can now distinguish full
distrust (0,1) from ignorance (0,0) and analogously, full
trust (1,0) from inconsistency (1,1). This is an
improvement of e.g. [
        <xref ref-type="bibr" rid="ref1 ref21">1, 21</xref>
        ].
2. We can deal with both incomplete information and
inconsistency (improvement of [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]).
3. We do not lose important information (improvement
of [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]), because, as will become clear in the next
section, we keep the trust and distrust degree separated
throughout the whole trust process (propagation and
other operations).
      </p>
      <p>The available trust information is modeled as a trust
network that associates with each couple of agents a score
drawn from the trust score space.</p>
      <p>Definition 2 (Trust Network). A trust network is
a couple (A, R) such that A is a set of agents and R is an
A × A → BL mapping. For every a and b in A, we write</p>
      <p>R(a, b) = (R+(a, b), R−(a, b))
• R(a, b) is called the trust score of a in b.
• R+(a, b) is called the trust degree of a in b.
• R−(a, b) is called the distrust degree of a in b.</p>
      <p>R should be thought of as a snapshot taken at a certain
moment, since the trust learning mechanism involves
recalculating trust scores, for instance through trust propagation
as discussed next.</p>
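      <p>A minimal sketch of Definition 2 (our illustration; the class and method names are ours): a sparse mapping R where absent pairs default to ignorance (0, 0) rather than distrust.</p>
      <preformat>
```python
# Sketch of a trust network (A, R): a set of agents A and a mapping
# R: A x A -> BL, stored sparsely as a dictionary of trust scores.

class TrustNetwork:
    def __init__(self, agents):
        self.agents = set(agents)
        self.scores = {}  # (a, b) -> (trust degree, distrust degree)

    def set_score(self, a, b, trust, distrust):
        self.scores[(a, b)] = (trust, distrust)

    def R(self, a, b):
        # Unknown pairs are met with ignorance (0, 0), not distrust (0, 1)
        return self.scores.get((a, b), (0.0, 0.0))

net = TrustNetwork({"a", "b", "c"})
net.set_score("a", "b", 0.5, 0.2)
assert net.R("a", "b") == (0.5, 0.2)
assert net.R("a", "c") == (0.0, 0.0)  # ignorance, not distrust
```
      </preformat>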
    </sec>
    <sec id="sec-5">
      <title>TRUST SCORE PROPAGATION</title>
      <p>We often encounter situations in which we need trust
information about an unknown person. For instance, if you
are in search of a new dentist, you can ask your friends’
opinion about dentist Evans. If they do not know Evans
personally, they can ask a friend of theirs, and so on. In
virtual trust networks, propagation operators are used to
handle this problem. The simplest case (atomic
propagation) can informally be described as (fig. 3): if the trust
score of agent a in agent b is p, and the trust score of b
in agent c is q, what information can be derived about the
trust score of a in c? When propagating only trust, the most
commonly used operator is multiplication. When taking into
account also distrust, the picture gets more complicated, as
the following example illustrates.</p>
      <p>Example 4. Suppose agent a trusts agent b and agent b
distrusts agent c. It is reasonable to assume that based on
this, agent a will also distrust agent c, i.e. R(a, c) = (0, 1).
Now, switch the couples. If a distrusts b and b trusts c,
there are several options for the trust score of a in c: a
possible reaction for a is to do the exact opposite of what b
recommends, in other words to distrust c, R(a, c) = (0, 1).
But another interpretation is to ignore everything b says,
hence the result of the propagation is ignorance, R(a, c) =
(0, 0).</p>
      <p>As this example indicates, there are likely multiple
possible propagation operators for trust scores. We expect that
the choice for a particular BL × BL → BL mapping
to model the trust score propagation will depend on the
application and the context but might also differ from person
to person. Thus, the need for provenance-preserving trust
models becomes more evident.</p>
      <p>To study some possible propagation schemes, let us first
consider the bivalent case, i.e. when trust and distrust
degrees assume only the values 0 or 1. For agents a and b, we
use R+(a, b), R−(a, b), and ∼R−(a, b) as shorthands for
respectively R+(a, b) = 1, R−(a, b) = 1 and R−(a, b) = 0. We
consider the following three different propagation schemes
(a, b and c are agents):</p>
      <p>1. R+(a, c) ≡ R+(a, b) ∧ R+(b, c)
R−(a, c) ≡ R+(a, b) ∧ R−(b, c)</p>
      <p>2. R+(a, c) ≡ R+(a, b) ∧ R+(b, c)
R−(a, c) ≡ ∼R−(a, b) ∧ R−(b, c)</p>
      <p>3. R+(a, c) ≡ (R+(a, b) ∧ R+(b, c)) ∨ (R−(a, b) ∧ R−(b, c))
R−(a, c) ≡ (R+(a, b) ∧ R−(b, c)) ∨ (R−(a, b) ∧ R+(b, c))</p>
      <p>In scheme (1) agent a only listens to whom he trusts, and
ignores everyone else. Scheme (2) is similar, but in addition
agent a takes over distrust information from a not distrusted
(hence possibly unknown) third party. Scheme (3)
corresponds to an interpretation in which the enemy of an enemy
is considered to be a friend, and the friend of an enemy is
considered to be an enemy.</p>
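      <p>The three bivalent schemes can be sketched directly with booleans (our illustration); the assertions replay Example 4 and the enemy-of-an-enemy reading of scheme (3):</p>
      <preformat>
```python
# Bivalent propagation schemes (1)-(3); a score is a pair (trust, distrust)
# of booleans for the link a->b and the link b->c.

def scheme1(ab, bc):
    (t_ab, d_ab), (t_bc, d_bc) = ab, bc
    return (t_ab and t_bc, t_ab and d_bc)

def scheme2(ab, bc):
    (t_ab, d_ab), (t_bc, d_bc) = ab, bc
    return (t_ab and t_bc, (not d_ab) and d_bc)

def scheme3(ab, bc):
    (t_ab, d_ab), (t_bc, d_bc) = ab, bc
    return ((t_ab and t_bc) or (d_ab and d_bc),
            (t_ab and d_bc) or (d_ab and t_bc))

# Example 4: a trusts b, b distrusts c -> a distrusts c (all schemes agree here).
trusted, distrusted = (True, False), (False, True)
assert scheme1(trusted, distrusted) == (False, True)
# "The enemy of an enemy is a friend" holds only under scheme (3):
assert scheme3(distrusted, distrusted) == (True, False)
assert scheme1(distrusted, distrusted) == (False, False)  # scheme (1): ignorance
```
      </preformat>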
      <p>In our model, besides 0 and 1, we also allow partial trust
and distrust. Hence we need suitable extensions of the
logical operators that are used in (1), (2) and (3). For
conjunction, disjunction and negation, we use respectively a t-norm
T, a t-conorm S and a negator N. They represent large
classes of logic connectives, from which specific operators,
each with their own behaviour, can be chosen according to
the application or context.</p>
      <p>
        T and S are increasing, commutative and associative
[0, 1] × [0, 1] → [0, 1] mappings satisfying T (x, 1) = S(x, 0) = x
for all x in [0, 1]. Examples of T are the minimum and the
product, while S could be the maximum or the mapping
SP defined by SP (x, y) = x + y − x · y, for all x and y in
[0, 1]. N is a decreasing [0, 1] → [0, 1] mapping satisfying
N (0) = 1 and N (1) = 0; the most commonly used one is
Ns(x) = 1 − x.
      </p>
      <p>Generalizing the logical operators in scheme (1), (2), and
(3) accordingly, we obtain the propagation operators of
Table 2. Each one can be used for modeling a specific
behaviour. Starting from a trust score (t1, d1) of agent a in agent
b, and a trust score (t2, d2) of agent b in agent c, each
propagation operator computes a trust score for agent a in agent
c. Since the resulting value is again an element of the trust
score space, trust provenance is preserved.</p>
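      <p>Table 2 itself is not reproduced in this excerpt, so the following sketch is an assumption: one plausible instantiation of the three generalized operators that is consistent with the bivalent schemes (1)–(3) and with the worked numbers later in this section, using the product t-norm, the probabilistic sum SP and the standard negator Ns.</p>
      <preformat>
```python
# Assumed forms of Prop1-Prop3 (Table 2 is not shown here), instantiated with
# product t-norm T, probabilistic sum SP, and standard negator Ns.

def T(x, y):  return x * y          # t-norm (conjunction)
def SP(x, y): return x + y - x * y  # t-conorm (disjunction)
def Ns(x):    return 1 - x          # negator

def prop1(a, b):
    (t1, d1), (t2, d2) = a, b
    return (T(t1, t2), T(t1, d2))

def prop2(a, b):
    (t1, d1), (t2, d2) = a, b
    return (T(t1, t2), T(Ns(d1), d2))

def prop3(a, b):
    (t1, d1), (t2, d2) = a, b
    return (SP(T(t1, t2), T(d1, d2)), SP(T(t1, d2), T(d1, t2)))

# Reproduces the counterexample used in the knowledge-monotonicity proof:
t, d = prop2((0.2, 0.7), (0, 1))
assert (round(t, 6), round(d, 6)) == (0, 0.3)
t, d = prop2((0.2, 0.8), (0, 1))
assert (round(t, 6), round(d, 6)) == (0, 0.2)
```
      </preformat>
      <p>Note the rounding before comparison: the intermediate products are floats, so exact equality against 0.3 or 0.2 would be fragile.</p>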
      <p>The remainder of this section is devoted to the
investigation of some potentially useful properties of these
propagation operators. In doing so, we keep the logical operators
as generic as possible, in order to get a clear view on their
general behaviour. First of all, if one of the arguments of
a propagation operator can be replaced by a higher trust
score w.r.t. the knowledge ordering without decreasing
the resulting trust score, we call the propagation operator
knowledge monotonic.</p>
      <p>Definition 3 (Knowledge Monotonicity). A
propagation operator f on BL is said to be knowledge monotonic
iff for all x, y, z, and u in BL,
x ≤k y and z ≤k u implies f (x, z) ≤k f (y, u).</p>
      <p>Knowledge monotonicity reflects that the better you know
how well you should trust or distrust user b who is
recommending user c, the better you know how well to trust or
distrust user c. Although this behaviour seems natural, not
all operators of Table 2 abide by it.</p>
      <p>Proposition 1. Prop1 and Prop3 are knowledge monotonic.
Prop2 is not knowledge monotonic.</p>
      <p>Proof. The knowledge monotonicity of Prop1 and Prop3
follows from the monotonicity of T and S. To see that Prop2
is not knowledge monotonic, consider
Prop2((0.2, 0.7), (0, 1)) = (0, 0.3)
Prop2((0.2, 0.8), (0, 1)) = (0, 0.2),
with Ns as negator. We have that (0.2, 0.7) ≤k (0.2, 0.8)
and (0, 1) ≤k (0, 1), but (0, 0.3) ≰k (0, 0.2).</p>
      <p>The intuitive explanation behind the non-knowledge-monotonic
behaviour of Prop2 is that, using this propagation
operator, agent a takes over distrust from a stranger b, hence
giving b the benefit of the doubt, but when a starts to
distrust b (thus knowing b better), a will adopt b’s opinion to
a lesser extent (in other words, a derives less knowledge).</p>
      <p>Knowledge monotonicity is not only useful to provide more
insight into the propagation operators, but it can also be used
to establish a lower or upper bound for the actual
propagated trust score without immediate recalculation. This
might be useful in a situation where one of the agents has
updated its trust score in another agent and there is not
enough time to recalculate the whole propagation chain.</p>
      <p>Besides atomic propagation, we need to be able to
consider longer propagation chains, so TTPs can in turn consult
their own TTPs and so on. Prop1 turns out to be
associative, which means that we can extend it for more scores
without ambiguity.</p>
      <p>Proposition 2 (Associativity). Prop1 is associative, i.e.
for all x, y, and z in BL it holds that
Prop1(Prop1(x, y), z) = Prop1(x, Prop1(y, z)).
Prop2 and Prop3 are not associative.</p>
      <p>Proof. The associativity of Prop1 can be proved by taking
into account the associativity of the t-norm. Examples can
be constructed to show that the other two propagation
operators are not associative. Take for example N (x) = 1 − x
and T (x, y) = x · y, then
Prop2((0.3, 0.6),Prop2((0.1, 0.2), (0.8, 0.1))) = (0.024, 0.032)
while on the other hand
Prop2(Prop2((0.3, 0.6), (0.1, 0.2)), (0.8, 0.1)) = (0.024, 0.092)</p>
      <p>With an associative propagation operator, the overall trust
score computed from a longer propagation chain is
independent of the choice of which two subsequent trust scores to
combine first. When dealing with a non associative operator
however, it should be specified which pieces of the
propagation chain to calculate first.</p>
      <p>Finally, it is interesting to note that in some cases the
overall trust score in a longer propagation chain can be
determined by looking at only one agent. For instance, if we
use Prop1 or Prop3, and there occurs a missing link (0, 0)
anywhere in the propagation chain, the result will contain
no useful information (in other words, the final trust score
is (0, 0)). Hence as soon as one of the agents is ignorant, we
can dismiss the entire chain. Notice that this also holds for
Prop3, despite the fact that it is not an associative operator.
Using Prop1, the same conclusion (0, 0) can be drawn if at
any position in the chain, except the last one, there occurs
complete distrust (0, 1).
</p>
    </sec>
    <sec id="sec-6">
      <title>CONCLUSIONS AND FUTURE WORK</title>
      <p>We have introduced a new model that can
simultaneously handle partial trust and distrust. We showed that
our bilattice-based model alleviates some of the existing
problems of trust models, more specifically concerning trust
provenance. In addition, this new model can handle
incomplete and excessive information, which occurs frequently in
virtual communities, such as the WWW in general and trust
networks in particular. Therefore, this new
provenance-preserving trust model can lead to an improvement of many
existing web applications, such as P2P networks, question
answering systems and recommender systems.</p>
      <p>A first step in our future research involves the further
development and the choice of trust score propagation operators.
Of course, the trust behaviour of users depends on the
situation and the application, and is in most cases relative to
a goal or a task. A friend e.g. can be trusted for answering
questions about movies, but not necessarily about doctors.
Therefore, we are preparing some specific scenarios in which
trust is needed to make a certain decision (e.g. which doctor
to visit, which movie to see). Based on these scenarios,
we will prepare questionnaires with which we aim to determine
how propagation of trust scores takes place. By gathering such
data, we hope to get a clear view on trust score
propagation in real life, and on how to model it in applications. We
do not expect to find one particular propagation scheme,
but rather several, depending on a person’s nature. Once
we obtain the results of the questionnaire, we will also be
able to validate the three propagation operators we proposed
in this paper. Furthermore, we would like to investigate the
behaviour of the operators when using particular t-norms,
t-conorms and negators, and examine whether it is possible
to use other classes of operators that do not use t-(co)norms.</p>
      <p>
        A second problem that needs to be addressed is
aggregation. For our domain of interest, namely a gradual
approach to both trust and distrust, no aggregation
operators exist yet. We will start by investigating whether
existing aggregation operators can be extended, such as
the ordered weighted averaging (OWA) operator [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and
fuzzy integrals [
        <xref ref-type="bibr" rid="ref18 ref5">5, 18</xref>
        ], but we expect that not all
problems can be solved this way, and that new, dedicated
aggregation operators will also be needed.
      </p>
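      <p>As a point of reference for this extension work, the classical OWA operator [20] on plain trust degrees can be sketched as follows; this is the standard definition on [0, 1] values, not yet adapted to our two-component trust scores.</p>

```python
def owa(values, weights):
    """Ordered weighted averaging (Yager, 1988): sort the inputs in
    descending order, then take the weighted sum with a fixed weight
    vector that sums to 1. The weights attach to positions (largest
    first), not to particular sources."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Aggregating trust degrees from three sources, with weights that
# favour the more optimistic opinions:
result = owa([0.9, 0.4, 0.7], [0.5, 0.3, 0.2])
# 0.5*0.9 + 0.3*0.7 + 0.2*0.4 ~= 0.74
```

      <p>Extending such an operator to pairs (t, d) drawn from the bilattice, including the inconsistent and ignorant corners, is precisely the open question raised above.</p>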
      <p>Finally, trust and distrust are not static: they can change
after a bad (or good) experience. It is therefore also
necessary to search for appropriate updating techniques.</p>
      <p>Our final goal is a framework that can
represent partial trust, distrust, inconsistency and ignorance,
that contains appropriate operators (propagation,
aggregation, update) for working with such trust scores, and that can
serve as a starting point for improving the quality of many web
applications. In particular, since trust is
experienced in different ways depending on the application and
context, we aim to develop our model further for
one specific application.</p>
    </sec>
    <sec id="sec-7">
      <title>ACKNOWLEDGMENTS</title>
      <p>Patricia Victor would like to thank the Institute for the
Promotion of Innovation through Science and Technology in
Flanders (IWT-Vlaanderen) for funding her research. Chris
Cornelis would like to thank the Research Foundation-Flanders
for funding his research.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abdul-Rahman</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Hailes</surname>
          </string-name>
          .
          <article-title>Supporting trust in virtual communities</article-title>
          .
          <source>In Proceedings of the 33rd Hawaii International Conference on System Sciences</source>
          , pages
          <fpage>1769</fpage>
          -
          <lpage>1777</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Almenárez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Marín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Campo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>García</surname>
          </string-name>
          .
          <article-title>PTM: a pervasive trust management model for dynamic open environments</article-title>
          .
          <source>In First Workshop on Pervasive Security, Privacy and Trust, PSPT2004, in conjunction with Mobiquitous 2004</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>O.</given-names>
            <surname>Arieli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cornelis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Deschrijver</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E. E.</given-names>
            <surname>Kerre</surname>
          </string-name>
          .
          <article-title>Bilattice-based squares and triangles</article-title>
          .
          <source>Lecture Notes in Computer Science</source>
          ,
          <volume>3571</volume>
          :
          <fpage>563</fpage>
          -
          <lpage>574</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>K.</given-names>
            <surname>Atanassov</surname>
          </string-name>
          .
          <article-title>Intuitionistic fuzzy sets</article-title>
          .
          <source>Fuzzy Sets and Systems</source>
          ,
          <volume>20</volume>
          :
          <fpage>87</fpage>
          -
          <lpage>96</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Choquet</surname>
          </string-name>
          .
          <article-title>Theory of capacities</article-title>
          . Annales de l'
          <source>Institut Fourier</source>
          ,
          <volume>5</volume>
          :
          <fpage>131</fpage>
          -
          <lpage>295</lpage>
          ,
          <year>1953</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>De Cock</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Pinheiro da Silva</surname>
          </string-name>
          .
          <article-title>A many-valued representation and propagation of trust and distrust</article-title>
          .
          <source>Lecture Notes in Computer Science</source>
          ,
          <volume>3849</volume>
          :
          <fpage>108</fpage>
          -
          <lpage>113</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Falcone</surname>
          </string-name>
          , G. Pezzulo, and
          <string-name>
            <given-names>C.</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          .
          <article-title>A fuzzy approach to a belief-based trust computation</article-title>
          .
          <source>Lecture Notes in Artificial Intelligence</source>
          ,
          <volume>2631</volume>
          :
          <fpage>73</fpage>
          -
          <lpage>86</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ginsberg</surname>
          </string-name>
          .
          <article-title>Multi-valued logics: a uniform approach to reasoning in artificial intelligence</article-title>
          .
          <source>Computational Intelligence</source>
          ,
          <volume>4</volume>
          :
          <fpage>256</fpage>
          -
          <lpage>316</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          ,
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Tomkins</surname>
          </string-name>
          .
          <article-title>Propagation of trust and distrust</article-title>
          .
          <source>In Proceedings of the 13th International World Wide Web Conference</source>
          , pages
          <fpage>403</fpage>
          -
          <lpage>412</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Herrmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Issarny</surname>
          </string-name>
          , and S. Shiu (eds).
          <source>Lecture Notes in Computer Science</source>
          , volume
          <volume>3477</volume>
          .
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jøsang</surname>
          </string-name>
          .
          <article-title>A logic for uncertain probabilities</article-title>
          .
          <source>International Journal of Uncertainty, Fuzziness and Knowledge-based Systems</source>
          ,
          <volume>9</volume>
          (
          <issue>3</issue>
          ):
          <fpage>279</fpage>
          -
          <lpage>311</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jøsang</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Knapskog</surname>
          </string-name>
          .
          <article-title>A metric for trusted systems</article-title>
          .
          <source>In Proc. 21st NIST-NCSC National Information Systems Security Conference</source>
          , pages
          <fpage>16</fpage>
          -
          <lpage>29</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kamvar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schlosser</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Garcia-Molina</surname>
          </string-name>
          .
          <article-title>The eigentrust algorithm for reputation management in P2P networks</article-title>
          .
          <source>In Proceedings of the 12th International World Wide Web Conference</source>
          , pages
          <fpage>640</fpage>
          -
          <lpage>651</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Massa</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Avesani</surname>
          </string-name>
          .
          <article-title>Trust-aware collaborative filtering for recommender systems</article-title>
          .
          <source>In Proceedings of the Federated International Conference On The Move to Meaningful Internet: CoopIS</source>
          , DOA, ODBASE, pages
          <fpage>492</fpage>
          -
          <lpage>508</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nikolova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Nikolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cornelis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Deschrijver</surname>
          </string-name>
          .
          <article-title>Survey of the research on intuitionistic fuzzy sets</article-title>
          .
          <source>Advanced Studies in Contemporary Mathematics</source>
          ,
          <volume>4</volume>
          (
          <issue>2</issue>
          ):
          <fpage>127</fpage>
          -
          <lpage>157</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Richardson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Domingos</surname>
          </string-name>
          .
          <article-title>Trust management for the semantic web</article-title>
          .
          <source>In Proceedings of the Second International Semantic Web Conference</source>
          , pages
          <fpage>351</fpage>
          -
          <lpage>368</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Riguidel</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Martinelli</surname>
          </string-name>
          (eds).
          <source>Security, Dependability and Trust. Thematic Group Report of the European Coordination Action Beyond the Horizon: Anticipating Future and Emerging Information Society Technologies</source>
          , http://www.beyond-the-horizon.net,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sugeno</surname>
          </string-name>
          .
          <article-title>Theory of fuzzy integrals and its applications</article-title>
          ,
          <source>PhD thesis</source>
          .
          <year>1974</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>W.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          .
          <article-title>Managing trust in peer-to-peer networks</article-title>
          .
          <source>Journal of Digital Information Management</source>
          ,
          <volume>3</volume>
          :
          <fpage>58</fpage>
          -
          <lpage>63</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R.</given-names>
            <surname>Yager</surname>
          </string-name>
          .
          <article-title>On ordered weighted averaging aggregation operators in multicriteria decision making</article-title>
          .
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          ,
          <volume>18</volume>
          :
          <fpage>183</fpage>
          -
          <lpage>190</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>I.</given-names>
            <surname>Zaihrayeu</surname>
          </string-name>
          , P. Pinheiro da Silva, and
          <string-name>
            <given-names>D.</given-names>
            <surname>McGuinness</surname>
          </string-name>
          .
          <article-title>IWTrust: Improving user trust in answers from the web</article-title>
          .
          <source>In Proceedings of the Third International Conference On Trust Management</source>
          , pages
          <fpage>384</fpage>
          -
          <lpage>392</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>