=Paper=
{{Paper
|id=Vol-190/paper-7
|storemode=property
|title=Towards a Provenance-Preserving Trust Model in Agent Networks
|pdfUrl=https://ceur-ws.org/Vol-190/paper07.pdf
|volume=Vol-190
|authors=Patricia Victor,Chris Cornelis,Martine De Cock and Paulo Pinheiro da Silva
|dblpUrl=https://dblp.org/rec/conf/mtw/VictorCCS06
}}
==Towards a Provenance-Preserving Trust Model in Agent Networks==
Patricia Victor
Ghent University, Dept. of Applied Mathematics and CS
Krijgslaan 281 (S9), 9000 Gent, Belgium
Patricia.Victor@UGent.be

Martine De Cock
Ghent University, Dept. of Applied Mathematics and CS
Krijgslaan 281 (S9), 9000 Gent, Belgium
Martine.DeCock@UGent.be

Chris Cornelis
Ghent University, Dept. of Applied Mathematics and CS
Krijgslaan 281 (S9), 9000 Gent, Belgium
Chris.Cornelis@UGent.be

Paulo Pinheiro da Silva
The University of Texas at El Paso, Dept. of Computer Science
El Paso, TX 79968, USA
paulo@utep.edu
ABSTRACT

Social networks in which users or agents are connected to other agents and sources by trust relations are an important part of many web applications where information may come from multiple sources. Trust recommendations derived from these social networks are supposed to help agents develop their own opinions about how much they may trust other agents and sources. Despite the recent developments in the area, most of the trust models and metrics proposed so far tend to lose trust-related knowledge. We propose a new model in which trust values are derived from a bilattice that preserves valuable trust provenance information including partial trust, partial distrust, ignorance and inconsistency. We outline the problems that need to be addressed to construct a corresponding trust learning mechanism. We present initial results on the first learning step, namely trust propagation through trusted third parties (TTPs).

Categories and Subject Descriptors

H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Retrieval models; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods

General Terms

Algorithms, Human Factors

Keywords

Trust provenance, web of trust, distrust, bilattice, trust propagation

Copyright is held by the author/owner(s). WWW2006, May 22–26, 2006, Edinburgh, UK.

1. INTRODUCTION

As intelligent agents in the semantic web take over more and more human tasks, they require an automated way of trusting each other. One of the key problems in establishing this is related to the dynamicity of trust: to grasp how trust emerges and vanishes. Once an understanding is reached, a new problem arises: how can the cyberinfrastructure be used to manage trust among users? To this aim, it is very important to find techniques that capture the human notions of trust as precisely as possible. Quoting [17]:

  If people can use their everyday trust building methods for the cyberinfrastructure and through it reach out to fellow human beings in far-away places, then that would be the dawn of the real Information Society for all.

In the near future, more and more applications and systems will need solid trust mechanisms. In fact, effective trust models already play an important role in many intelligent web applications, such as peer-to-peer (P2P) networks [13], recommender systems [14] and question answering systems [21]. All these applications use, in one way or another, a web of trust that allows agents to express trust in other agents. Using such a web of trust, an agent can develop an opinion about another, unknown agent.

Existing trust models can be classified in several ways, among which probabilistic vs. gradual approaches, as well as representations of trust vs. representations of both trust and distrust. This classification is shown in Table 1, along with some representative references for each class.

Table 1: Trust Models, State of the Art

                 trust                        trust and distrust
  probabilistic  Kamvar et al. [13]           Jøsang et al. [11, 12]
                 Zaihrayeu et al. [21]
  gradual        Abdul-Rahman et al. [1]      De Cock et al. [6]
                 Almenárez et al. [2]         Guha et al. [9]
                 Massa et al. [14]

Many models deal with trust in a binary way — an agent (or source) can either be trusted or not — and compute the probability or belief that the agent can be trusted [11, 12, 13, 21]. In such a setting, a higher trust score corresponds to a higher probability or belief that an agent can be trusted.
Apart from complete trust or no trust at all, however, in real life we also encounter partial trust. For instance, we often say "I trust this person very much", or "My trust in this person is rather low". More recent models like [1] take this into account: they make a distinction between "very trustworthy", "trustworthy", "untrustworthy" and "very untrustworthy". Other examples of a gradual approach can be found in [2, 7, 9, 14, 19]. In this case, a trust score is not a probability: a higher trust score corresponds to a higher trust. The ordering of the trust scores is very important, with "very reliable" representing a higher trust than "reliable", which in turn is higher than "rather unreliable". This approach lends itself better to the computation of trust scores when the outcome of an action can be positive to some extent, e.g., when provided information can be right or wrong to some degree, as opposed to being either right or wrong. It is this kind of application that we are keeping in mind throughout this paper.

Large agent networks without a central authority typically face ignorance as well as inconsistency problems. Indeed, it is likely that not all agents know each other, and different agents might provide contradictory information. Both ignorance and inconsistency can have an important impact on the trust score computation. Models that only take into account trust (e.g. [1, 13, 14, 16]), either with a probabilistic or a gradual interpretation, are not fully equipped to deal with trust issues in large networks where many agents do not know each other, because, as we explain in the next section, most of these models provide limited support for trust provenance.

Recent publications [10] show an emerging interest in modeling the notion of distrust, but models that take into account both trust and distrust are still scarce [6, 9, 12]. To the best of our knowledge, there is only one probabilistic approach considering trust and distrust simultaneously: in subjective logic (SL) [12] an opinion includes a belief b that an agent is to be trusted, a disbelief d corresponding to a belief that an agent is not to be trusted, and an uncertainty u. The uncertainty factor clearly indicates that there is room for ignorance in this model. However, the requirement that the belief b, the disbelief d and the uncertainty u should sum up to 1 rules out options for inconsistency, although this might arise quite naturally in large networks with contradictory sources.

SL is an example of a probabilistic approach, whereas in this paper we will outline a trust model that uses a gradual approach, meaning that agents can be trusted to some degree. Furthermore, to preserve provenance information, our model deals with distrust in addition to trust. Consequently, we can represent partial trust and partial distrust. Our intended approach is situated in the bottom right corner of Table 1. As far as we know, besides our own earlier work [6], there is only one other existing model in this category: Guha et al. [9] use a couple (t, d) with a trust degree t and a distrust degree d, both in [0, 1]. To obtain the final trust score, they subtract d from t. As we explain in the next section, potentially important information is lost when the trust and distrust scales are merged into one.

Our long term goal is to develop a model of trust that preserves trust provenance as much as possible. A previous model we introduced in [6], based on intuitionistic fuzzy set theory [4, 15], attempts this for partial trust, partial distrust and ignorance. In this paper, we will introduce an approach for preserving trust provenance about inconsistencies as well. Our model is based on a trust score space, consisting of the set [0, 1]² of trust scores equipped with a trust ordering, going from complete distrust to complete trust, as well as a knowledge ordering, going from a shortage of evidence (incomplete information) to an excess of evidence (in other words, inconsistent information).

First of all, in Section 2, we point out the importance of a provenance-preserving trust model by means of some examples. In Section 3, we introduce the bilattice-based concept of a trust score space, i.e. a set of trust scores equipped with both a trust ordering and a knowledge ordering, and we provide a definition for a trust network. In developing a trust learning mechanism that is able to compute trust scores, we will need to solve many challenging problems, such as how to propagate, aggregate, and update trust scores. In Section 4, we reflect upon our initial tinkering on candidate operators for trust score propagation through trusted third parties (TTPs). As these trust propagation operators are currently shaped according to our own intuitions, we will set up an experiment in the near future to gather the necessary data that provides insight in the propagation of trust scores through TTPs. We briefly comment on this in Section 5. Finally, subsequent problems that need to be addressed are sketched.

2. TRUST PROVENANCE

The main aim in using trust networks is to allow users or agents to form trust opinions on unknown agents or sources by asking for a trust recommendation from a TTP who, in turn, might consult its own TTP, etc. This process is called trust propagation. In large networks, it often happens that an agent does not ask one TTP's opinion, but several. Combining trust information received from more than one TTP is called aggregation (see Figure 1). Existing trust network models usually apply suitable trust propagation and aggregation operators to compute a resulting trust value. In passing on this trust value to the inquiring agent, valuable information on how this value has been obtained is lost.

Figure 1: Trust propagation and aggregation

User opinions, however, may be affected by provenance information exposing how trust values have been computed. For example, a trust recommendation in a source from a fully informed TTP is quite different from a trust recommendation from a TTP who does not know the source too well but has no evidence to distrust it. Unfortunately, in current models, users cannot really exercise their right to interpret how trust is computed since most models do not preserve trust provenance.

Trust networks are typically challenged by two important problems influencing trust recommendations. Firstly, in large networks it is likely that many agents do not know each other, hence there is an abundance of ignorance. Secondly, because of the lack of a central authority, different agents might provide different and even contradictory information, hence inconsistency may occur. Below we illustrate how ignorance and inconsistency may affect trust recommendations.
Example 1 (Ignorance). Agent a needs to establish an opinion about agent c in order to complete an important bank transaction. Agent a may ask agent b for a recommendation of c because agent a does not know anything about c. Agent b, in this case, is a recommender that knows how to compute a trust value of c from a web of trust. Assume that b has evidence for both trusting and distrusting c. For instance, let us say that b trusts c 0.5 in the range [0, 1], where 0 is full absence of trust and 1 is full presence of trust, and that b distrusts c 0.2 in the range [0, 1], where 0 is full absence of distrust and 1 is full presence of distrust. Another way of saying this is that b trusts c at least to the extent 0.5, but also not more than 0.8. The length of the interval [0.5, 0.8] indicates how much b lacks information about c.

In this scenario, by getting the trust value 0.5 from b, a is losing valuable information indicating that b has some evidence to distrust c too. A similar problem occurs using the approach of Guha et al. [9]. In this case, b will pass on a value of 0.5 − 0.2 = 0.3 to a. Again, a is losing valuable trust provenance information indicating, for example, how much b lacks information about c.
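To see the loss concretely, here is a small Python sketch using the hypothetical values from Example 1; the variable names and the interval reading [t, 1 − d] are our own illustration, not part of any of the cited models:

```python
# Example 1, revisited: hypothetical evidence of b about c (values from the example).
t, d = 0.5, 0.2                     # trust degree and distrust degree

trust_only = t                      # single trust scale: the distrust evidence disappears
guha_style = t - d                  # Guha et al. [9]: the pair collapses to 0.3
pair = (t, d)                       # provenance-preserving: both degrees are kept

# The pair also induces the interval [t, 1 - d] = [0.5, 0.8]; its width shows
# how much information b lacks about c.
lower, upper = t, 1 - d
print(trust_only, guha_style, pair, (lower, upper))
```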
Example 2 (Ignorance). Agent a needs to establish an opinion about both agents c and d in order to find an efficient web service. To this end, agent a calls upon agent b for trust recommendations on agents c and d. Agent b completely distrusts agent c, hence agent b trusts agent c to degree 0. On the other hand, agent b does not know agent d, hence agent b trusts agent d to degree 0. As a result, agent b returns the same trust recommendation to agent a for both agents c and d, namely 0, but the meaning of this value is clearly different in both cases. With agent c, the lack of trust is caused by a presence of distrust, while with agent d, the absence of trust is caused by a lack of knowledge. This provenance information is vital for agent a to make a well informed decision. For example, if agent a has a high trust in TTP b, agent a will not consider agent c anymore, but agent a might ask for other opinions on agent d.

Example 3 (Contradictory Information). One of your friends tells you to trust a dentist, and another one of your friends tells you to distrust that same dentist. In this case, there are two TTPs, they are equally trusted, and they tell you the exact opposite thing. In other words, you have to deal with inconsistent information. What would be your aggregated trust score in the dentist? Models that work with only one scale cannot represent this: taking e.g. 0.5 as trust score (i.e. the average) is not a solution, because then we cannot differentiate this from a situation in which both of your friends trust the dentist to the extent 0.5.

Furthermore, what would you answer if someone asks you if the dentist can be trusted? A possible answer is: "I don't really know, because I have contradictory information about this dentist". Note that this is fundamentally different from "I don't know, because I have no information about him". In other words, a trust score of 0 is not a suitable option either, as it could imply both inconsistency and ignorance.

The examples above indicate the need for a model that preserves information on whether a "trust problem" is caused by a presence of distrust or rather by a lack of knowledge, as well as whether a "knowledge problem" is caused by having too little or rather too much, i.e. contradictory, information.

3. TRUST SCORE SPACE

We need a model that, on one hand, is able to represent the trust an agent may have in another agent in a given domain, and on the other hand, can evaluate the contribution of each aspect of trust to the overall trust score. As a result, such a model will be able to distinguish between different cases of trust provenance. To this end, we introduce a new structure, called the trust score space BL.

Definition 1 (Trust Score Space). The trust score space

  BL = ([0, 1]², ≤t, ≤k, ¬)

consists of the set [0, 1]² of trust scores and two orderings defined by

  (x1, x2) ≤t (y1, y2) iff x1 ≤ y1 and x2 ≥ y2
  (x1, x2) ≤k (y1, y2) iff x1 ≤ y1 and x2 ≤ y2

for all (x1, x2) and (y1, y2) in [0, 1]². Furthermore ¬(x1, x2) = (x2, x1).

The negation ¬ serves to impose a relationship between the lattices ([0, 1]², ≤t) and ([0, 1]², ≤k):

  (x1, x2) ≤t (y1, y2) ⇒ ¬(x1, x2) ≥t ¬(y1, y2)
  (x1, x2) ≤k (y1, y2) ⇒ ¬(x1, x2) ≤k ¬(y1, y2),

and ¬¬(x1, x2) = (x1, x2). In other words, ¬ is an involution that reverses the ≤t-order and preserves the ≤k-order. One can easily verify that the structure BL is a bilattice [3, 8].

Figure 2 shows the bilattice BL, along with some examples of trust scores. The first lattice ([0, 1]², ≤t) orders the trust scores going from complete distrust (0, 1) to complete trust (1, 0). The other lattice ([0, 1]², ≤k) evaluates the amount of available trust evidence, going from a "shortage of evidence", x1 + x2 < 1 (incomplete information), to an "excess of evidence", namely x1 + x2 > 1 (inconsistent information). In the extreme cases, there is no information available at all (0, 0), or there is evidence that says that an agent is to be trusted fully as well as evidence that states that the same agent is completely unreliable: (1, 1).

Figure 2: Trust score space BL
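The orderings and the negation of Definition 1 are straightforward to operationalise. The sketch below is a minimal Python rendering, assuming nothing beyond the definition itself; the class and method names are our own choice:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustScore:
    """A trust score (t, d) in [0,1]^2: t = trust degree, d = distrust degree."""
    t: float
    d: float

    def leq_t(self, other: "TrustScore") -> bool:
        # trust ordering <=_t: at most as much trust and at least as much distrust
        return self.t <= other.t and self.d >= other.d

    def leq_k(self, other: "TrustScore") -> bool:
        # knowledge ordering <=_k: at most as much trust and at most as much distrust evidence
        return self.t <= other.t and self.d <= other.d

    def neg(self) -> "TrustScore":
        # involution that reverses <=_t and preserves <=_k
        return TrustScore(self.d, self.t)

    def evidence_balance(self) -> float:
        # t + d - 1: negative = shortage of evidence, positive = excess (inconsistency)
        return self.t + self.d - 1.0

IGNORANCE, INCONSISTENCY = TrustScore(0, 0), TrustScore(1, 1)
FULL_DISTRUST, FULL_TRUST = TrustScore(0, 1), TrustScore(1, 0)

assert FULL_DISTRUST.leq_t(FULL_TRUST)      # extremes of the trust lattice
assert IGNORANCE.leq_k(INCONSISTENCY)       # extremes of the knowledge lattice
assert FULL_DISTRUST.neg() == FULL_TRUST    # negation swaps trust and distrust
```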
The trust score space allows our model to preserve trust provenance by simultaneously representing partial trust, partial distrust, partial ignorance and partial inconsistency, and treating them as different, related concepts. Moreover, by using a bilattice model the aforementioned problems disappear:

1. By using trust scores we can now distinguish full distrust (0, 1) from ignorance (0, 0) and, analogously, full trust (1, 0) from inconsistency (1, 1). This is an improvement over e.g. [1, 21].

2. We can deal with both incomplete information and inconsistency (improvement over [6]).

3. We do not lose important information (improvement over [9]), because, as will become clear in the next section, we keep the trust and distrust degree separated throughout the whole trust process (propagation and other operations).

The available trust information is modeled as a trust network that associates with each couple of agents a score drawn from the trust score space.

Definition 2 (Trust Network). A trust network is a couple (A, R) such that A is a set of agents and R is an A × A → BL mapping. For every a and b in A, we write R(a, b) = (R+(a, b), R−(a, b)).

• R(a, b) is called the trust score of a in b.
• R+(a, b) is called the trust degree of a in b.
• R−(a, b) is called the distrust degree of a in b.

R should be thought of as a snapshot taken at a certain moment, since the trust learning mechanism involves recalculating trust scores, for instance through trust propagation as discussed next.
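As a small illustration of Definition 2, the following sketch stores R as a mapping from agent pairs to trust scores; reading absent pairs as ignorance (0, 0) is our own convention for the illustration, not something the definition prescribes:

```python
Agent = str
Score = tuple[float, float]        # (R+(a, b), R-(a, b)) in [0, 1]^2

class TrustNetwork:
    """A trust network (A, R) with R : A x A -> BL (Definition 2)."""

    def __init__(self, agents: set[Agent]):
        self.agents = agents
        self.R: dict[tuple[Agent, Agent], Score] = {}

    def set_score(self, a: Agent, b: Agent, trust: float, distrust: float) -> None:
        self.R[(a, b)] = (trust, distrust)

    def score(self, a: Agent, b: Agent) -> Score:
        # pairs without recorded evidence are read as ignorance (0, 0)
        return self.R.get((a, b), (0.0, 0.0))

net = TrustNetwork({"a", "b", "c"})
net.set_score("a", "b", 0.8, 0.1)   # hypothetical: a largely trusts b
net.set_score("b", "c", 0.5, 0.2)   # b's evidence about c, as in Example 1
print(net.score("a", "c"))          # (0.0, 0.0): no direct evidence yet
```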
4. TRUST SCORE PROPAGATION

We often encounter situations in which we need trust information about an unknown person. For instance, if you are in search of a new dentist, you can ask your friends' opinion about dentist Evans. If they do not know Evans personally, they can ask a friend of theirs, and so on. In virtual trust networks, propagation operators are used to handle this problem. The simplest case (atomic propagation) can informally be described as follows (Figure 3): if the trust score of agent a in agent b is p, and the trust score of b in agent c is q, what information can be derived about the trust score of a in c?

Figure 3: Atomic propagation

When propagating only trust, the most commonly used operator is multiplication. When taking into account also distrust, the picture gets more complicated, as the following example illustrates.

Example 4. Suppose agent a trusts agent b and agent b distrusts agent c. It is reasonable to assume that, based on this, agent a will also distrust agent c, i.e. R(a, c) = (0, 1). Now, switch the couples. If a distrusts b and b trusts c, there are several options for the trust score of a in c: a possible reaction for a is to do the exact opposite of what b recommends, in other words to distrust c, R(a, c) = (0, 1). But another interpretation is to ignore everything b says, hence the result of the propagation is ignorance, R(a, c) = (0, 0).

As this example indicates, there are likely multiple possible propagation operators for trust scores. We expect that the choice for a particular BL × BL → BL mapping to model the trust score propagation will depend on the application and the context, but might also differ from person to person. Thus, the need for provenance-preserving trust models becomes more evident.

To study some possible propagation schemes, let us first consider the bivalent case, i.e. when trust and distrust degrees assume only the values 0 or 1. For agents a and b, we use R+(a, b), R−(a, b), and ∼R−(a, b) as shorthands for respectively R+(a, b) = 1, R−(a, b) = 1 and R−(a, b) = 0. We consider the following three different propagation schemes (a, b and c are agents):

1. R+(a, c) ≡ R+(a, b) ∧ R+(b, c)
   R−(a, c) ≡ R+(a, b) ∧ R−(b, c)

2. R+(a, c) ≡ R+(a, b) ∧ R+(b, c)
   R−(a, c) ≡ ∼R−(a, b) ∧ R−(b, c)

3. R+(a, c) ≡ (R+(a, b) ∧ R+(b, c)) ∨ (R−(a, b) ∧ R−(b, c))
   R−(a, c) ≡ (R+(a, b) ∧ R−(b, c)) ∨ (R−(a, b) ∧ R+(b, c))

In scheme (1) agent a only listens to whom he trusts, and ignores everyone else. Scheme (2) is similar, but in addition agent a takes over distrust information from a not distrusted (hence possibly unknown) third party. Scheme (3) corresponds to an interpretation in which the enemy of an enemy is considered to be a friend, and the friend of an enemy is considered to be an enemy.
is considered to be a friend, and the friend of an enemy is
are in search of a new dentist, you can ask your friends’
considered to be an enemy.
opinion about dentist Evans. If they do not know Evans
In our model, besides 0 and 1, we also allow partial trust
personally, they can ask a friend of theirs, and so on. In
and distrust. Hence we need suitable extensions of the logi-
virtual trust networks, propagation operators are used to
cal operators that are used in (1), (2) and (3). For conjunc-
handle this problem. The simplest case (atomic propaga-
tion, disjunction and negation, we use respectively a t-norm
tion) can informally be described as (fig. 3): if the trust
T , a t-conorm S and a negator N . They represent large
score of agent a in agent b is p, and the trust score of b
classes of logic connectives, from which specific operators,
in agent c is q, what information can be derived about the
each with their own behaviour, can be chosen, according to
trust score of a in c? When propagating only trust, the most
the application or context.
commonly used operator is multiplication. When taking into
T and S are increasing, commutative and associative [0, 1]
× [0, 1] → [0, 1] mappings satisfying T (x, 1) = S(x, 0) = x
for all x in [0, 1]. Examples of T are the minimum and the
product, while S could be the maximum or the mapping
SP defined by SP (x, y) = x + y − x · y, for all x and y in
[0, 1]. N is a decreasing [0, 1] → [0, 1] mapping satisfying
N (0) = 1 and N (1) = 0; the most commonly used one is
Ns (x) = 1 − x.
Generalizing the logical operators in scheme (1), (2), and
(3) accordingly, we obtain the propagation operators of Ta-
ble 2. Each one can be used for modeling a specific behav-
Figure 3: Atomic propagation iour. Starting from a trust score (t1 , d1 ) of agent a in agent
Starting from a trust score (t1, d1) of agent a in agent b, and a trust score (t2, d2) of agent b in agent c, each propagation operator computes a trust score for agent a in agent c. Since the resulting value is again an element of the trust score space, trust provenance is preserved.

Table 2: Propagation operators, using TTP b with R(a, b) = (t1, d1) and R(b, c) = (t2, d2)

  Notation  Trust score of a in c                               Meaning
  Prop1     (T(t1, t2), T(t1, d2))                              Skeptical, take no advice from enemies or unknown people.
  Prop2     (T(t1, t2), T(N(d1), d2))                           Paranoid, distrust even unknown people's enemies.
  Prop3     (S(T(t1, t2), T(d1, d2)), S(T(t1, d2), T(d1, t2)))  Friend of your enemy is your enemy too.
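The following sketch instantiates the operators of Table 2 with one possible choice of connectives, namely the product t-norm, the probabilistic sum SP and the standard negator Ns; the example scores are hypothetical:

```python
# Propagation operators of Table 2 with T = product, S = S_P, N = N_s.
def T(x, y): return x * y
def S(x, y): return x + y - x * y
def N(x): return 1 - x

def prop1(ab, bc):
    (t1, d1), (t2, d2) = ab, bc
    return T(t1, t2), T(t1, d2)                                  # skeptical

def prop2(ab, bc):
    (t1, d1), (t2, d2) = ab, bc
    return T(t1, t2), T(N(d1), d2)                               # paranoid

def prop3(ab, bc):
    (t1, d1), (t2, d2) = ab, bc
    return S(T(t1, t2), T(d1, d2)), S(T(t1, d2), T(d1, t2))      # enemy-of-an-enemy

print(prop1((0.8, 0.1), (0.5, 0.2)))   # hypothetical scores: ~(0.4, 0.16)
print(prop3((0.0, 1.0), (0.0, 1.0)))   # fully distrusted TTP distrusts c: (1.0, 0.0)
```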
The remainder of this section is devoted to the investigation of some potentially useful properties of these propagation operators. In doing so, we keep the logical operators as generic as possible, in order to get a clear view on their general behaviour. First of all, if one of the arguments of a propagation operator can be replaced by a higher trust score w.r.t. the knowledge ordering without decreasing the resulting trust score, we call the propagation operator knowledge monotonic.

Definition 3 (Knowledge Monotonicity). A propagation operator f on BL is said to be knowledge monotonic iff for all x, y, z, and u in BL,

  x ≤k y and z ≤k u implies f(x, z) ≤k f(y, u).

Knowledge monotonicity reflects that the better you know how well you should trust or distrust user b who is recommending user c, the better you know how well to trust or distrust user c. Although this behaviour seems natural, not all operators of Table 2 abide by it.
Proposition 1. Prop1 and Prop3 are knowledge monotonic. Prop2 is not knowledge monotonic.

Proof. The knowledge monotonicity of Prop1 and Prop3 follows from the monotonicity of T and S. To see that Prop2 is not knowledge monotonic, consider

  Prop2((0.2, 0.7), (0, 1)) = (0, 0.3)
  Prop2((0.2, 0.8), (0, 1)) = (0, 0.2),

with Ns as negator. We have that (0.2, 0.7) ≤k (0.2, 0.8) and (0, 1) ≤k (0, 1), but (0, 0.3) ≰k (0, 0.2).

The intuitive explanation behind the non-knowledge-monotonic behaviour of Prop2 is that, using this propagation operator, agent a takes over distrust from a stranger b, hence giving b the benefit of the doubt, but when a starts to distrust b (thus knowing b better), a will adopt b's opinion to a lesser extent (in other words: a derives less knowledge).
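The counterexample in the proof is easy to replay numerically, using the product t-norm and Ns as in the proof; the printed values are approximate because of floating point:

```python
# Prop2 with T = product and N = N_s; leq_k is the knowledge ordering of Definition 1.
def prop2(ab, bc):
    (t1, d1), (t2, d2) = ab, bc
    return t1 * t2, (1 - d1) * d2

def leq_k(x, y):
    return x[0] <= y[0] and x[1] <= y[1]

x, y, z = (0.2, 0.7), (0.2, 0.8), (0.0, 1.0)
print(prop2(x, z), prop2(y, z))          # ~(0.0, 0.3) and ~(0.0, 0.2)
print(leq_k(x, y))                       # True: the first argument gained knowledge...
print(leq_k(prop2(x, z), prop2(y, z)))   # False: ...but the result did not (Proposition 1)
```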
Knowledge monotonicity is not only useful to provide more insight in the propagation operators, but it can also be used to establish a lower or upper bound for the actual propagated trust score without immediate recalculation. This might be useful in a situation where one of the agents has updated its trust score in another agent and there is not enough time to recalculate the whole propagation chain.

Besides atomic propagation, we need to be able to consider longer propagation chains, so TTPs can in turn consult their own TTPs and so on. Prop1 turns out to be associative, which means that we can extend it to more scores without ambiguity.

Proposition 2 (Associativity). Prop1 is associative, i.e. for all x, y, and z in BL it holds that

  Prop1(Prop1(x, y), z) = Prop1(x, Prop1(y, z)).

Prop2 and Prop3 are not associative.

Proof. The associativity of Prop1 can be proved by taking into account the associativity of the t-norm. Examples can be constructed to show that the other two propagation operators are not associative. Take for example N(x) = 1 − x and T(x, y) = x·y; then

  Prop2((0.3, 0.6), Prop2((0.1, 0.2), (0.8, 0.1))) = (0.024, 0.032)

while on the other hand

  Prop2(Prop2((0.3, 0.6), (0.1, 0.2)), (0.8, 0.1)) = (0.024, 0.092).

With an associative propagation operator, the overall trust score computed from a longer propagation chain is independent of the choice of which two subsequent trust scores to combine first. When dealing with a non-associative operator, however, it should be specified which pieces of the propagation chain to calculate first.

Finally, it is interesting to note that in some cases the overall trust score in a longer propagation chain can be determined by looking at only one agent. For instance, if we use Prop1 or Prop3, and there occurs a missing link (0, 0) anywhere in the propagation chain, the result will contain no useful information (in other words, the final trust score is (0, 0)). Hence as soon as one of the agents is ignorant, we can dismiss the entire chain. Notice that this also holds for Prop3, despite the fact that it is not an associative operator. Using Prop1, the same conclusion (0, 0) can be drawn if at any position in the chain, except the last one, there occurs complete distrust (0, 1).
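A quick numerical check of Proposition 2 and of the "missing link" remark, again with the product t-norm and the standard negator; the extra chain values are hypothetical:

```python
def prop1(ab, bc):
    (t1, d1), (t2, d2) = ab, bc
    return t1 * t2, t1 * d2

def prop2(ab, bc):
    (t1, d1), (t2, d2) = ab, bc
    return t1 * t2, (1 - d1) * d2

x, y, z = (0.3, 0.6), (0.1, 0.2), (0.8, 0.1)
lhs, rhs = prop1(prop1(x, y), z), prop1(x, prop1(y, z))
print(all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs)))   # True: Prop1 is associative
print(prop2(x, prop2(y, z)), prop2(prop2(x, y), z))        # ~(0.024, 0.032) vs ~(0.024, 0.092)

# A missing link (0, 0) wipes out a Prop1 chain, wherever it occurs:
print(prop1(prop1((0.9, 0.0), (0.0, 0.0)), (0.7, 0.2)))    # (0.0, 0.0)
```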
5. CONCLUSIONS AND FUTURE WORK

We have introduced a new model that can simultaneously handle partial trust and distrust. We showed that our bilattice-based model alleviates some of the existing problems of trust models, more specifically concerning trust provenance. In addition, this new model can handle incomplete and excessive information, which occurs frequently in virtual communities, such as the WWW in general and trust networks in particular. Therefore, this new provenance-preserving trust model can lead to an improvement of many existing web applications, such as P2P networks, question answering systems and recommender systems.

A first step in our future research involves the further development and the choice of trust score propagation operators. Of course, the trust behaviour of users depends on the situation and the application, and is in most cases relative to a goal or a task. A friend, e.g., can be trusted for answering questions about movies, but not necessarily about doctors. Therefore, we are preparing some specific scenarios in which trust is needed to make a certain decision (e.g. which doctor to visit, which movie to see). According to these scenarios, we will prepare questionnaires, in which we aim to determine how propagation of trust scores takes place. Gathering such data, we hope to get a clear view on trust score propagation in real life, and how to model it in applications. We do not expect to find one particular propagation schema, but rather several, depending on a person's nature. When we obtain the results of the questionnaire, we will also be able to verify the three propagation operators we proposed in this paper. Furthermore, we would like to investigate the behaviour of the operators when using particular t-norms, t-conorms and negators, and examine whether it is possible to use other classes of operators that do not use t-(co)norms.

A second problem which needs to be addressed is aggregation. In our domain of interest, namely a gradual approach to both trust and distrust, there are no aggregation operators yet. We will start by investigating whether it is possible to extend existing aggregation operators, like e.g. the ordered weighted averaging aggregation operator [20], fuzzy integrals [5, 18], etc., but we assume that not all the problems will be solved in this way, and that we will also need to introduce new specific aggregation operators.

Finally, trust and distrust are not static; they can change after a bad (or good) experience. Therefore, it is also necessary to search for appropriate updating techniques.

Our final goal is the creation of a framework that can represent partial trust, distrust, inconsistency and ignorance, that contains appropriate operators (propagation, aggregation, update) to work with those trust scores, and that can serve as a starting point to improve the quality of many web applications. In particular, as we are aware that trust is experienced in different ways, according to the application and context, we aim at a further development of our model for one specific application.

6. ACKNOWLEDGMENTS

Patricia Victor would like to thank the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen) for funding her research. Chris Cornelis would like to thank the Research Foundation-Flanders for funding his research.

7. REFERENCES

[1] A. Abdul-Rahman and S. Hailes. Supporting trust in virtual communities. In Proceedings of the 33rd Hawaii International Conference on System Sciences, pages 1769–1777, 2000.
[2] F. Almenárez, A. Marín, C. Campo, and C. García. PTM: A pervasive trust management model for dynamic open environments. In First Workshop on Pervasive Security, Privacy and Trust (PSPT2004), in conjunction with Mobiquitous 2004, 2004.
[3] O. Arieli, C. Cornelis, G. Deschrijver, and E. E. Kerre. Bilattice-based squares and triangles. Lecture Notes in Computer Science, 3571:563–574, 2005.
[4] K. Atanassov. Intuitionistic fuzzy sets. Fuzzy Sets and Systems, 20:87–96, 1986.
[5] G. Choquet. Theory of capacities. Annales de l'Institut Fourier, 5:131–295, 1953.
[6] M. De Cock and P. Pinheiro da Silva. A many-valued representation and propagation of trust and distrust. Lecture Notes in Computer Science, 3849:108–113, 2006.
[7] R. Falcone, G. Pezzulo, and C. Castelfranchi. A fuzzy approach to a belief-based trust computation. Lecture Notes in Artificial Intelligence, 2631:73–86, 2003.
[8] M. Ginsberg. Multi-valued logics: A uniform approach to reasoning in artificial intelligence. Computer Intelligence, 4:256–316, 1988.
[9] R. Guha, R. Kumar, P. Raghavan, and A. Tomkins. Propagation of trust and distrust. In Proceedings of the 13th International World Wide Web Conference, pages 403–412, 2004.
[10] P. Herrmann, V. Issarny, and S. Shiu (eds). Lecture Notes in Computer Science, volume 3477, 2005.
[11] A. Jøsang. A logic for uncertain probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-based Systems, 9(3):279–311, 2001.
[12] A. Jøsang and S. Knapskog. A metric for trusted systems. In Proc. 21st NIST-NCSC National Information Systems Security Conference, pages 16–29, 1998.
[13] S. Kamvar, M. Schlosser, and H. Garcia-Molina. The EigenTrust algorithm for reputation management in P2P networks. In Proceedings of the 12th International World Wide Web Conference, pages 640–651, 2003.
[14] P. Massa and P. Avesani. Trust-aware collaborative filtering for recommender systems. In Proceedings of the Federated International Conference On The Move to Meaningful Internet: CoopIS, DOA, ODBASE, pages 492–508, 2004.
[15] M. Nikolova, N. Nikolov, C. Cornelis, and G. Deschrijver. Survey of the research on intuitionistic fuzzy sets. Advanced Studies in Contemporary Mathematics, 4(2):127–157, 2002.
[16] M. Richardson, R. Agrawal, and P. Domingos. Trust management for the semantic web. In Proceedings of the Second International Semantic Web Conference, pages 351–368, 2003.
[17] M. Riguidel and F. Martinelli (eds). Security, Dependability and Trust. Thematic Group Report of the European Coordination Action Beyond the Horizon: Anticipating Future and Emerging Information Society Technologies, http://www.beyond-the-horizon.net, 2006.
[18] M. Sugeno. Theory of fuzzy integrals and its applications. PhD thesis, 1974.
[19] W. Tang, Y. Ma, and Z. Chen. Managing trust in peer-to-peer networks. Journal of Digital Information Management, 3:58–63, 2005.
[20] R. Yager. On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Transactions on Systems, Man, and Cybernetics, 18:183–190, 1988.
[21] I. Zaihrayeu, P. Pinheiro da Silva, and D. McGuinness. IWTrust: Improving user trust in answers from the web. In Proceedings of the Third International Conference On Trust Management, pages 384–392, 2005.