=Paper=
{{Paper
|id=None
|storemode=property
|title=TRR: An integrated Reliability-Reputation Model for Agent Societies
|pdfUrl=https://ceur-ws.org/Vol-741/ID6_RosaciSarneGarruzzo.pdf
|volume=Vol-741
|dblpUrl=https://dblp.org/rec/conf/woa/RosaciSG11
}}
==TRR: An integrated Reliability-Reputation Model for Agent Societies==
D. Rosaci (domenico.rosaci@unirc.it), G.M.L. Sarné (sarne@unirc.it), S. Garruzzo (salvatore.garruzzo@unirc.it)

DIMET, Università “Mediterranea” di Reggio Calabria, Loc. Feo di Vito, 89122 Reggio Calabria (Italy)
Abstract—Several reliability-reputation models to support agents’ decisions have been proposed in the past, and many of them combine reliability and reputation into a synthetic trust measure. In this context, we present a new trust model, called TRR, that considers, from a mathematical viewpoint, the interdependence between these two trust measures. This important feature of TRR is exploited to dynamically compute a parameter determining the importance of the reliability with respect to the reputation. Some experiments performed on the well-known ART platform show the advantages, in terms of effectiveness, introduced by the TRR approach.

I. INTRODUCTION

In a multi-agent system (MAS) context, trust-based methodologies are recognized as an effective solution to increase MAS performances [17], [28], [29] by promoting social interactions, particularly when software agents are distributed in large-scale networks and reciprocally interact [23]. A trust relationship between two interacting agents (i.e., a trustor requiring a service from a trustee) can involve multiple dimensions, based on the chosen perspective. For instance, in e-service domains, trust is defined as: “The quantified belief by a trustor with respect to the competence, honesty, security and dependability of a trustee within a specified context” [12]. In particular, i) the competence refers to correctly and efficiently performing the requested tasks; ii) the honesty involves the absence of malicious behaviours; iii) the security means the capability to manage private data, avoiding their unauthorized access; iv) the reliability is the degree of reliance assigned to the provided services (e.g., the reliability of an e-Commerce agent is different if the price of the transaction is low enough or is very high).

However, reliability is an individual trust measure, while for the whole community trust is measured by the reputation, which is fundamental to decide whether an agent is a reliable interlocutor in the absence of sufficient knowledge about it.

To use reliability and reputation measures in MASs, a main issue is the possibility of suitably combining them to support agents’ decisions. Indeed, when an agent a has to choose a possible partner, it exploits its reliability model based on its past interactions with other agents. Besides, a has usually interacted with only a subset of the whole agent community, and often its past interactions with an agent are insufficient to obtain a representative trust measure. Thus a should also consider a reputation measure derived from a reputation model. If, for each candidate, both reliability and reputation measures are combined in a synthetic preference score, then a could use it to choose its best partner. In this case, the main question is: “How much should the user weight the reliability with respect to the reputation?”. To answer this question, the authors of [10] proposed a reliability-reputation model, called RRAF, but it has two main limitations, namely:

• The weight assigned to the reliability vs the reputation is arbitrarily set by the user, based only on his/her experience, without considering the system evolution (i.e., it does not give relevance to the reliability changes due, for instance, to new information acquired about the other agents and to the increased expertise level about the domain of interest).
• In RRAF, the trust measures perceived by each agent about the other agents are not mutually dependent. Indeed, let a and c be two agents that desire a trust opinion about the agent b. The agent a (resp., c) composes its trust opinion τ_ab (resp., τ_cb) by asking the agent c (resp., a) for its opinion about b. It is reasonable that τ_cb (resp., τ_ab) represents that opinion. This shows the dependence between the trust measures τ_ab and τ_cb. RRAF operates by considering the opinion that c provides to a about b (and vice versa) as a personal suggestion, not necessarily coinciding with τ_cb. A more accurate computation should consider these suggestions as coinciding with the trust measures that each agent has of the other agents, but this implies solving the mathematical relationship existing among all the trust measures.

To solve the two problems highlighted above, a new trust model, called Trust-Reliability-Reputation (TRR), is proposed in this paper. For each agent, this model builds a global trust evaluation merging both the agent’s reliability and reputation measures in a single score (as in RRAF) but without the
use of a fixed parameter to weight them (differently from RRAF). Instead, when the agent a computes the trust in another agent b, in TRR the weight representing the relevance given by a to the reliability with respect to the reputation is dynamically computed. This weight depends on the number of interactions performed between a and b and on the expertise of a in evaluating b. Moreover, TRR introduces a novel mechanism for computing the reputation where, differently from RRAF, the reputation perceived by an agent a about another agent b is based on the global trust that each other agent of the MAS has in b. This way, the overall trust measures are reciprocally correlated and we argue that they are more accurate than in RRAF, because the agent that is computing a trust measure receives from the other agents suggestions that are their actual trust measures instead of “arbitrary” values. Two considerations have to be made about this latter issue: i) our method of computing trust is applicable in MASs in which the agents are collaborative and share their trust measures with each other; ii) in order to apply TRR, each agent has to solve a linear system, instead of the simple computation required by the RRAF model.

To evaluate the performances of TRR with respect to RRAF, some tests have been executed on the well-known ART testbed [3]. The experimental results show a significant advantage, in terms of performance, introduced by TRR, while the reduction of the agent efficiency, due to a more complex computation of the trust measures, is practically negligible.

The paper is organized as follows. In Section II some related work is discussed. The multi-agent scenario is presented in Section III, while Section IV deals with the TRR reliability-reputation model. Section V proposes an experimental comparison between RRAF and TRR on the ART testbed and, finally, in Section VI some conclusions are drawn.

II. RELATED WORK

In an open MAS, trust-based approaches are available for determining the best partner to interact with on the basis of information derived from both direct experiences (i.e., reliability) and the opinions of others (i.e., reputation). However, each agent directly interacts only with a subset of the agent population and, therefore, it should also exploit the opinions of the other members of the community to have a reliable opinion about someone. Unfortunately, in a virtual environment some malicious behaviours are possible, encouraged also by the ease of changing one’s identity. To limit them, it is important to have an adequate number of agents providing their opinions, to avoid a partial depiction of agents’ reputation [4], and to prevent identity changes with some form of penalization and/or, for instance, by adopting a Public Key Infrastructure [18], [35].

In the literature, a great number of metrics and approaches for measuring reliability and reputation have been proposed [9], [12], [15], [19]–[21], [24]–[26], [28], [30]. Some of them integrate reliability and reputation into a synthetic measure [2], [7], [13], [16] but leave to the user the task of weighting the reliability with respect to the reputation. However, to compare such trust strategies and their computational costs in a competitive environment, the Agent Reputation and Trust (ART) testbed platform is available [3]. In the following, the examined approaches will be those that, to the best of our knowledge, come closest to the material presented in this paper, pointing out differences and similarities with our proposal.

Trust and reputation are represented in [33] by introducing a probabilistic reputation approach in the Ntropi model [1], which is truly decentralized, without reliance on any third party, and allows all the entities to freely decide how to trust. Reputation and experiential information are combined in Ntropi in a single trust measure exploited to decide whether to perform the interaction. An agent will rate this experience and will adjust its trust values based on the differences with the recommended ratings. In [33] a Dirichlet reputation algorithm [14] is added to the Ntropi model to set its parameters by using a Maximum Likelihood Estimation method on the observed data. Again for distributed MASs, in [11] an approach made up of time steps is presented that deals with uncertainty and ignorance and takes into account the number of interactions, data dispersion and variability. It computes trust based on three agent expectations, namely: past experiences with that agent (direct); advertisements received from that agent and discrepancies between experience and past advertisements (advertisement-based); recommendations received from others about that agent and discrepancies between experience and past recommendations (recommendation-based). A Global Trust measure aggregates the three components into a single belief referred to the next time step. The system has been tested on the ART testbed [3].

FIRE [13] is conceived for open MASs where agents are benevolent and honest in exchanging information. It considers several trust and reputation sources, in detail: Interaction trust, represented by the agent’s direct experience; Role-based trust, taking into account the agents’ relationships; Witness reputation, considering attestations about the behaviour of an agent; Certified reputation about an agent, witnessed by third parties suggested by the agent itself. As a result, FIRE works correctly in many usual occurrences, but it requires a lot of parameters to be set. REGRET [27] is a modular trust and reputation system for cooperative MASs exploiting impressions about other agents derived from both direct experiences (called direct trust) and a reputation model aggregating three types of reputation (i.e., Witness, based on the information coming from witnesses; Neighborhood, calculated by using social relations; System, depending on roles and general properties). REGRET considers the witnesses’ credibility, and each agent can neglect one, more or all the reputation components. Finally, a common semantic, called ontological dimension, models the agents’ personal points of view considering the multi-dimensional aspects of the reputation.

Within a grid context, in [32] the trust of both clients and providers is computed, using both direct and indirect information and removing biased feedbacks by using a rank correlation method. Direct trust is computed directly by the initiator and it is dominant over the indirect trust, measured
by the feedbacks received from agents (in the same or other domains) and weighted based on their credibility, determined on criteria such as similarity, activity, specificity, etc. Moreover, the reputations of the client and the provider are calculated on different parameters, their relationships being asymmetric. In the presence of uncertain and incomplete information a fuzzy approach can be used, as in [31], where the system collects and weights the opinions of each user about the other users to obtain aggregated trustworthiness scores. Social networks and probabilistic trust models are examined in [8] for different contexts and settings, but the authors conclude that in several scenarios these techniques exhibit unsatisfactory performances.

Trust has been particularly investigated for file sharing services over P2P networks [15], [19], [34]. In this context, the EigenTrust algorithm has been applied in [16], where each peer rates its transactions to build a trust representation of the other peers, called Local Trust. EigenTrust assumes trust transitivity in order to compute the Global Trust values. Each peer collects from the other peers their Local Trust values which, suitably weighted by means of the peers’ trustworthiness, are aggregated in a trust matrix in which the trust values asymptotically converge to its eigenvalues. The presence of pre-trusted users, always trusted, can minimize the influence of malicious peers performing collusive activities.

Nowadays, the opportunities given by wireless technologies to work in mobile contexts, also in the absence of stable connections, place great relevance on trusting the counterpart. For instance, CellTrust [18] manages direct and reputation information (suitably weighted) in a centralized manner by using cryptographic techniques. A Bayesian approach is used in [6], where reputation exploits a “second-hand” criterion in which transitive reputation is accepted only if it agrees with the direct ratings. To contrast liars in ad hoc networks, in [22] a deviation test is adopted, independently of the specific implementation, within a stochastic process, but tests show that this model fails when the number of liars exceeds a certain threshold.

The cited systems trust an agent by exploiting both direct experiences and information about its reputation within the community, as in TRR. In [1], [33] the trust in an agent is computed, as in TRR, only based on individual criteria but, for instance, in REGRET [27] a common ontology is adopted to make different trust representations uniform, and in [6], [16], [18], [22] trust is domain dependent. To counter malicious agents different strategies are adopted: TRR and [16], [18], [32] suitably weight the reputation sources and [16] also exploits peers that are always trusted, while in [1], [11], [33] discrepancies between computed trust and observed behaviour are considered to limit the effects of dishonest behaviours and, finally, other systems adopt a PKI approach (that is an orthogonal issue for many trust systems).

III. THE MULTI-AGENT COMMUNITY

In this section, the TRR scenario is described. Let S be a list of service categories and let C be a software agent community, where each agent a ∈ C can require a service from each other agent b ∈ C that, in its turn, can either accept or reject the request. If the request is accepted and the service consumed, then the agent a can evaluate its satisfaction and update its reliability model for b.

A. Reliability

The approach presented in this paper is independent of the particular reliability model chosen by each agent, and each agent has its own reliability model, independently of the other agents. The reliability of the agent a with respect to the agent b and the service category γ ∈ S can be represented by the tuple ρ^γ_ab = ⟨ϱ^γ_ab, i^γ_ab, e^γ⟩, where:

• ϱ^γ_ab ∈ [0, 1] is the reliability value that a gives to b referred to the services of the category γ, where ϱ^γ_ab = 0 (resp., 1) means that b is totally unreliable (resp., reliable).
• i^γ_ab is the number of interactions that a and b performed in the past with respect to the services of the category γ.
• e^γ ∈ [0, 1] is the expertise level that a assumes to have in evaluating the services of the category γ, which depends on the knowledge acquired by a about the category γ.

In other words, the TRR approach does not assume that the reliability perceived by a about b is a simple scalar value; for each category γ it is possible to have a different reliability. To this aim, it also considers both the knowledge level that a has of b (represented by i^γ_ab) in interactions associated with the category γ and the expertise level that a assumes to have about the services of the category γ (represented by e^γ).

B. Reputation

Let π^γ_ab be the reputation of b in the whole community as perceived by a with respect to services belonging to the category γ. To obtain it, a should ask each other agent of the community for an opinion about b in providing good services in the category γ. It is important to remark that in the TRR scenario several reputations of an agent b exist, since each agent has its personal perception of b’s reputation. This way, the reputation π^γ_ab is a function F of the set of opinions {o^γ_cb}, where o^γ_cb is the opinion that each agent c gives to a about b in providing good services of the category γ. Formally:

π^γ_ab = F({o^γ_cb})    (1)

C. Trust

Let τ^γ_ab be the trust measure that an agent a assigns to another agent b in a given category γ. In most of the approaches proposed in the past, this measure is obtained by combining in some way the reliability (ρ^γ_ab) and the reputation (π^γ_ab) measures, taking into account both the direct knowledge that a has about b’s capabilities and the suggestions that the other agents give to a about b. Some of these approaches also require specifying a coefficient (that we call α), ranging in [0, 1], that expresses the relevance assigned to the reliability with respect to the reputation. Vice versa, the relevance of the reputation with respect to the reliability is given by 1 − α. In the past approaches, this coefficient α is arbitrarily fixed to a given value according to the
user’s preference. Differently, we assume that α increases with: i) the number of interactions i^γ_ab carried out by the agent a with the agent b for the category γ, since the direct knowledge of a improves when the number of interactions increases; ii) the expertise level e^γ that the agent a has about the category γ, so that the more expert the agent a is, the greater will be its confidence in judging b’s capability and, consequently, in computing b’s reliability. Our viewpoint defines the α coefficient as an α^γ_ab coefficient, to remark its dependence on the agents a and b and on the category γ.

For evaluating a reasonable value for α^γ_ab, we propose to exploit a direct relationship with both the number of interactions i^γ_ab and the expertise e^γ, such that α^γ_ab will be 1 only if a is completely expert about the category γ and the number of interactions i^γ_ab is higher than or equal to a suitable threshold N (set by the system administrator). If i^γ_ab is higher than or equal to N, the parameter α^γ_ab will simply be equal to e^γ. Otherwise, if i^γ_ab is smaller than N, the parameter α^γ_ab will linearly depend on e^γ and i^γ_ab. More formally:

α^γ_ab = e^γ · (i^γ_ab / N)   if i^γ_ab < N
α^γ_ab = e^γ                  if i^γ_ab ≥ N    (2)

Therefore, the trust measure can be generally expressed as a function G depending on the reliability, the reputation and the α^γ_ab coefficient:

τ^γ_ab = G(ρ^γ_ab, π^γ_ab, α^γ_ab)    (3)

where:

α^γ_ab = α^γ_ab(i^γ_ab, e^γ)    (4)

D. An example of the TRR model

The TRR scenario covers most of the past trust approaches. For instance, in the RRAF approach [10], the reliability ρ^γ_ab depends only on the value of ϱ^γ_ab, since the parameters i^γ_ab and e^γ are not considered. The reliability is updated each time the agent b provides a service to a. To compute the new reliability value, the measure of the satisfaction expressed by a for this service is averaged with the current reliability value. Moreover, the reputation π^γ_ab is obtained by a by asking all the other agents for an opinion about b in providing services of the category γ and averaging these opinions to compute the new value of π^γ_ab. Finally, the trust value τ^γ_ab is computed as a weighted mean between reliability and reputation, where the reliability is weighted by a parameter α, set by the agent’s owner, and the reputation is weighted by 1 − α. Note that in RRAF the parameter α does not depend on either the category γ or the agent b, but it is the same for all the agents and the categories.

IV. THE TRUST-REPUTATION-RELIABILITY MODEL

In this section, the functions F and G chosen to define the Trust-Reputation-Reliability (TRR) model will be described.

[Fig. 1: The agent 1 evaluates the reputation of the agent 2 based on the suggestions of the agents 3, 4 and 5 (τ13 = 0.2, τ14 = 0.9, τ15 = 0.3; τ32 = 0.8, τ42 = 0.2, τ52 = 0.7).]

A. Reputation in the TRR model

Let π^γ_ab be the reputation that in TRR an agent a assigns to another agent b for a given category γ. It is obtained as a weighted mean of all the trust measures τ^γ_cb that each agent c (different from a and b) associates with b. In other words, the suggestion that each agent c gives to a about b is represented by the trust that c has in b. This suggestion coming from c is weighted by the trust measure τ^γ_ac that a has in c. Formally, the function F defined in Equation 1 becomes:

π^γ_ab = ( Σ_{c ∈ C−{a,b}} τ^γ_cb · τ^γ_ac ) / ( Σ_{c ∈ C−{a,b}} τ^γ_ac )    (5)

For instance, Figure 1 depicts a scenario in which the agent 1 has to evaluate the reputation π12 of the agent 2 (the category is omitted for simplicity). The agent 1 receives from the agents 3, 4 and 5 “suggestions” about the agent 2 (i.e., the trust that they assign to it), weighted by the agent 1 with the trust measures τ13, τ14 and τ15 that it assigns to the agents 3, 4 and 5, respectively. Thus, the weighted mean that gives the reputation assigned by the agent 1 to the agent 2 is:

π12 = (0.8 · 0.2 + 0.2 · 0.9 + 0.7 · 0.3)/(0.2 + 0.9 + 0.3) = 0.39

We remark that the high values suggested by the agents 3 and 5 (τ32 = 0.8 and τ52 = 0.7) have been marginally considered, owing to the small trust that the agent 1 assigns to them, while the computed reputation is closer to the suggestion given by the agent 4, to which the agent 1 assigns a high trust (τ14 = 0.9).

B. Trust in the TRR model

In order to compute the trust τ^γ_ab that the agent a assigns to the agent b in the category γ, we choose to use a weighted mean of the reliability value ϱ^γ_ab and the reputation value π^γ_ab, using the parameter α^γ_ab to weight the reliability value and 1 − α^γ_ab to weight the reputation. This way, the function G of Equation 3 has the following form:

τ^γ_ab = α^γ_ab · ϱ^γ_ab + (1 − α^γ_ab) · π^γ_ab    (6)

and, by considering Equation 5, it becomes:

τ^γ_ab = α^γ_ab · ϱ^γ_ab + (1 − α^γ_ab) · ( Σ_{c ∈ C−{a,b}} τ^γ_cb · τ^γ_ac ) / ( Σ_{c ∈ C−{a,b}} τ^γ_ac )    (7)
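As an illustration of the formulas above, the following minimal Python sketch (the function and variable names are ours, not part of the TRR specification) implements Equations 2, 5 and 6 and reproduces the Figure 1 example:

```python
def alpha(i_ab, e, N):
    """Weight of reliability vs reputation (Equation 2)."""
    return e * i_ab / N if i_ab < N else e

def reputation(suggested_trust, trust_in_suggester):
    """Weighted mean of the other agents' trust in the target (Equation 5)."""
    num = sum(trust_in_suggester[c] * t for c, t in suggested_trust.items())
    den = sum(trust_in_suggester.values())
    return num / den

def trust(rel_value, rep_value, a):
    """Weighted mean of reliability and reputation (Equation 6)."""
    return a * rel_value + (1 - a) * rep_value

# Figure 1 scenario: agent 1 evaluates the reputation of agent 2.
suggested_trust = {3: 0.8, 4: 0.2, 5: 0.7}     # tau_32, tau_42, tau_52
trust_in_suggester = {3: 0.2, 4: 0.9, 5: 0.3}  # tau_13, tau_14, tau_15
pi_12 = reputation(suggested_trust, trust_in_suggester)
print(round(pi_12, 2))  # 0.39
```

Note how the suggestion of agent 4 dominates the result purely through the weights: no extra credibility machinery is needed beyond the trust values themselves.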
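Since each trust value in Equation 7 also appears on the right-hand side of the other agents' equations, the values must be computed jointly rather than one at a time. The sketch below is our own illustration of this idea, not the paper's implementation: it freezes the mixing weights τ^γ_ac (here approximated by the reliability values, a hypothetical stand-in), which turns the equations for a single target agent b into a linear system with a unique solution, solvable with NumPy:

```python
import numpy as np

def solve_trust_for_target(b, rel, alpha, weights):
    """Solve tau_ab = alpha_ab * rel_ab + (1 - alpha_ab) * sum_c w_ac * tau_cb
    for all a != b, with mixing weights w_ac normalized over c in C - {a, b}.
    Entry [a, c] of `rel`/`weights`/`alpha` refers to agent a evaluating c."""
    n = rel.shape[0]
    agents = [a for a in range(n) if a != b]    # unknowns tau_ab, a != b
    idx = {a: k for k, a in enumerate(agents)}
    A = np.eye(len(agents))
    rhs = np.empty(len(agents))
    for a in agents:
        others = [c for c in agents if c != a]
        norm = sum(weights[a, c] for c in others)
        for c in others:
            # move the reputation term to the left-hand side
            A[idx[a], idx[c]] -= (1 - alpha[a, b]) * weights[a, c] / norm
        rhs[idx[a]] = alpha[a, b] * rel[a, b]
    x = np.linalg.solve(A, rhs)
    return {a: x[idx[a]] for a in agents}

rng = np.random.default_rng(0)
n, b = 5, 0
rel = rng.random((n, n))       # hypothetical reliability values
alpha = np.full((n, n), 0.6)   # hypothetical alpha coefficients
tau_b = solve_trust_for_target(b, rel, alpha, weights=rel)
```

The system matrix is strictly diagonally dominant (each off-diagonal row sums to 1 − α < 1), so a unique solution always exists, consistent with the uniqueness claim made below for the full system.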
Equation 7, written for all the n agents and all the m categories, respectively belonging to C and S, forms a system of m · n · (n − 1) linear equations, containing m · n · (n − 1) variables τ^γ_ab. This system is equivalent to that described in [5] and admits only one solution.

V. AN EXPERIMENTAL COMPARISON BETWEEN RRAF AND TRR

In this section, we perform some experiments using the ART platform. On ART, each agent takes the role of an art appraiser who gives appraisals on paintings presented by its clients. In order to fulfil its appraisals, each agent can ask other agents for opinions. These agents are also in competition among themselves and thus they may lie in order to fool opponents. The game is supervised by a simulator that runs in a synchronous, step-by-step manner, and it can be described as follows:

• The clients, simulated by the simulator, request opinions on paintings from the appraiser agents. Each painting belongs to an era. For each appraisal, an agent earns a given money amount that is stored in its bank amount BA.
• Each agent has a specific expertise level in each era, assigned by the simulator. The error made by an agent while appraising a painting depends on both this expertise and the price the appraiser decides to spend for that appraisal.
• An agent cannot appraise its own paintings, but it has to ask other agents to obtain opinions. Each opinion has a fixed cost for the agent.
• Each agent can obtain recommendations about another agent from other players. Each recommendation has a given price. This way, the agent can build a reputation model of the other agents.
• Agents weight each received opinion in order to compute the final evaluation of the paintings.
• At the end of each step, the accuracy of the agents’ final evaluations is compared, in order to determine the client share for each agent during the next step. In other words, the most accurate agent receives more clients.
• At the end of each step, the simulator reveals the real value of each painting, thus allowing each agent to update its reliability and reputation model.
• At the end of the game, the winner of the competition is the agent having the highest bank amount BA.

The purpose of our experiment is to analyze the improvements the TRR model introduces over the RRAF model. We have built two agents implementing the RRAF and TRR models, respectively, and we have run some games in the presence of different percentages of unreliable agents P. In particular, in the performed experiment 5 different agent populations, characterized by a size of N = 100 agents and a different percentage P of unreliable agents, have been considered. Namely, the 5 values of P we have considered are 10%, 30%, 50%, 70% and 90%. For each of these values, we have run an ART game, where the RRAF agent participates in each game using the parameter α = 0.71. This value was chosen according to [10], where the RRAF agent obtained the maximum bank amount using α = 0.71 under the same conditions. For each game, besides the RRAF and TRR agents, a population of 98 Simplet agents has run as competitors. Simplet is an agent that participated in the 2008 ART Competition, whose software can be downloaded at the ART site [3], and that uses a reliability-reputation model. We have programmed two different versions of the Simplet agent:

• the former with a low availability to pay for the opinions, thus generating unreliable answers to the opinion requests. This low availability is represented by the internal ART parameter cg = 1;
• the latter with a high availability to pay for the opinions, thus characterized by the parameter cg = 15.

[Fig. 2: Variation of the bank amount BA against the percentage of unreliable agents P, with population size N = 100.]

Figure 2 reports the results of this experiment, in terms of the variation of the bank amount BA of both the RRAF and TRR agents against the different percentages of unreliable agents P. We note that, while the RRAF agent reaches its maximum bank amount for P = 50%, as expected from [10], its performance decreases for the other values of P. This is due to the following reasons: i) the RRAF agent is not able to recognize unreliable agents effectively, and ii) it incurs useless costs to ask recommendations when the population is reliable (P < 50%). Differently from the RRAF agent, which has an α value that is fixed during the game for all the agents, TRR assigns a different α value for each era of each agent in the community, and it is also able to modify these values at each step of the game. This way, TRR gradually learns to recognize reliable agents, thus saving recommendation costs. Moreover, in TRR the reliability is a function also of the number of interactions (i^γ_ab) between trustor and trustee and of the expertise of the trustor (e^γ) in evaluating the services. As a consequence, TRR is able to better evaluate the reliability of the other agents, thus obtaining more significant results in terms of bank amount. Finally, Figure 2 shows that the performance of TRR is not influenced by the presence of unreliable agents.

VI. CONCLUSIONS

The large number of trust-based approaches in MASs that emerged in recent years implies the necessity of clearly understanding what are the advantages and the limitations
of using trust measures to improve the effectiveness of the [12] T. Grandison and M. Sloman, Trust Management Tools for Internet
systems. In particular, the two main measures considered in Applications, Proc. of the 1st Int. Conf. on Trust Management, pages
91–107. Springer, 2003.
the literature, i.e. reliability and reputation, should be suitably [13] T.D. Huynh, N.R. Jennings and N.R. Shadbolt, An Integrated Trust and
combined to obtain a trust measure to support agent decisions. Reputation Model for Open Multi-Agent System, Autonmous Agent and
In the past, we proposed a framework, called RRAF, to Multi Agent Systems 13, pages 119–154, 2006.
[14] A. Jösang and J. Haller, Dirichlet Reputation Systems, Proc. of the 2nd
build competitive agents provided with an internal reliability- Int. Conf. on Availability, Reliability and Security (ARES), pages 112–
reputation model, where the relevance of reliability with 119, IEEE Press, 2007.
respect to reputation is given by a suitable parameter. However, [15] A. Jöosang, R. Ismail and C. Boyd, A Survey of Trust and Reputation
Systems for Online Service Provision, Decision Support System, 43(2),
RRAF introduces some simplifications in computing the trust, pages 618–644, Elsevier, 2005.
that affected the effectiveness of its practical application. [16] S.D. Kamvar, M.T. Schlosser, H. Garcia-Molina, The Eigentrust Algo-
In this paper, it is proposed the TRR model to overcome the rithm for Reputation Management in P2P Networks, Proc. of the 12th Int.
Conf. on World Wide Web, (WWW ’03), pages 640–651, ACM, 2003.
RRAF limitations. The TRR model: i) dynamically computes the parameter representing the importance of the reliability with respect to the reputation, based on the evolution of the knowledge acquired by the agents over time; and ii) models the interdependence between the trust measures of the agents, considering that, when an agent a computes the trust measure about an agent b, the computation exploits the trust measures about b coming from every other agent of the community.
The TRR model has been tested by comparing it with RRAF on the standard ART testbed. The experimental results clearly show a significant improvement introduced by TRR in the effectiveness of the agents when computing the trust measures. We argue that such an improvement is strictly related to the capability of the trust model to capture the interdependence of the trust measures, highlighting the social dimension of the community in which the agents interact.
As for our ongoing research, we are developing more advanced studies of such social aspects. In particular, we plan to analyze how the characteristics of the agent population, e.g. honesty, competence, privacy requirements, etc., can be taken into account to design a more accurate trust model.
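The two features above can be illustrated with a minimal sketch. The function names, the saturation-based rule for the dynamic weight, and the plain averaging of the community's trust measures are illustrative assumptions, not the paper's exact TRR formulas: the sketch only shows how a trust measure can blend direct reliability with a community-derived reputation, with a weight that grows as direct experience accumulates.

```python
# Hypothetical sketch of a TRR-style trust combination. The weighting rule
# and aggregation below are illustrative assumptions, not the exact TRR
# equations from the paper.

def dynamic_beta(n_interactions, saturation=10):
    """Weight of direct reliability vs. reputation: starts at 0 when agent a
    has no direct experience with b and approaches 1 as interactions grow."""
    return n_interactions / (n_interactions + saturation)

def reputation(a, b, trust_of, community):
    """Reputation of b as seen by a: here simply the mean of the trust
    measures about b held by the other agents of the community (the
    interdependence between trust measures highlighted by TRR)."""
    others = [c for c in community if c not in (a, b)]
    return sum(trust_of[c][b] for c in others) / len(others)

def trust(a, b, reliability, trust_of, community, n_interactions):
    """Synthetic trust of a about b: a convex combination of a's direct
    reliability measure and the community-derived reputation of b."""
    beta = dynamic_beta(n_interactions)
    return beta * reliability + (1 - beta) * reputation(a, b, trust_of, community)

# Example: with 10 direct interactions (beta = 0.5), reliability 0.9 and a
# community reputation of 0.7, agent a's trust in b is 0.5*0.9 + 0.5*0.7 = 0.8.
community = ["a", "b", "c", "d"]
trust_of = {"c": {"b": 0.8}, "d": {"b": 0.6}}
print(trust("a", "b", 0.9, trust_of, community, n_interactions=10))  # 0.8
```

With zero direct interactions the weight is 0 and the agent relies entirely on the community's opinion, which matches the observation in the abstract that an agent's own past interactions are often insufficient to obtain a representative trust measure.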
REFERENCES
[1] A. Abdul-Rahman and S. Hailes, A Distributed Trust Model, Proc. of the 1997 Work. on New Security Paradigms (NSPW '97), pages 48–60, ACM Press, 1997.
[2] K. Aberer and Z. Despotovic, Managing Trust in a Peer-2-Peer Information System, Proc. of the 10th Int. Conf. on Information and Knowledge Management (CIKM '01), pages 310–317, ACM Press, 2001.
[3] ART-Testbed, http://megatron.iiia.csic.es/art-testbed/, 2011.
[4] A. Birk, Boosting Cooperation by Evolving Trust, Applied Artificial Intelligence, 14(8), pages 769–784, Taylor & Francis, 2000.
[5] F. Buccafurri, L. Palopoli, D. Rosaci and G.M.L. Sarné, Modeling Cooperation in Multi-Agent Communities, Cognitive Systems Research, 5(3), pages 171–190, Elsevier, 2004.
[6] S. Buchegger and J.Y. Le Boudec, A Robust Reputation System for P2P and Mobile Ad-hoc Networks, Proc. of the 2nd Workshop on the Economics of Peer-to-Peer Systems (P2PEcon), 2004.
[7] K. Burton, The Design of the OpenPrivacy Distributed Reputation System, http://www.peerfear.org/papers/openprivacy-reputation.pdf, 2002.
[8] Z. Despotovic and K. Aberer, P2P Reputation Management: Probabilistic Estimation vs. Social Networks, Computer Networks, 50(4), pages 485–500, Elsevier, 2006.
[9] R. Falcone and C. Castelfranchi, From Dependence Networks to Trust Networks, Proc. of the 11th AAMAS Workshop on Trust in Agent Societies (Trust '09), pages 13–26, 2009.
[10] S. Garruzzo and D. Rosaci, The Roles of Reliability and Reputation in Competitive Multi Agent Systems, Proc. of the COOPIS Conf. 2010, LNCS 6426, pages 439–442, Springer, 2010.
[11] M. Gómez, J. Carbó and C. Benac-Earle, An Anticipatory Trust Model for Open Distributed Systems, LNAI 4250, pages 307–324, 2007.
[17] B. Khosravifar, M. Gomrokchi, J. Bentahar and P. Thiran, Maintenance-based Trust for Multi-Agent Systems, Proc. of the 8th Int. Conf. on Autonomous Agents and Multiagent Systems, pages 1017–1024, Int. Foundation for Autonomous Agents and Multiagent Systems, 2009.
[18] G. Lax and G.M.L. Sarné, CellTrust: a Reputation Model for C2C Commerce, Electronic Commerce Research, 8(4), pages 193–216, 2006.
[19] S. Marti and H. Garcia-Molina, Taxonomy of Trust: Categorizing P2P Reputation Systems, Computer Networks, 50(4), pages 472–484, 2006.
[20] P. Massa, A Survey of Trust Use and Modeling in Current Real Systems, in R. Song, L. Korba, and G. Yee, editors, Trust in E-Services: Technologies, Practices and Challenges, Idea Group Publishing, 2006.
[21] L. Mui, M. Mohtashemi and A. Halberstadt, Notions of Reputation in Multi-Agents Systems: a Review, Proc. of the First Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS '02), pages 280–287, ACM Press, 2002.
[22] J. Mundinger and J.Y. Le Boudec, Analysis of a Reputation System for Mobile Ad-Hoc Networks with Liars, Performance Evaluation, 65(3-4), pages 212–226, Elsevier, 2008.
[23] S.J. Na, K.H. Choi and D.R. Shin, Reputation-based Service Discovery in Multi-Agents Systems, Proc. of the IEEE Int. Work. on Semantic Computing and Applications, pages 326–339, Springer, 2010.
[24] S.D. Ramchurn, D. Huynh and N.R. Jennings, Trust in Multi-Agent Systems, Knowledge Engineering Review, 19(1), pages 1–25, 2004.
[25] P. Resnick, R. Zeckhauser, E. Friedman and K. Kuwabara, Reputation Systems, Communications of the ACM, 43(12), pages 45–48, ACM, 2000.
[26] J. Sabater-Mir and M. Paolucci, On Open Representation and Aggregation of Social Evaluations in Computational Trust and Reputation Models, International Journal of Approximate Reasoning, 46(3), pages 458–483, Elsevier, 2007.
[27] J. Sabater and C. Sierra, REGRET: Reputation in Gregarious Societies, Proc. of the 5th Int. Conf. on Autonomous Agents (AGENTS '01), pages 194–195, ACM Press, 2001.
[28] J. Sabater and C. Sierra, Review on Computational Trust and Reputation Models, Artificial Intelligence Review, 24, pages 33–60, Springer, 2005.
[29] D.H. Sarvapali, S.D. Ramchurn and N.R. Jennings, Trust in Multi-Agent Systems, The Knowledge Engineering Review, 19, pages 1–25, Elsevier, 2004.
[30] M.P. Singh, Trust as Dependence: a Logical Approach, Proc. of the 10th Int. Conf. on Autonomous Agents and MultiAgent Systems (AAMAS '11), pages 863–870, Int. Foundation for Autonomous Agents and Multiagent Systems, 2011.
[31] S. Song, K. Hwang, R. Zhou and Y.K. Kwok, Trusted P2P Transactions with Fuzzy Reputation Aggregation, IEEE Internet Computing, 9(6), pages 24–34, IEEE Press, 2005.
[32] P. Srivaramangai and R. Srinivasan, Reputation Based Two Way Trust Model for Reliable Transactions in Grid Computing, International Journal of Computer Science, 7(5), pages 33–39, 2010.
[33] M. Tavakolifard and S.J. Knapskog, A Probabilistic Reputation Algorithm for Decentralized Multi-Agent Environments, Proc. of the 4th Int. Work. on Security and Trust Management (STM 2008), Electronic Notes in Theoretical Computer Science, pages 139–149, Elsevier, 2009.
[34] Y.F. Wang, Y. Hori and K. Sakurai, Characterizing Economic and Social Properties of Trust and Reputation Systems in P2P Environment, Journal of Computer Science and Technology, 23(1), pages 129–140, Springer, 2008.
[35] H.C. Wong and K. Sycara, Adding Security and Trust to Multi-Agent Systems, Proc. of Autonomous Agents '99 (Work. on Deception, Fraud and Trust in Agent Societies), pages 149–161, 1999.