<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">TRR: An integrated Reliability-Reputation Model for Agent Societies</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">D</forename><surname>Rosaci</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">DIMET</orgName>
								<orgName type="institution">Università &quot;Mediterranea&quot; di Reggio Calabria</orgName>
								<address>
									<addrLine>Loc. Feo di Vito</addrLine>
									<postCode>89122</postCode>
									<settlement>Reggio Calabria</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">G</forename><forename type="middle">M L</forename><surname>Sarné</surname></persName>
							<email>sarne@unirc.it</email>
							<affiliation key="aff1">
								<orgName type="department">DIMET</orgName>
								<orgName type="institution">Università &quot;Mediterranea&quot; di Reggio Calabria</orgName>
								<address>
									<addrLine>Loc. Feo di Vito</addrLine>
									<postCode>89122</postCode>
									<settlement>Reggio Calabria</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">S</forename><surname>Garruzzo</surname></persName>
							<affiliation key="aff2">
								<orgName type="department">DIMET</orgName>
								<orgName type="institution">Università &quot;Mediterranea&quot; di Reggio Calabria</orgName>
								<address>
									<addrLine>Loc. Feo di Vito</addrLine>
									<postCode>89122</postCode>
									<settlement>Reggio Calabria</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">TRR: An integrated Reliability-Reputation Model for Agent Societies</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">1EF9653CF6FD239B14DE9A8110889EF4</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T08:25+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Several reliability-reputation models to support agents' decisions have been proposed in the past, and many of them combine reliability and reputation into a synthetic trust measure. In this context, we present a new trust model, called TRR, that considers, from a mathematical viewpoint, the interdependence between these two trust measures. This important feature of TRR is exploited to dynamically compute a parameter determining the importance of reliability with respect to reputation. Experiments performed on the well-known ART platform show the advantages, in terms of effectiveness, introduced by the TRR approach.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>In a multi-agent system (MAS) context, trust-based methodologies are recognized as an effective way to improve MAS performance <ref type="bibr" target="#b16">[17]</ref>, <ref type="bibr" target="#b27">[28]</ref>, <ref type="bibr" target="#b28">[29]</ref> by promoting social interactions, particularly when software agents are distributed in large-scale networks and reciprocally interact <ref type="bibr" target="#b22">[23]</ref>.</p><p>A trust relationship between two interacting agents (i.e., a trustor requesting a service from a trustee) can involve multiple dimensions, depending on the chosen perspective. For instance, in e-service domains, trust is defined as: "The quantified belief by a trustor with respect to the competence, honesty, security and dependability of a trustee within a specified context" <ref type="bibr" target="#b11">[12]</ref>. In particular, i) competence refers to correctly and efficiently performing the requested tasks; ii) honesty involves the absence of malicious behaviours; iii) security means the capability to manage private data, avoiding unauthorized access to them; iv) reliability is the degree of reliance placed on the provided services (e.g., the reliability of an e-Commerce agent differs depending on whether the transaction price is low or very high).</p><p>However, reliability is an individual trust measure, while for the whole community trust is measured by reputation, which is fundamental for deciding whether an agent is a reliable interlocutor in the absence of sufficient knowledge about it.</p><p>To use reliability and reputation measures in MASs, a main issue is how to suitably combine them to support agents' decisions. Indeed, when an agent a has to choose a possible partner, it exploits its reliability model, based on its past interactions with other agents. 
Besides, a has usually interacted with only a subset of the whole agent community, and often its past interactions with an agent are insufficient to obtain a representative trust measure. Thus a should also consider a reputation measure derived from a reputation model. If, for each candidate, both the reliability and reputation measures are combined into a synthetic preference score, then a can use it to choose its best partner. In this case, the main question is: "How much weight should be given to reliability with respect to reputation?". To answer this question, the authors of <ref type="bibr" target="#b9">[10]</ref> proposed a reliability-reputation model, called RRAF, but it has two main limitations, namely:</p><p>• The weight assigned to reliability vs. reputation is arbitrarily set by the user, based only on his/her experience, without considering the evolution of the system (i.e., it gives no relevance to the reliability changes due, for instance, to new information acquired about the other agents and to the increased expertise level about the domain of interest). • In RRAF, the trust measures perceived by each agent about the other agents are independent of each other. Indeed, let a and c be two agents that desire a trust opinion about agent b. Agent a (resp., c) forms its trust opinion τ ab (resp., τ cb ) by asking agent c (resp., a) for its opinion about b. It is reasonable that τ cb (resp., τ ab ) represents that opinion. This shows the dependence between the trust measures τ ab and τ cb . RRAF, instead, treats the opinion that c provides to a about b (and vice versa) as a personal suggestion, not necessarily coinciding with τ cb . 
A more accurate computation should consider these suggestions as coinciding with the trust measures that each agent has in the other agents, but this requires solving the mathematical relationships existing among all the trust measures.</p><p>To solve the two problems highlighted above, a new trust model, called Trust-Reliability-Reputation (TRR), is proposed in this paper. For each agent, this model builds a global trust evaluation merging both the agent's reliability and reputation measures into a single score (as in RRAF), but without using a fixed parameter to weight them (differently from RRAF). Instead, when an agent a computes its trust in another agent b, in TRR the weight representing the relevance given by a to reliability with respect to reputation is dynamically computed. This weight depends on the number of interactions performed between a and b and on the expertise of a in evaluating b. Moreover, TRR introduces a novel mechanism for computing reputation where, differently from RRAF, the reputation perceived by an agent a about another agent b is based on the global trust that each other agent of the MAS has in b. This way, the overall trust measures are reciprocally correlated and, we argue, more accurate than in RRAF, because the agent computing a trust measure receives from the other agents suggestions that are their actual trust measures instead of "arbitrary" values. Two considerations have to be made about this latter issue: i) our method of computing trust is applicable in MASs in which the agents are collaborative and share their trust measures with each other; ii) in order to apply TRR, each agent has to solve a linear system, instead of performing the simple computation required by the RRAF model.</p><p>To evaluate the performance of TRR with respect to RRAF, some tests have been executed on the well-known ART testbed <ref type="bibr" target="#b2">[3]</ref>. 
The experimental results show a significant advantage, in terms of performance, introduced by TRR, while the reduction in agent efficiency, due to the more complex computation of the trust measures, is practically negligible.</p><p>The paper is organized as follows. Section II discusses some related work. The multi-agent scenario is presented in Section III, while Section IV deals with the TRR reliability-reputation model. Section V proposes an experimental comparison between RRAF and TRR on the ART testbed and, finally, in Section VI some conclusions are drawn.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. RELATED WORK</head><p>In an open MAS, trust-based approaches are available for determining the best partner to interact with, on the basis of information derived from both direct experiences (i.e., reliability) and the opinions of others (i.e., reputation). However, each agent directly interacts with only a subset of the agent population and, therefore, should also exploit the opinions of the other members of the community to form a reliable opinion about someone. Unfortunately, in a virtual environment some malicious behaviours are possible, encouraged also by the ease of changing one's identity. To limit them, it is important to have an adequate number of agents providing their opinions, to avoid a partial depiction of agents' reputations <ref type="bibr" target="#b3">[4]</ref>, and to prevent identity changes with some form of penalization and/or, for instance, by adopting a Public Key Infrastructure <ref type="bibr" target="#b17">[18]</ref>, <ref type="bibr" target="#b34">[35]</ref>.</p><p>In the literature, a great number of metrics and approaches for measuring reliability and reputation have been proposed <ref type="bibr" target="#b8">[9]</ref>, <ref type="bibr" target="#b11">[12]</ref>, <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b18">[19]</ref>-<ref type="bibr" target="#b20">[21]</ref>, <ref type="bibr" target="#b23">[24]</ref>-<ref type="bibr" target="#b25">[26]</ref>, <ref type="bibr" target="#b27">[28]</ref>, <ref type="bibr" target="#b29">[30]</ref>. Some of them integrate reliability and reputation into a synthetic measure <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b12">[13]</ref>, <ref type="bibr" target="#b15">[16]</ref>, but leave to the user the task of weighting reliability with respect to reputation. 
To compare such trust strategies and their computational costs in a competitive environment, the Agent Reputation and Trust (ART) testbed platform is available <ref type="bibr" target="#b2">[3]</ref>. In the following, we examine the approaches that, to the best of our knowledge, come closest to the material presented in this paper, pointing out differences and similarities with our proposal.</p><p>Trust and reputation are represented in <ref type="bibr" target="#b32">[33]</ref> by introducing a probabilistic reputation approach into the Ntropi model <ref type="bibr" target="#b0">[1]</ref>, which is truly decentralized, without reliance on any third party, and allows all the entities to freely decide how to trust. Reputation and experiential information are combined in Ntropi into a single trust measure, exploited to decide whether to perform an interaction. An agent then rates this experience and adjusts its trust values based on the differences with the recommended ratings. In <ref type="bibr" target="#b32">[33]</ref>, a Dirichlet reputation algorithm <ref type="bibr" target="#b13">[14]</ref> is added to the Ntropi model to set its parameters, by using a Maximum Likelihood Estimation method on the observed data. Again for distributed MASs, in <ref type="bibr" target="#b10">[11]</ref> an approach organized in time steps is presented, which deals with uncertainty and ignorance and takes into account the number of interactions, data dispersion and variability. It computes trust based on three agent expectations, namely: past experiences with that agent (direct); advertisements received from that agent and discrepancies between experience and past advertisements (advertisement-based); recommendations received from others about that agent and discrepancies between experience and past recommendations (recommendation-based). A Global Trust measure aggregates the three components into a single belief referring to the next time step. 
The system has been tested on the ART testbed <ref type="bibr" target="#b2">[3]</ref>.</p><p>FIRE <ref type="bibr" target="#b12">[13]</ref> is conceived for open MASs where agents are benevolent and honest in exchanging information. It considers several trust and reputation sources, namely: Interaction trust, represented by the agent's direct experience; Role-based trust, taking into account the agents' relationships; Witness reputation, considering attestations about the behaviour of an agent; Certified reputation about an agent, witnessed by third parties suggested by the agent itself. As a result, FIRE works correctly in many usual situations, but it requires a lot of parameters to be set. REGRET <ref type="bibr" target="#b26">[27]</ref> is a modular trust and reputation system for cooperative MASs exploiting impressions about other agents derived from both direct experiences (called direct trust) and a reputation model aggregating three types of reputation (i.e.: Witness, based on the information coming from witnesses; Neighbourhood, calculated by using social relations; System, depending on roles and general properties). REGRET considers the witnesses' credibility, and each agent can neglect one, more or all of the reputation components. Finally, a common semantics, called the ontological dimension, models the agents' personal points of view, considering the multi-dimensional aspects of reputation.</p><p>Within a grid context, in <ref type="bibr" target="#b31">[32]</ref> the trust of both clients and providers is computed, using both direct and indirect information and removing biased feedback by using a rank correlation method. Direct trust is computed directly by the initiator and is dominant over indirect trust, which is measured from the feedback received from agents (in the same or other domains) and weighted according to their credibility, determined on criteria such as similarity, activity, specificity, etc. 
Moreover, the reputations of the client and the provider are calculated from different parameters, since their relationship is asymmetric. In the presence of uncertain and incomplete information, a fuzzy approach can be used, as in <ref type="bibr" target="#b30">[31]</ref>, where the system collects and weights the opinions of each user about the other users to obtain aggregated trustworthiness scores. Social networks and probabilistic trust models are examined in <ref type="bibr" target="#b7">[8]</ref> for different contexts and settings, but the authors conclude that in several scenarios these techniques exhibit unsatisfactory performance.</p><p>Trust has been particularly investigated for file sharing services over P2P networks <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b18">[19]</ref>, <ref type="bibr" target="#b33">[34]</ref>. In this context, the EigenTrust algorithm has been applied in <ref type="bibr" target="#b15">[16]</ref>, where each peer rates its transactions to build a trust representation of the other peers, called Local Trust. EigenTrust assumes trust transitivity in order to compute the Global Trust values. Each peer collects from the other peers their Local Trust values which, suitably weighted by the peers' trustworthiness, are aggregated in a trust matrix whose principal eigenvector asymptotically yields the Global Trust values. The presence of pre-trusted users, who are always trusted, can minimize the influence of malicious peers performing collusive activities.</p><p>Nowadays, the opportunities offered by wireless technologies to work in mobile contexts, even in the absence of stable connections, make trusting the counterpart highly relevant. For instance, Celltrust <ref type="bibr" target="#b17">[18]</ref> manages direct and reputation information (suitably weighted) in a centralized manner, by using cryptographic techniques. 
A Bayesian approach is used in <ref type="bibr" target="#b5">[6]</ref>, where reputation exploits a "second-hand" criterion in which transitive reputation is accepted only if it agrees with the direct ratings. To contrast liars in ad hoc networks, in <ref type="bibr" target="#b21">[22]</ref> a deviation test is adopted, independently of the specific implementation, within a stochastic process; tests show, however, that this model fails when the number of liars exceeds a certain threshold.</p><p>The cited systems trust an agent by exploiting both direct experiences and information about its reputation within the community, as in TRR. In <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b32">[33]</ref> the trust in an agent is computed, as in TRR, based only on individual criteria but, for instance, in REGRET <ref type="bibr" target="#b26">[27]</ref> a common ontology is adopted to unify different trust representations, and in <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b15">[16]</ref>, <ref type="bibr" target="#b17">[18]</ref>, <ref type="bibr" target="#b21">[22]</ref> trust is domain dependent. To counter malicious agents, different strategies are adopted: TRR and <ref type="bibr" target="#b15">[16]</ref>, <ref type="bibr" target="#b17">[18]</ref>, <ref type="bibr" target="#b31">[32]</ref> suitably weight the reputation sources, and <ref type="bibr" target="#b15">[16]</ref> also exploits always-trusted peers, while in <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b10">[11]</ref>, <ref type="bibr" target="#b32">[33]</ref> discrepancies between computed trust and observed behaviour are considered to limit the effects of dishonest behaviours; finally, other systems adopt a PKI approach (which is an orthogonal issue for many trust systems).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. THE MULTI-AGENT COMMUNITY</head><p>This section describes the TRR scenario. Let S be a list of service categories and let C be a software agent community, where each agent a ∈ C can request a service from any other agent b ∈ C which, in its turn, can either accept or reject the request. If the request is accepted and the service is consumed, then agent a can evaluate its satisfaction and update its reliability model for b.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Reliability</head><p>The approach presented in this paper is independent of the particular reliability model chosen by each agent; each agent has its own reliability model, independently of the other agents. The reliability of agent a with respect to agent b and the service category γ ∈ S can be represented by the tuple ρ γ ab = ̺ γ ab , i γ ab , e γ , where:</p><formula xml:id="formula_0">• ̺ γ ab ∈ [0, 1]</formula><p>is the reliability value that a gives to b for the services of the category γ, where ̺ γ ab = 0 (resp., 1) means that b is totally unreliable (resp., reliable).</p><p>• i γ ab is the number of interactions that a and b performed in the past with respect to the services of the category γ. • e γ ∈ [0, 1] is the expertise level that a assumes to have in evaluating the services of the category γ, which depends on the knowledge acquired by a about the category γ. In other words, the TRR approach does not assume that the reliability perceived by a about b is a simple scalar value; rather, for each category γ a different reliability is possible. To this aim, it also considers both the knowledge that a has of b (represented by i γ ab ) in interactions associated with the category γ and the expertise level that a assumes to have about the services of the category γ (represented by e γ ).</p></div>
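To fix ideas, the reliability tuple ρ γ ab = ̺ γ ab , i γ ab , e γ can be rendered as a small data structure. This is our own Python sketch (all names are hypothetical, not part of the paper's formal model):

```python
from dataclasses import dataclass

@dataclass
class Reliability:
    """Reliability tuple of agent a w.r.t. agent b for one service category."""
    value: float        # varrho in [0, 1]: 0 = totally unreliable, 1 = fully reliable
    interactions: int   # i: number of past a-b interactions in this category
    expertise: float    # e in [0, 1]: a's expertise in evaluating this category

# example: a moderately reliable partner, seen 12 times, judged with expertise 0.6
rho = Reliability(value=0.7, interactions=12, expertise=0.6)
```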
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Reputation</head><p>Let π γ ab be the reputation of b in the whole community as perceived by a, with respect to services belonging to the category γ. To obtain it, a should ask each other agent of the community for an opinion about b in providing good services in the category γ. It is important to remark that in the TRR scenario several reputations of an agent b exist, since each agent has its personal perception of b's reputation. This way, the reputation π γ ab is a function (F ) of the set of opinions {o γ cb }, where o γ cb is the opinion that each agent c gives to a about b in providing good services of the category γ. Formally:</p><formula xml:id="formula_1">π γ ab = F ({o γ cb })<label>(1)</label></formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Trust</head><p>Let τ γ ab be the trust measure that an agent a assigns to another agent b in a given category γ. In most of the approaches proposed in the past, this measure is obtained by combining in some way the reliability (ρ γ ab ) and the reputation (π γ ab ) measures, in order to take into account both the direct knowledge that a has about b's capabilities and the suggestions that the other agents give to a about b. Some of these approaches also require specifying a coefficient (that we call α), ranging in [0, 1], that expresses the relevance assigned to reliability with respect to reputation; vice versa, the relevance of reputation with respect to reliability is given by 1 − α. In past approaches, this coefficient α is arbitrarily fixed to a given value according to the user's preference. Differently, we assume that α increases with: i) the number of interactions i γ ab carried out by agent a with agent b for the category γ, since the direct knowledge of a improves as the number of interactions increases; ii) the expertise level e γ that agent a has about the category γ, so that the more expert agent a is, the greater its confidence in judging b's capabilities and, consequently, in computing b's reliability. Our viewpoint redefines the α coefficient as an α γ ab coefficient, to remark its dependence on the agents a and b and on the category γ.</p><p>To evaluate a reasonable value for α γ ab , we propose to exploit a direct relationship with both the number of interactions i γ ab and the expertise e γ , such that α γ ab is 1 only if a is completely expert about the category γ and the number of interactions i γ ab is higher than or equal to a suitable threshold N (set by the system administrator). If i γ ab is higher than or equal to N , the parameter α γ ab is simply equal to e γ . 
Otherwise, if i γ ab is smaller than N , the parameter α γ ab linearly depends on e γ and i γ ab . More formally:</p><formula xml:id="formula_2">α γ ab = (e γ • i γ ab )/N if i γ ab &lt; N ; e γ if i γ ab ≥ N (2)</formula><p>Therefore, the trust measure can be generally expressed as a function G depending on the reliability, the reputation and the α γ ab coefficient:</p><formula xml:id="formula_3">τ γ ab = G(ρ γ ab , π γ ab , α γ ab )<label>(3)</label></formula><p>where:</p><formula xml:id="formula_4">α γ ab = α γ ab (i γ ab , e γ )<label>(4)</label></formula></div>
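Equation 2 and the weighted combination it feeds can be sketched in a few lines of Python (function and parameter names are our own illustration, not from the paper):

```python
def alpha(expertise: float, interactions: int, N: int) -> float:
    """Equation 2: weight of reliability vs. reputation.

    Grows linearly with the number of interactions until the
    threshold N is reached, then equals the expertise level."""
    if interactions < N:
        return expertise * interactions / N
    return expertise

def trust(reliability: float, reputation: float, a: float) -> float:
    """Weighted mean of reliability and reputation with weight a = alpha."""
    return a * reliability + (1 - a) * reputation

# alpha reaches 1 only with full expertise and at least N interactions
print(alpha(1.0, 10, 10))   # 1.0
print(alpha(0.5, 5, 10))    # 0.25
```

Note that, for a fixed expertise, α is continuous at i = N: the linear branch evaluates to exactly e γ when i reaches N.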
<div xmlns="http://www.tei-c.org/ns/1.0"><head>D. An example of TRR model</head><p>The TRR scenario covers most of the past trust approaches. For instance, in the RRAF approach <ref type="bibr" target="#b9">[10]</ref>, the reliability ρ γ ab depends only on the value of ̺ γ ab , since the parameters i γ ab and e γ are not considered. The reliability is updated each time agent b provides a service to a: to compute the new reliability value, the satisfaction expressed by a for that service is averaged with the current reliability value. Moreover, the reputation π γ ab is obtained by a by asking all the other agents for an opinion about b in providing services of the category γ and averaging those opinions. Finally, the trust value τ γ ab is computed as a weighted mean of reliability and reputation, where the reliability is weighted by a parameter α, set by the agent's owner, and the reputation is weighted by (1 − α). Note that in RRAF the parameter α does not depend on either the category γ or the agent b, but is the same for all the agents and the categories.</p></div>
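The RRAF computations summarized above can be sketched as follows. This is a minimal reconstruction from the description given here, not RRAF's actual code, and the function names are hypothetical:

```python
def rraf_update_reliability(current: float, satisfaction: float) -> float:
    # the satisfaction for the new service is averaged with the current value
    return (current + satisfaction) / 2

def rraf_reputation(opinions: list[float]) -> float:
    # plain (unweighted) average of the opinions collected from the other agents
    return sum(opinions) / len(opinions)

def rraf_trust(reliability: float, reputation: float, alpha: float) -> float:
    # alpha is fixed by the agent's owner, identical for every agent and category
    return alpha * reliability + (1 - alpha) * reputation
```

The contrast with TRR is visible here: α arrives as a constant argument rather than being derived from interactions and expertise.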
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. THE TRUST-REPUTATION-RELIABILITY MODEL</head><p>In this section, the functions F and G chosen to define the Trust-Reputation-Reliability (TRR) model are described. Fig. <ref type="figure">1:</ref> The agent 1 evaluates the reputation of the agent 2 based on the suggestions of the agents 3, 4 and 5</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Reputation in the TRR model</head><p>Let π γ ab be the reputation that, in TRR, an agent a assigns to another agent b for a given category γ. It is obtained as the weighted mean of all the trust measures τ γ cb that each agent c (different from a and b) associates with b. In other words, the suggestion about b that each agent c gives to a is represented by the trust that c has in b. This suggestion coming from c is weighted by the trust measure τ γ ac that a has in c. Formally, the function F defined in Equation 1 becomes:</p><formula xml:id="formula_5">π γ ab = c∈C−{a,b} τ γ cb • τ γ ac c∈C−{a,b} τ γ ac<label>(5)</label></formula><p>For instance, Figure <ref type="figure">1</ref> depicts a scenario in which the agent 1 has to evaluate the reputation π 12 of the agent 2 (the category is omitted for simplicity). The agent 1 receives from the agents 3, 4 and 5 "suggestions" about the agent 2 (i.e., the trust that they assign to it), weighted by the agent 1 with the trust measures τ 13 , τ 14 and τ 15 that it assigns to the agents 3, 4 and 5, respectively. Thus, the weighted mean giving the reputation assigned by the agent 1 to the agent 2 is: π 12 = (0.8 • 0.2 + 0.2 • 0.9 + 0.7 • 0.3)/(0.2 + 0.9 + 0.3) = 0.39</p><p>We remark that the high values suggested by the agents 3 and 5 (τ 32 = 0.8 and τ 52 = 0.7) have been only marginally considered, due to the small trust that the agent 1 assigns to them, while the computed reputation is much closer to the suggestion given by the agent 4, to which the agent 1 assigns a high trust (τ 14 = 0.9).</p></div>
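The weighted mean of Equation 5 can be sketched as follows; the function name is ours, and the numbers reproduce the Figure 1 example:

```python
def trr_reputation(suggested_trust, weights):
    """Equation 5: mean of the trust values tau_cb suggested by the
    other agents c, weighted by a's own trust tau_ac in each of them."""
    num = sum(t_cb * t_ac for t_cb, t_ac in zip(suggested_trust, weights))
    return num / sum(weights)

# Figure 1: agents 3, 4, 5 suggest tau_32=0.8, tau_42=0.2, tau_52=0.7,
# weighted by agent 1's trust tau_13=0.2, tau_14=0.9, tau_15=0.3
pi_12 = trr_reputation([0.8, 0.2, 0.7], [0.2, 0.9, 0.3])
print(round(pi_12, 2))  # 0.39
```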
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Trust in the TRR model</head><p>In order to compute the trust τ γ ab that the agent a assigns to the agent b in the category γ, we choose to use a weighted mean of the reliability value ̺ γ ab and the reputation value π γ ab , using the parameter α γ ab to weight the reliability value and (1 − α γ ab ) to weight the reputation. This way, the function G of Equation 3 has the following form:</p><formula xml:id="formula_6">τ γ ab = α γ ab • ̺ γ ab + (1 − α γ ab ) • π γ ab<label>(6)</label></formula><p>and, by considering Equation 5, it becomes:</p><formula xml:id="formula_7">τ γ ab = α γ ab • ̺ γ ab + (1 − α γ ab ) • c∈C−{a,b} τ γ cb • τ γ ac c∈C−{a,b} τ γ ac<label>(7)</label></formula><p>This equation, written for all the n agents and all the m categories, respectively belonging to C and S, forms a system of m • n • (n − 1) linear equations in the m • n • (n − 1) variables τ γ ab . This system is equivalent to that described in <ref type="bibr" target="#b4">[5]</ref> and admits only one solution.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>V. AN EXPERIMENTAL COMPARISON BETWEEN RRAF AND TRR</head><p>In this section, we perform some experiments using the ART platform. On ART, each agent takes the role of an art appraiser who gives appraisals on paintings presented by its clients. In order to fulfil its appraisals, each agent can ask other agents for opinions. These agents are also in competition among themselves and thus may lie in order to fool opponents. The game is supervised by a simulator that runs synchronously, step by step, and can be described as follows:</p><p>• The clients, simulated by the simulator, request opinions on paintings from the appraiser agents. Each painting belongs to an era. For each appraisal, an agent earns a given amount of money that is stored in its bank amount BA.</p><p>• Each agent has a specific expertise level in each era, assigned by the simulator. 
The error made by an agent while appraising a painting depends on both this expertise and the price the appraiser decides to spend for that appraisal.</p><p>• An agent cannot appraise its own paintings, but has to ask other agents for opinions. Each opinion has a fixed cost for the agent. • Each agent can obtain recommendations about another agent from other players. Each recommendation has a given price. This way, the agent can build a reputation model of the other agents. • Agents weight each received opinion in order to compute the final evaluation of the paintings. • At the end of each step, the accuracies of the agents' final evaluations are compared to each other, in order to determine the client share for each agent during the next step. In other words, the most accurate agent receives more clients. • At the end of each step, the simulator reveals the real value of each painting, thus allowing each agent to update its reliability and reputation models. • At the end of the game, the winner of the competition is the agent having the highest bank amount BA. The purpose of our experiment is to analyze the improvements that the TRR model introduces over the RRAF model. We have built two agents, implementing the RRAF and TRR models respectively, and we have run some games in the presence of different percentages P of unreliable agents. In particular, in the performed experiment, 5 different agent populations characterized by a size of N = 100 agents and a different percentage P of unreliable agents have been considered. Namely, the 5 values of P we have considered are 10%, 30%, 50%, 70% and 90%. For each of these values, we have run an ART game, where the RRAF agent participates in each game using the parameter α = 0.71. This value was chosen according to <ref type="bibr" target="#b9">[10]</ref>, where the RRAF agent obtained the maximum bank amount using α = 0.71 under the same conditions. 
For each game, besides the RRAF and TRR agents, a population of 98 Simplet agents have run as competitors. Simplet is an agent that participated in the 2008 ART Competition, whose software can be downloaded from the ART site <ref type="bibr" target="#b2">[3]</ref>, and which uses a reliability-reputation model. We have configured two different versions of the Simplet agent:</p><p>• the former with a low availability to pay for opinions, thus generating unreliable answers to the opinion requests. This low availability is represented by the internal ART parameter c g = 1.</p><p>• the latter with a high availability to pay for opinions, thus characterized by the parameter c g = 15. Figure <ref type="figure" target="#fig_1">2</ref> reports the results of this experiment, in terms of the variation of the bank amount BA of both the RRAF and TRR agents against the different percentages P of unreliable agents.</p><p>We note that, while the RRAF agent reaches its maximum bank amount for P = 50%, as expected from <ref type="bibr" target="#b9">[10]</ref>, its performance decreases for the other values of P . This is due to the following reasons: i) the RRAF agent is not able to recognize unreliable agents effectively, and ii) it incurs useless costs to ask for recommendations when the population is reliable (P &lt; 50%). Differently from the RRAF agent, which has an α value that is fixed during the game for all the agents, TRR assigns a different α value for each era of each agent in the community, and is also able to modify these values at each step of the game. This way, TRR gradually learns to recognize reliable agents, thus saving recommendation costs. Moreover, in TRR the reliability is also a function of the number of interactions (i γ ab ) between trustor and trustee, and of the expertise of the trustor (e γ ) in evaluating the services. 
As a consequence, TRR is able to better evaluate the reliability of the other agents, thus obtaining more significant results in terms of bank amount. Finally, Figure <ref type="figure" target="#fig_1">2</ref> shows that the performance of TRR is not influenced by the presence of unreliable agents.</p></div>
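The weighted reliability-reputation combination discussed above can be made concrete with a minimal sketch. The weighted sum mirrors the RRAF/TRR combination described in the text; the saturating form of `trr_alpha` (growing with the number of direct interactions and the trustor's expertise, so that an experienced trustor relies more on direct reliability than on third-party reputation) is our own illustrative assumption, not the paper's exact formula:

```python
def combined_trust(alpha: float, reliability: float, reputation: float) -> float:
    """Trust as a weighted combination of reliability and reputation.

    RRAF keeps alpha fixed for the whole game (e.g. alpha = 0.71);
    TRR recomputes it per agent and per era at every step.
    """
    assert 0.0 <= alpha <= 1.0
    return alpha * reliability + (1.0 - alpha) * reputation


def trr_alpha(n_interactions: int, expertise: float, k: float = 1.0) -> float:
    """Illustrative dynamic weight (an assumption, not the paper's rule).

    The weight of direct reliability saturates towards 1 as the trustor
    accumulates interactions with the trustee, scaled by its own expertise.
    """
    x = n_interactions * expertise
    return x / (x + k)


# RRAF-style agent: fixed alpha for every trustee and every step.
rraf_trust = combined_trust(0.71, reliability=0.9, reputation=0.4)

# TRR-style agent: a well-known partner is judged mostly by observed
# reliability, a new one mostly by reputation gathered from the community.
alpha_early = trr_alpha(n_interactions=1, expertise=0.5)   # low weight
alpha_late = trr_alpha(n_interactions=50, expertise=0.5)   # close to 1
```

The saving of recommendation costs noted above follows directly: once `alpha` is close to 1 for a given trustee, the reputation term contributes little and the agent can skip buying recommendations about it.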
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VI. CONCLUSIONS</head><p>The large number of trust-based approaches to MASs that have emerged in recent years implies the necessity of clearly understanding the advantages and the limitations of using trust measures to improve the effectiveness of these systems. In particular, the two main measures considered in the literature, i.e. reliability and reputation, should be suitably combined to obtain a trust measure that supports agent decisions.</p><p>In the past, we proposed a framework, called RRAF, to build competitive agents provided with an internal reliability-reputation model, where the relevance of reliability with respect to reputation is given by a suitable parameter. However, RRAF introduces some simplifications in computing the trust, which affect the effectiveness of its practical application.</p><p>In this paper, we have proposed the TRR model to overcome the RRAF limitations. The TRR model i) dynamically computes the parameter representing the importance of the reliability with respect to the reputation, based on the evolution of the knowledge acquired by the agents over time, and ii) models the interdependence between the trust measures of the agents, considering that, when an agent a computes the trust measure about an agent b, the computation exploits the trust measures about b coming from every other agent of the community.</p><p>The TRR model has been tested by comparing it with RRAF on the standard testbed ART. The experimental results clearly show a significant improvement introduced by TRR in the effectiveness of the agent when computing the trust measures. We argue that such improvement is strictly related to the capability of the trust model to capture the interdependence of the trust measures, highlighting the social aspect of the community in which the agents interact.</p><p>As for our ongoing research, we are developing more advanced studies of such social aspects. 
In particular, we plan to analyze how the characteristics of the agent population, e.g. honesty, competence, privacy requirements, etc., can be considered in designing a more accurate trust model.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 :</head><label>2</label><figDesc>Fig. 2: Variation of the bank amount BA against the percentage of unreliable agents P , with population size N = 100.</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A Distributed Trust Model</title>
		<author>
			<persName><forename type="first">A</forename><surname>Abdul-Rahman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hailes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 1997 Work. on New Security Paradigms (NSPW &apos;97)</title>
				<meeting>of the 1997 Work. on New Security Paradigms (NSPW &apos;97)</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="1997">1997</date>
			<biblScope unit="page" from="48" to="60" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Managing Trust in peer-2-peer Information Systems</title>
		<author>
			<persName><forename type="first">K</forename><surname>Aberer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Despotovic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 10th Int Conf on Information and knowledge management</title>
				<meeting>of the 10th Int Conf on Information and knowledge management</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="310" to="317" />
		</imprint>
	</monogr>
	<note>CIKM &apos;01)</note>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="http://megatron.iiia.csic.es/art-testbed/" />
		<title level="m">ART-Testbed</title>
				<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Boosting Cooperation by Evolving Trust Applied</title>
		<author>
			<persName><forename type="first">A</forename><surname>Birk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="769" to="784" />
			<date type="published" when="2000">2000</date>
			<publisher>Taylor &amp; Francis</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Modeling Cooperation in Multi-Agent Communities, Cognitive Systems Research</title>
		<author>
			<persName><forename type="first">F</forename><surname>Buccafurri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Palopoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Rosaci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M L</forename><surname>Sarné</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Elsevier</publisher>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="171" to="190" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A Robust Reputation System for P2P and Mobile Ad-hoc Networks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Buchegger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Le Boudec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 2nd Workshop on the Economics of Peer-to-Peer Systems</title>
				<meeting>of the 2nd Workshop on the Economics of Peer-to-Peer Systems</meeting>
		<imprint>
			<publisher>P2PEcon</publisher>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Burton</surname></persName>
		</author>
		<ptr target="http://www.peerfear.org/papers/openprivacy-reputation.pdf" />
		<title level="m">The Design of the Openprivacy Distributed Reputation System</title>
				<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">P2P Reputation Management: Probabilistic Estimation vs. Social Networks</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Despotovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Aberer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Networks</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="485" to="500" />
			<date type="published" when="2006">2006</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">From dependence networks to trust networks</title>
		<author>
			<persName><forename type="first">R</forename><surname>Falcone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Castelfranchi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 11th AAMAS Workshop on Trust in Agent Societies (Trust &apos;09)</title>
				<meeting>of the 11th AAMAS Workshop on Trust in Agent Societies (Trust &apos;09)</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="13" to="26" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The Roles of Reliability and Reputation in Competitive Multi Agent Systems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Garruzzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Rosaci</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the COOPIS Conf. 2010</title>
				<meeting>of the COOPIS Conf. 2010</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">6426</biblScope>
			<biblScope unit="page" from="439" to="442" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">An Anticipatory Trust model for Open Distributed Systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gómez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Carbó</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Benac-Earle</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">LNAI</title>
		<imprint>
			<biblScope unit="volume">4250</biblScope>
			<biblScope unit="page" from="307" to="324" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Trust Management Tools for Internet Applications</title>
		<author>
			<persName><forename type="first">T</forename><surname>Grandison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sloman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 1st Int. Conf. on Trust Management</title>
				<meeting>of the 1st Int. Conf. on Trust Management</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="91" to="107" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">An Integrated Trust and Reputation Model for Open Multi-Agent Systems</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">D</forename><surname>Huynh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Jennings</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Shadbolt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Autonomous Agents and Multi-Agent Systems</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="119" to="154" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Dirichlet Reputation Systems</title>
		<author>
			<persName><forename type="first">A</forename><surname>Jøsang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Haller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 2nd Int. Conf. on Availability, Reliability and Security (ARES)</title>
				<meeting>of the 2nd Int. Conf. on Availability, Reliability and Security (ARES)</meeting>
		<imprint>
			<publisher>IEEE Press</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="112" to="119" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A Survey of Trust and Reputation Systems for Online Service Provision</title>
		<author>
			<persName><forename type="first">A</forename><surname>Jøsang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ismail</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Boyd</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Decision Support System</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="618" to="644" />
			<date type="published" when="2005">2005</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">The Eigentrust Algorithm for Reputation Management in P2P Networks</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">D</forename><surname>Kamvar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Schlosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Garcia-Molina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 12th Int. Conf. on World Wide Web, (WWW &apos;03)</title>
				<meeting>of the 12th Int. Conf. on World Wide Web, (WWW &apos;03)</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="640" to="651" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Maintenance-based Trust for Multi-Agent Systems</title>
		<author>
			<persName><forename type="first">B</forename><surname>Khosravifar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gomrokchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bentahar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Thiran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 8th Int. Conf. on Autonomous Agents and Multiagent Systems</title>
				<meeting>of the 8th Int. Conf. on Autonomous Agents and Multiagent Systems</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="1017" to="1024" />
		</imprint>
	</monogr>
	<note>Int. Foundation for Autonomous Agents and Multiagent Systems</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">CellTrust: a Reputation Model for C2C Commerce</title>
		<author>
			<persName><forename type="first">G</forename><surname>Lax</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M L</forename><surname>Sarné</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronic Commerce Research</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="193" to="216" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Taxonomy of Trust: Categorizing P2P Reputation Systems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Marti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Garcia-Molina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Networks</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="472" to="484" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">A Survey of Trust Use and Modeling in Current Real Systems</title>
		<author>
			<persName><forename type="first">P</forename><surname>Massa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Trust in E-Services: Technologies, Practices and Challenges</title>
				<editor>
			<persName><forename type="first">R</forename><surname>Song</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Korba</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Yee</surname></persName>
		</editor>
		<imprint>
			<publisher>Idea Group Publishing</publisher>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Notions of Reputation in Multi-Agents Systems: a Review</title>
		<author>
			<persName><forename type="first">L</forename><surname>Mui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mohtashemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Halberstadt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the First Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS &apos;02)</title>
				<meeting>of the First Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS &apos;02)</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="280" to="287" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Analysis of a Reputation System for Mobile Ad-Hoc Networks with Liars</title>
		<author>
			<persName><forename type="first">J</forename><surname>Mundinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Le Boudec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Performance Evaluation</title>
		<imprint>
			<biblScope unit="volume">65</biblScope>
			<biblScope unit="issue">3-4</biblScope>
			<biblScope unit="page" from="212" to="226" />
			<date type="published" when="2008">2008</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Reputation-based Service Discovery in Multi-Agents Systems</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Na</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">H</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Shin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc of the IEEE Int. Work. on Semantic Computing and Applications</title>
				<meeting>of the IEEE Int. Work. on Semantic Computing and Applications</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="326" to="339" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Trust in Multi-Agent Systems</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">D</forename><surname>Ramchurn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Huynh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Jennings</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge Engineering Review</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="25" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Resnick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zeckhauser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Friedman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kuwabara</surname></persName>
		</author>
		<title level="m">Reputation Systems, Communications of the ACM</title>
				<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="page" from="45" to="48" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">On Open Representation and Aggregation of Social Evaluations in Computational Trust and Reputation Models</title>
		<author>
			<persName><forename type="first">J</forename><surname>Sabater-Mir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Paoulucci</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Approximate Reasoning</title>
		<imprint>
			<biblScope unit="volume">46</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="458" to="483" />
			<date type="published" when="2007">2007</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">REGRET: Reputation in Gregarious Societies</title>
		<author>
			<persName><forename type="first">J</forename><surname>Sabater</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sierra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 5th Int. Conf. on Autonomous Agents, (AGENTS &apos;01)</title>
				<meeting>of the 5th Int. Conf. on Autonomous Agents, (AGENTS &apos;01)</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="194" to="195" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Review on Computational Trust and Reputation Models</title>
		<author>
			<persName><forename type="first">J</forename><surname>Sabater</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sierra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence Review</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="33" to="60" />
			<date type="published" when="2005">2005</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Trust in Multi-Agent Systems</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">H</forename><surname>Sarvapali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">D</forename><surname>Ramchurn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Jennings</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Knowledge Engineering Review</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="1" to="25" />
			<date type="published" when="2004">2004</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Trust as dependence: A logical approach</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Singh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 10th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS &apos;11)</title>
				<meeting>of the 10th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS &apos;11)</meeting>
		<imprint>
			<publisher>Int. Foundation for Autonomous Agents and Multiagent Systems</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="863" to="870" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Trusted P2P Transactions with Fuzzy Reputation Aggregation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hwang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">K</forename><surname>Kwok</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Internet Computing</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="24" to="34" />
			<date type="published" when="2005">2005</date>
			<publisher>IEEE Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Reputation Based Two Way Trust Model for Reliable Transactions in Grid Computing</title>
		<author>
			<persName><forename type="first">P</forename><surname>Srivaramangai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rengaramanujam</forename><surname>Srinivasan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Science</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="33" to="39" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">A Probabilistic Reputation Algorithm for Decentralized Multi-Agent Environments</title>
		<author>
			<persName><forename type="first">M</forename><surname>Tavakolifard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Knapskog</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 4th Int. Work. on Security and Trust Management (STM 2008) -Electronic Notes in Theoretical Computer Science</title>
				<meeting>of the 4th Int. Work. on Security and Trust Management (STM 2008) -Electronic Notes in Theoretical Computer Science</meeting>
		<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="139" to="149" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Characterizing Economic and Social Properties of Trust and Reputation Systems in P2P Environment</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">F</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sakurai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Computer Science and Technology</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="129" to="140" />
			<date type="published" when="2008">2008</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Adding Security and Trust to Multi-Agent Systems</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">C</forename><surname>Wong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sycara</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of Autonomous Agents &apos;99 (Work. on Deception, Fraud and Trust in Agent Societies)</title>
				<imprint>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page" from="149" to="161" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
