Users' Collaboration as a Driver for Reputation System Effectiveness: a Simulation Study

Guido Boella and Marco Remondino
Department of Computer Science, University of Turin
boella@di.unito.it, remond@di.unito.it

Gianluca Tornese (for the implementation)
gianluca.tornese@libero.it

Abstract

Reputation management is about evaluating an agent's actions and other agents' opinions about those actions, reporting on those actions and opinions, and reacting to that report, thus creating a feedback loop. This social mechanism has been successfully used, through Reputation Management Systems (RMSs), to classify agents within normative systems. Most RMSs rely on the feedback given by the members of the social network in which the RMS itself operates. In this way, the reputation index can be seen as an endogenous and self-produced indicator, created by the users for the users' benefit. This implies that users' participation and collaboration is a key factor for the effectiveness of a RMS. In this work the above factor is explored by means of an agent based simulation, and is tested on a P2P network for file sharing.

1. Introduction

In everyday life, when a choice subject to limited resources (for instance money, time, and so on) must be made, due to the overwhelming number of possibilities that people have to choose from, something is needed to help them make choices. People often follow the advice of others when it comes to which products to buy, which movies to watch, which music to listen to, which websites to visit, and so on. This is a social attitude that builds on others' experience. People base their judgment of whether or not to follow this advice partially upon the other person's reputation for helping to find reliable and useful information, even with all the noise. Using and building upon early collaborative filtering techniques, reputation management software gathers ratings for people, companies, and information sources.

Since this is a distributed way of computing reputation, it is implicitly founded on two main assumptions:

1) The correctness of shared information
2) The participation of users to the system

While the negation of the first could be considered as an attack to the system itself, performed by users trying to crash it, and its occurrence is quite rare, the second factor is often underestimated when designing a collaborative RMS. Users without a vision of the macro level often use the system, but simply forget to collaborate, since this seems to them a waste of time. The purpose of the present work is to give a qualitative and, when possible, quantitative evaluation of the collaborative factor in RMSs, by means of an empirical analysis conducted via an agent based simulation. Thus, the main research question is: what is the effectiveness of a RMS, when changing the collaboration rate coming from the involved users?

In order to answer this question, in the paper an agent based model is introduced, representing a peer-to-peer (P2P) network for file sharing. A basic RMS is applied to the system, in order to help users choose the best peers to download from. In fact, some of the peers are malicious, and they try to exploit the way in which the P2P system rewards users for sharing files, by uploading inauthentic resources when they do not own the real ones. The model is described in detail and the results are evaluated through a multi-run ceteris paribus technique, in which only one setting is changed at a time. In particular, the most important parameters which will be compared, to evaluate the effectiveness of the RMS, are: verification of the files, performed by the users, and the negative payoff, given in case a resource is reported as being inauthentic.
The verification of the files, i.e. the users' collaboration, is an exogenous factor for the RMS, while the negative payoff is an endogenous and thus directly controllable factor, from the point of view of a RMS's designer. The P2P framework has been chosen since there are many works focusing on reputation as a system to overcome the issue of inauthentic files, but, when evaluating the effectiveness of the system, the authors [1] usually refer to idealized situations, in which users always verify the files for authenticity as soon as they start a download. This is obviously not the case in the real world: first of all, most resources need to be at least partially owned in order to be checked. Besides, some users could simply decide not to check them for a long time. Even worse, other users could simply forget about a downloaded resource and never check it. Last but not least, other users might verify it, but simply not report anything if it is not authentic.

2. Reputation and P2P Systems
Since uploading bandwidth is a limited resource and the download priority queues are based on an uploading-credit system to reward the most collaborative peers on the network, some malicious users create inauthentic files, just to have something to share, thus obtaining credits without being penalized for their behavior. To balance this, RMSs have been introduced, which dynamically assign to the users a reputation value, considered in the decision whether to download files from them or not. RMSs are proven, via simulation, to make P2P networks safe from attacks by malicious peers, even when these form coalitions. In networks of millions of peers attacks are less frequent, but users still have a benefit from sharing inauthentic files. It is not clear if RMSs can be effective against this selfish, widespread misbehavior, since they make several ideal assumptions about the behavior of peers, who have to verify files to discover inauthentic ones. This operation is assumed to be automatic and without costs.

Moreover, since the files are usually shared before downloading is completed, peers downloading inauthentic files unwillingly spread them if they are not cooperative enough to verify their download as soon as possible. In the present work, the creation and spreading of inauthentic files is not considered as an attack, but as a way in which some agents try to raise their credits while not possessing the real resource that is being searched by others. A basic RMS is introduced, acting as a positive or negative reward for the users, and the human factor behind the RMS is considered, in the form of costs and benefits of verifying files. Most approaches, most notably EigenTrust [2], assume that verification is made automatically upon the start of the download of the file. By looking as we do at the collaboration factor in dealing with RMSs, we can question their real applicability, an issue which remains unanswered in the simulation based tests made by the authors. To provide an answer to this question it is necessary to build a simulation tool which aims at a more accurate modeling of the users' behavior rather than at modeling the reputation system in detail.

3. Model Framework

We assume a simple idealized model of reputation, since the objective is not to prove the effectiveness of a particular algorithm but to study the effect of users' behavior on a reputation system. We use a centralized system which assumes the correctness of the information provided by users, e.g., it is not possible to give an evaluation of a user with whom there was no interaction. When verifying a file, the agents give a negative payoff to the agent uploading it, in case it is inauthentic. In turn, the system will spread it to the agents (if any) who uploaded it to the sender. There are two reputation thresholds: the first and higher one, under which it is impossible to ask other agents for resources; the second, lower than the other, which makes it impossible even to share the owned files. This guarantees that an agent that falls under the first one (because she shared too many inauthentic files) can still regain credits by sharing authentic ones and come back over the first threshold. On the contrary, if she continues sharing inauthentic files, she will fall also under the second threshold, being de facto excluded from the network, while still being a working link from and to other agents.

The agents are randomly connected on a graph and feature the following parameters: unique ID, reputation value, set of neighbors, set of owned resources, set of goals (resources), set of resources being downloaded, set of suppliers (by resource). At each time step, agents reply to requests for download, perform requests (according to their goals) or verify files. While an upload is performed - if possible - each time another agent makes a request, requesting a resource and verification are performed in alternative. The verification ratio is a parameter of the simulation and acts stochastically on the agents' behavior. All agents belong to two disjoint classes: malicious agents and loyal ones. They have different behaviors concerning uploading, while featuring the same behavior concerning downloading and verification: malicious agents are simply agents who selfishly exploit the weaknesses of the system, by always uploading inauthentic files if they do not own the authentic ones. Loyal agents, on the contrary, only upload a resource if they own it. A number of resources are introduced in the system at the beginning of the simulation, representing both the owned objects and the agents' goals. For coherence, an owned resource can't be a goal for the same agent.
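The two-threshold rule described above can be sketched as follows. This is a minimal illustration: the class name, method names and concrete threshold values are our own assumptions, not taken from the paper's (non-public) Java implementation.

```java
// Hypothetical sketch of the two-threshold reputation rule.
// Threshold values are illustrative, not the paper's actual settings.
public class ThresholdPolicy {
    // First (higher) threshold: below it an agent may not request resources.
    static final double REQUEST_THRESHOLD = 50.0;
    // Second (lower) threshold: below it an agent may not even share files.
    static final double SHARE_THRESHOLD = 20.0;

    /** May this agent ask other agents for resources? */
    static boolean canRequest(double reputation) {
        return reputation >= REQUEST_THRESHOLD;
    }

    /** May this agent upload (share) the files it owns? */
    static boolean canShare(double reputation) {
        return reputation >= SHARE_THRESHOLD;
    }
}
```

An agent between the two thresholds can still share authentic files, regain credits, and climb back over the first threshold; only below the second threshold is it effectively cut off.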
The distribution of the resources is stochastic. During the simulation, other resources (and corresponding goals) are stochastically distributed among the agents. Each agent (metaphorically, the P2P client) keeps track of the providers, and this information is preserved also after the download is finished.

To test the limits and effectiveness of a reputation mechanism under different user behaviors, an agent based simulation of a P2P network is used as methodology, employing reactive agents to model the users; these have a deterministic behavior based on the class they belong to (malicious or loyal) and a stochastic idealized behavior regarding the verifying policy. Their use shows how the system works at an aggregate level. However, reactive agents can also be regarded as a limit of our approach, since real users have a flexible behavior and adapt themselves to what they observe. We built a model which is less idealized about the verifying factor, but is still rigid when considering the agents' behavior about sending out inauthentic files. That is why we envision the necessity to employ cognitive agents based on reinforcement learning techniques. Still, reactive agents are also a key point, in the sense that they allow the results to be easily readable and comparable among them, while the use of cognitive agents would have moved the focus from the evaluation of the collaborative factor to that of real users' behavior when facing a RMS, which is very interesting, but beyond the purpose of the present work. In future works, this paradigm for agents will be considered. The model is written in pure Java and does not make use of any agent development environment.

4. Model Specifications and Parameters

The P2P network is modeled as an undirected and non-reflexive graph. Each node is an agent, representing a P2P user. Agents are reactive: their behavior is thus determined a priori, and the strategies are the result of the stimuli coming from the environment and of the condition-action rules. Their behavior is illustrated in the next section. Formally, the multi agent system is defined as MAS = <Ag, Rel>, with Ag the set of nodes and Rel the set of edges. Each edge between two nodes is a link between the corresponding agents and is indicated by the tuple <ai, aj> with ai and aj belonging to Ag. Each agent features the following internal parameters:

- Unique ID (identifier),
- Reputation value (or credits) RP(ai),
- Set of the agent's neighbors N(ai),
- Set of owned resources RO(ai),
- Set of goals (resource identifiers) RD(ai),
- Set of resources being downloaded Ris(ai),
- Set of pairs <supplier, resource> P(ai).

A resource is a tuple <Name, Authenticity>, where Name is the resource identifier and Authenticity is a Boolean attribute indicating whether the resource is authentic or not. The agent owning the resource, however, does not have access to this attribute unless he verifies the file.

The resources represent the objects being shared on the P2P network. A number of resources are introduced in the system at the beginning of the simulation; they represent both the owned objects and the agents' goals. For coherence, an owned resource can't be a goal for the same agent. The distribution of the resources is stochastic. During the simulation, other resources are stochastically introduced. In this way, each agent in the system has the same probability to own a resource, independently of her inner nature (malicious or loyal). In the same way also the corresponding new goals are distributed to the agents; the difference is that the distribution probability is constrained by the resource being possessed by some agent. Formally, let R be the set of all the resources in the system. We have that:

RD(ai) ⊆ R, RO(ai) ⊆ R and RD(ai) ∩ RO(ai) = ∅.

Each agent in the system features a set of neighbors N(ai), containing all the agents to which she is directly linked in the graph: N(ai) = {aj ∈ Ag | <ai, aj> ∈ Rel}. This set characterizes the information each agent has about the environment. The implemented protocol is a totally distributed one, so looking for a resource is heavily based on the set of neighbors.

In the real world the shared resources often have big dimensions; after finding the resource, a lot of time is usually required for the complete download. In order to simulate this, the set of the "resources being downloaded" (Ris) is introduced.
These are described as Ris = <ID, completion, check status>, where ID is the resource identifier, completion is the percentage already downloaded and check status indicates whether the resource has been checked for authenticity or not. In particular, it can be not yet verified, verified and authentic, or verified and inauthentic:

check status ∈ {NOT CHECKED, AUTH, INAUTH}

Another piece of information is the ID of the provider of a certain resource, recorded in P(ai). Each agent keeps track of those who are uploading to him, and this information is preserved also after the download is finished. Real P2P systems allow the same resource to be downloaded in parallel from many providers, to improve performance and to split the bandwidth load; here a single provider per resource is assumed. This simplification should not affect the aggregate result of the simulation, since the negative payoff would simply reach more agents instead of just one (so the case with multiple providers is a sub-case of that with a single provider).

4.1. The Reputation Model
In this work we assume a simple idealized model of reputation, since the objective is not to prove the effectiveness of a particular reputation algorithm but to study the effect of users' behavior on a reputation system. We use a centralized system which assumes the correctness of the information provided by users, e.g., it is not possible to give an evaluation of a user with whom there was no interaction. The reason is that we focus on the behavior of common agents and not on hackers who attack the system by manipulating the code of the peer application. In the system there are two reputation thresholds: the first and higher one, under which it is impossible to ask other agents for resources; the second, lower than the other, which makes it impossible even to share the owned files. This guarantees that an agent that falls under the first one (because she shared too many inauthentic files) can still regain credits by sharing authentic ones and come back over the first threshold. On the contrary, if she continues sharing inauthentic files, she will fall also under the second threshold, being de facto excluded from the network, while still being a working link from and to other agents.

4.2. The User Model

Peers are reactive agents replying to requests, performing requests or verifying files. While an upload is performed each time another agent makes a request, requesting a file and verification are performed (in alternative) when it is the turn of the agent in the simulation. All agents belong to two disjoint classes: malicious agents and loyal agents. The classes have different behaviors concerning uploading, while they have the same behavior concerning downloading and verification: malicious agents are just common agents who selfishly exploit the weaknesses of the system. When it is the turn of another peer, and he requests a file from the agent, the agent has to decide whether to comply with the request and how to comply with it.

- The decision to upload a file is based on the reputation of the requester: if it is below the "replying threshold", the requestee denies the upload (even if the requestee is a malicious agent).

- The "replyTo" method refers to the reply each agent gives when asked for a resource. If the resource is owned, she sends it to the requesting agent, after verifying that the requester's reputation is higher than the "replying threshold". When the agent is faced with a request he cannot comply with, but the requester's reputation is above the "replying threshold", and he belongs to the malicious class, he has to decide whether to create and upload an inauthentic file by copying and renaming one of his other resources; the decision depends on a parameter.

Each agent performs two steps at each round of the simulation:

1) Performing the downloads in progress. For each resource being downloaded, the agent checks if the download is finished. If not, the system checks if the resource is still present in the provider's "sharing pool". In case it is no longer there, the download is stopped and removed from the list of the "owned resources". Each file is formed by n units; when 2/n of the file has been downloaded, the file gets automatically owned and shared also by the agent that is downloading it.

2) Making new requests to other peers or verifying the authenticity of a file downloaded or in downloading, but not both:

a) When searching for a resource, all the agents within a depth of 3 from the requesting one are considered. The list is ordered by reputation. A method is invoked on every agent with a reputation higher than the "requests threshold", until the resource is found or the list reaches its end. If the resource is found, it is put in the "downloading list", the goal is cancelled, the supplier is recorded and linked with that specific download in progress, and her reputation is increased according to the value defined in the simulation parameters. If no resource is found, the goal is given up.

b) Verification means that a file is previewed and, if the content does not correspond to its description or filename, this fact is notified to the reputation system. The verification phase requires that at least one download is in progress and beyond the 2/n threshold described above. An agent has a given probability to verify instead of looking for a new file.
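The verify-or-request choice in step 2 can be sketched as below. The names are our own, and the verification ratio is modeled, as in the paper, as a probability acting stochastically on the agent's behavior; everything else here is an illustrative assumption.

```java
// Minimal sketch of the per-turn choice: with probability equal to the
// verification ratio the agent verifies a partially downloaded file,
// otherwise it searches for a new resource. Names are assumptions.
public class TurnPolicy {
    enum Action { VERIFY, REQUEST }

    /** Pick the agent's action for this turn. Verification is only possible
     *  if some download has passed the 2/n completion threshold. */
    static Action choose(double verificationRatio,
                         boolean verifiableFileExists,
                         java.util.Random rng) {
        if (verifiableFileExists && rng.nextDouble() < verificationRatio) {
            return Action.VERIFY;
        }
        return Action.REQUEST;
    }
}
```

At a verification ratio of 0 the agent never verifies; at 1 it verifies whenever a verifiable download exists, which are exactly the two idealized extremes the experiments move between.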
If the agent verifies, a random resource is selected among those "in download" and not yet checked. If it is authentic, the turn is over. Otherwise, a "punishment" method is invoked, and the resource is deleted from the "downloading" and "owned" lists and put among the "goals" once again.

The RMS is based on the "punishment" method, which lowers the supplier's reputation, deletes her from the "providers" list in order to avoid cyclic punishment chains, and recursively invokes the "punishment" method on the punished provider. A punishment chain is thus created, reaching the creator of the inauthentic file and all the aware or unaware agents that contributed to spreading it.
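The punishment chain can be sketched as follows. This is an illustrative reconstruction (the paper's recursive method is here written as an equivalent iterative walk up the supplier chain); the data structures and the payoff value are hypothetical stand-ins for the simulator's internals.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the "punishment" chain described above.
public class Punisher {
    // For each agent, who uploaded the inauthentic file to it (supplier link).
    final Map<Integer, Integer> supplierOf = new HashMap<>();
    final Map<Integer, Double> reputation = new HashMap<>();

    /** Walk the supplier chain from the reporting agent, lowering each
     *  provider's reputation; each link is deleted as it is followed,
     *  so a cyclic chain cannot be punished twice. */
    void punish(int reporter, double negativePayoff) {
        Integer provider = supplierOf.remove(reporter); // delete link: no cycles
        while (provider != null) {
            reputation.merge(provider, -negativePayoff, Double::sum);
            provider = supplierOf.remove(provider);     // continue up the chain
        }
    }
}
```

The chain reaches the original creator of the inauthentic file as well as every intermediate agent, aware or not, that passed it on.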
5. Results

The simulation goes on as long as at least one goal exists and/or a download is still in progress. In the following table a summary of the most important parameters for the experiments is given:

Table 1 – the main parameters

In all the experiments, the other relevant parameters are fixed, while the following ones change:

Table 2 – the scenarios

A crucial index, defining the wellbeing of the P2P system, is the ratio between the number of inauthentic resources and the total number of files on the network. The total number keeps increasing over time, since new resources are introduced iteratively. Another measure collected is the average reputation of loyal and malicious agents at the end of the simulation; in an ideal world, we expect malicious ones to be penalized for their behavior, and loyal ones to be rewarded.

The results were obtained by a batch execution mode for the simulation. This executes the simulation 50 times with the same parameters, sampling the inauthentic/total ratio every 50 steps. This is to overcome the sampling effect: many variables in the simulation are stochastic, so this technique gives a high level of confidence in the produced results. In 2000 turns, we have a total of 40 samples. After all the executions are over, the average for each time step is calculated and represented in a chart. In the same way, the grand average of the average reputations for loyal and malicious agents is calculated and represented in a bar chart. In figure 1, the chart with the trend of inauthentic/total resources is represented for the results coming from experiments 1, 2, 3, 5 and 6. The results of experiment 4 are discussed later.

Figure 1 – inauthentic/total ratio

Experiment 5 depicts the worst case: no negative payoff is given; this is the case of a P2P network without a RMS behind it. The ratio initially grows and, at a certain point, it gets constant over time, since new resources are stochastically distributed among all the agents with the same probability. In this way also malicious agents have new resources to share, and they will send out inauthentic files only for those resources they do not own. In the idealized world modeled in this simulation, since there are 50 malicious and 50 loyal agents, and since the ones with higher reputation are preferred when asking for a file, it is straightforward that the malicious agents' reputation flies away, and that a high percentage of the files in the system are inauthentic (about 63%).

Experiment 1 shows how a simple RMS with quite a light punishing factor (3) is already sufficient to lower the percentage of inauthentic files in the network over time. We can see a positive trend, reaching about 28% after 2000 time steps, which is an over 100% improvement compared to the situation in which there was no punishment for inauthentic files. In this experiment the verification percentage is at 30%. This is quite low, since it means that 70% of the files remain unchecked forever (downloaded, but never used).
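The batch-averaging procedure described earlier in this section (several runs with identical parameters, the ratio sampled periodically, then a per-sample average across runs) can be sketched as below; the class and method names are our own, and the run data here is a toy stand-in.

```java
// Sketch of multi-run averaging: one row per run, one column per sampling
// step; the result is the per-step average across runs. Names are assumed.
public class BatchRunner {
    static double[] averagePerStep(double[][] runs) {
        int samples = runs[0].length;
        double[] avg = new double[samples];
        for (double[] run : runs) {
            for (int s = 0; s < samples; s++) avg[s] += run[s];
        }
        for (int s = 0; s < samples; s++) avg[s] /= runs.length;
        return avg;
    }
}
```

With 2000 turns and one sample every 50 steps, each run contributes 40 samples, matching the 40-sample series described above.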
In order to show how much the human factor can influence the way a RMS works, in experiment 2 the verification percentage has been increased to 40%, leaving the negative payoff at 3. The result is surprisingly good: the inauthentic/total ratio is dramatically lowered after a few turns (less than 10% after 200), reaching less than 1% after 2000 steps. Since 40% of files checked is quite a realistic percentage for a P2P user, this empirically shows that even the simple RMS proposed here dramatically helps in reducing the number of inauthentic files.

In order to assign a quantitative weight to the human factor, in experiment 3 the negative payoff is moved from 3 to 4, while bringing the verification percentage back to 30%. Even with a higher punishing factor, the ratio is worse than in experiment 2, meaning that it is preferable to have a higher verification rate rather than a higher negative payoff. Experiment 6 shows the opposite trend: the negative payoff is lighter (2), but the verification rate is again at 40%, as in experiment 2. The trend is very similar - just a bit worse - to that of experiment 3. In particular, the ratio of inauthentic files, after 2000 turns, is about 16%.

At this point, it gets quite interesting to find the "break even point" between the punishing factor and the verification rate. After some empirical simulations, we find that, compared with 40% verification and a negative payoff of 3, if verification is just at 30%, the negative payoff must be set to a whopping value of 8 in order to get a comparable trend in the ratio. This is done in experiment 4 (figure 2): after 2000 turns, there is 1% of inauthentic files with a negative payoff of 3 and a verification percentage of 40%, and about 0.7% with 8 and 30% respectively.

Figure 2 – weighting the collaboration factor

This clearly indicates that the collaboration factor (the verification of files) is crucial for a RMS to work correctly and give the desired aggregate results (few inauthentic files over a P2P network). In particular, a slightly higher verification rate (from 30% to 40%) weighs about the same as a heavy upgrade of the punishing factor (from 3 to 8). This can be considered as a quantitative result, comparing the exogenous factor (resource verification performed by the users) to the endogenous one (negative payoff).

Besides considering the ratio of inauthentic files moving on a P2P network, it is also crucial to verify that the proposed RMS algorithm punishes the agents that maliciously share inauthentic files, without involving too many unwilling accomplices, i.e. loyal users that unconsciously spread the files created by the former. This is considered by looking at the average reputations at the end of the simulation steps (figure 3).

Figure 3 – final average reputations

In the worst case scenario, the malicious agents, who are not punished for producing inauthentic files, always upload the file they are asked for (be it authentic or not). In this way, they soon gain credits, topping the loyal ones. Since in the model the users with a higher reputation are preferred when asking for files, this phenomenon soon triggers an explosive effect: loyal agents are marginalized, and never get asked for files. This results in a very low average reputation for loyal agents (around 70 after 2000 turns) and, at the same time, a very high average value for malicious agents (more than 2800).

In experiment 1 the basic RMS presented here changes this result; even with a low negative payoff (3), the average reputations after 2000 turns are clear: about 700 for loyal agents and slightly more than 200 for malicious ones. The algorithm preserves loyal agents, while punishing malicious ones. In experiment 2, with a higher verification percentage (human factor), we see a tremendous improvement in the effectiveness of the RMS algorithm.
The average reputation for loyal agents, after 2000 steps, reaches almost 1400, while all the malicious agents go under the lower threshold (they can neither download nor share resources), with an average reputation of less than 9 points.

Experiment 3 explores the scenario in which the users check just 30% of the files they download, but the negative payoff is raised from 3 to 4. The final figure for average reputations is again very good. Loyal agents, after 2000 steps, reach an average reputation of over 1200, while malicious ones stay down at about 40. This again shows the proposed RMS to be quite effective, though, with a low verification rate, not all the malicious agents get under the lower threshold, even if the negative payoff is 4.

In experiment 6 the verification percentage is again at the more realistic 40%, while the negative payoff is reduced to 2. Even with this low negative payoff, the results are good: most malicious agents fall under the lowest threshold, so they cannot share files, and they get an average reputation of about 100. Loyal agents behave very well and reach an average reputation of more than 900.

Experiment 4 is the one in which we wanted to harshly penalize inauthentic file sharing (the negative payoff is set at 8), while leaving a high laxity in the verification percentage (30%). Unlike what could have been expected, this setup does not punish too much the loyal agents that unwillingly spread unchecked inauthentic files. After 2000 turns, all the malicious agents fall under the lowest threshold, and feature an average reputation of less than 7 points, while loyal agents fly at an average of almost 1300 points. The fact that no loyal agent falls under the "point of no return" (the lowest threshold) is probably due to the fact that they do not systematically share inauthentic files, while malicious agents do. Loyal ones just share the inauthentic resources they never check. Malicious agents, on the other side, always send out inauthentic files when asked for a resource they do not own, thus being harshly punished by the RMS when the negative payoff is more than 3.
6. Whitewashing

A "whitewashing" mode is implemented and selectable before the simulation starts, in order to simulate the real behavior of some P2P users who, realizing that they cannot download anymore (since they have low credits or, in this case, a bad reputation), disconnect their client and then connect again, so as to start from the initial pool of credits/reputation. When this mode is active, at the beginning of each turn all the agents that are under a given threshold reset their reputation to the initial value, metaphorically representing the disconnection and reconnection. In experiments 7, 8 and 9 this is tested to see if it affects the previous results. In figure 4, the ratio between inauthentic and total resources is depicted, and in figure 5 the final average reputation for agents, when whitewashing mode is active.

Even with this mode (CBM) activated, the results are very similar to those in which it is off. They are actually a bit worse when the negative payoff is low (3) and so is the verification percentage (30%): the ratio of inauthentic files in the network is quite high, at about 41% after 2000 turns versus the 27% observed in experiment 1, which had the same parameters, but no CBM. When the verification percentage is increased to 40%, though, things get quite better. Now the ratio of inauthentic files has the same level as in experiment 2 (less than 1% after 2000 steps). Also with a lower verification percentage (again at 30%), but leaving the negative payoff at 4, the figure is almost identical to the one with the same parameters but without CBM. After 2000 turns, the inauthentic files ratio is about 12%.

Figure 4 – inauthentic/total ratio in whitewashing mode

The experiments show that malicious agents, even resetting their own reputation after going below the lowest threshold, cannot overcome this basic RMS, if they always produce inauthentic files. This happens because, even if they reset their reputation to the initial value, it is still low compared to the one reached by loyal agents; if they shared authentic files, this value would go up in a few turns, but since they again start spreading inauthentic files, they almost immediately fall under the thresholds again.

Figure 5 – final average reputations in whitewashing mode
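The per-turn whitewashing reset can be sketched in a few lines. The class name, the concrete initial value and the threshold are illustrative assumptions; the mechanism (reset to the initial reputation when below a threshold, modelling a disconnect/reconnect) is the one described above.

```java
// Sketch of the whitewashing reset applied at the start of each turn.
// Values are illustrative, not the paper's actual settings.
public class Whitewashing {
    static final double INITIAL_REPUTATION = 100.0;

    /** Returns the agent's reputation after the whitewashing step:
     *  agents below the threshold restart from the initial value. */
    static double resetIfBelow(double reputation, double threshold) {
        return (reputation < threshold) ? INITIAL_REPUTATION : reputation;
    }
}
```

Since the initial value is far below the reputation loyal agents accumulate, a reset alone does not let a persistently malicious agent catch up, which is exactly what experiments 7 to 9 show.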
7. Conclusion and Outlook

The main purpose of this work was to show, by means of an empirical analysis based on simulation, how the collaboration coming from the agents in a social system can be a crucial driver for the effectiveness of a RMS.

As a test-bed we considered a P2P network for file sharing and, by an agent based simulation, we showed how a basic RMS can be effective in reducing the inauthentic files circulating on the network. In order to enhance its performance, though, the collaboration factor, in the form of the verifying policy, is crucial: 33% more verification results in about thirty times fewer inauthentic files on the network. While a qualitative analysis of this factor is straightforward for the presented model, we added a quantitative result, trying to weight the exogenous factor (the verification rate) by comparing it to the endogenous one (the negative payoff). We showed that a 33% increase in verification percentage leads to results similar to those obtained by increasing the negative payoff by 66%. Again, the collaboration factor proves to be crucial for the RMS to work efficiently.

While the provided results are encouraging, the model is not yet realistic in certain respects. The weakest part is not the simplicity of the RMS algorithm or of the representation of the P2P network, but rather the deterministic (reactive) behavior of the agents: the agents involved are too naive to represent real users. In particular, potentially malicious users try to exploit the weak points of the system by changing their behavior according to what they observe, such as the satisfaction of their own goals. It is very unlikely that users, when realizing that they cannot download at the same rate as before, would go on sending out inauthentic files in the same way as before. Real users are flexible, and adapt themselves to different situations. If they see that many inauthentic files are moving on the network, since the informal norms regulating the P2P are not respected, it is likely that they would also start producing them, in order to gain credits, by an imitative behavior. While the use of reactive agents keeps the results more readable and easily comparable, in future works we will implement cognitive agents, in order to explore their behavior under a RMS; they feature a policy which is dynamically created through trial and error and progressive reinforcement learning. There are two dimensions of learning that should be considered: one regarding the long term satisfaction of goals (related to the action of sending out an inauthentic file or not) and the other about the convenience of verifying a file (thus potentially losing a turn), related to the risk of being punished as an unwilling accomplice in spreading inauthentic files.

Besides, the threshold study now carried on at an aggregate level will be made also from the point of view of the individual agent: when does it become too costly to "cheat" for an agent, so that it ceases to be beneficial? Such a study will be made at a higher scale, referring to the number of agents and resources. Also, while control through user collaboration has been studied here, rewarding control should be considered as an individual incentive to control (with possible biases from malicious agents) and thus relate more to the collaboration objective of the study. This will also be studied in future works.

8. Acknowledgements

This work has been partially funded by the project ICT4LAW, financed by Regione Piemonte.

9. References

[1] A. Josang, R. Ismail, and C. Boyd. A survey of trust and reputation systems for online service provision. Decision Support Systems, 43(2):618–644, March 2007.

[2] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina. The EigenTrust algorithm for reputation management in P2P networks. In WWW '03: Proceedings of the 12th International Conference on World Wide Web, pages 640–651, New York, NY, USA, 2003. ACM Press.