Trust on Information Sources: A Theoretical and Computational Approach

Alessandro Sapienza, Rino Falcone and Cristiano Castelfranchi
Institute of Cognitive Sciences and Technologies, ISTC-CNR, Rome, Italy
{alessandro.sapienza, rino.falcone, cristiano.castelfranchi}@istc.cnr.it

Abstract— We start from the claim that trust in information sources is just a kind of social trust. We are interested in the fact that the relevance and the trustworthiness of the information acquired by an agent X from a given number of sources strictly depend on, and derive from, X's trust in each of these sources with respect to the kind of information in question. In this paper we analyze the different dimensions of trust in information sources and formalize the degree of subjective certainty, or strength, of X's belief P, considering three main factors: X's trust in P as depending on X's judgment of the source's competence and reliability; the source's own degree of certainty about P; and X's degree of trust that P actually derives from that given source. Finally, we present a computational approach based on fuzzy sets.

I. DIMENSIONS OF TRUST IN INFORMATION SOURCES

Which are the important specific dimensions of trust in information sources (TIS)? Many of these dimensions are quite sophisticated, given the importance of information for human activity and cooperation. We will simplify and put aside several of them.

First of all, we have to trust (more or less) the source (F) as competent and reliable in that domain, the domain of the specific information content. Am I waiting for some advice on a train schedule? On the weather forecast? On the program for an examination? On a cooking recipe? Is this F not only competent but also reliable (in general, or specifically towards me)? Is F sincere and honest, or inclined to lie and deceive? Will F do what it has promised to do, or what it "has" to do given its role? And so on.

These competence and reliability evaluations can derive from different grounds, basically:

a) our previous direct experience with F (how F performed in past interactions) on that specific information content, or better our "memory" of it: the adjustments we have made to our evaluation of F over several interactions, and the possible successes or failures of relying on its information;

b) recommendations (other individuals Z reporting their direct experience with and evaluation of F) or reputation (the shared general opinion of others about F) on that specific information content [3; 4; 5; 12; 13];

c) categorization of F (it is assumed that a source can be categorized and that this category is known), exploiting inference and reasoning:
- inheritance from the classes or groups to which Z simply belongs (as a good "exemplar");
- analogy on the source: Z is (in that respect) like Y; Y is good for the task; then Z too is good for it;
- analogy on the task: if Z is good/reliable for P, it should be good also for P', since P and P' are very similar. (In any case: how much do I trust my own reasoning ability?)

On this basis it is possible to establish the competence/reliability of F on the specific information content [2, 6].

The two faces of F's trustworthiness (competence and reliability) are relatively independent(1); we will treat them as such. Moreover, we will simplify these complex components into just one quantitative fuzzy parameter, F's estimated trustworthiness, obtained by combining competence and reliability. In particular, we define the following five fuzzy sets: terrible, poor, mediocre, good, excellent (see Figure 1), and apply them to each of the previous dimensions (direct experience, recommendations and reputation, categorization).

(1) Actually, they are not fully independent. For example, F might be tempted to lie to me if/when it is not so competent or does not provide good products: it has more motives for deceiving me.

Figure 1: Representation of the five fuzzy sets

Second, information sources have, and give us, a specific information content that they know/believe; but believing something is not a yes/no status: one can be more or less convinced and sure (on the basis of one's evidence, sources, and reasoning). Thus a good source might inform us not only about P, but also about its degree of certainty about P, its trust in the truth of P. For example: "It is absolutely sure that P", "Probably P", "It is frequent that P", "It might be that P", and so on.

Of course there are more sophisticated meta-trust dimensions, like: how much am I sure of, confident in, F's evaluation of the probability of the event, or in its subjective certainty?(2) Is F insincere? Or not so self-confident and not a good evaluator? For example, a drug leaflet may say that a given possible bad side effect occurs in only 1% of cases. Do I have to believe that? Or are they unreliable, since they want to sell that drug? For the moment we put aside this dimension of how much meta-trust we have in the provided degree of credibility. We will just combine the provided certainty of P with the reliability of F as a source. It in fact makes a difference whether an excellent or a mediocre F says that the degree of certainty of P is 70% (see §II.B).

(2) In a sense it is a transitivity principle [7]: X trusts Y, and Y trusts Z; will X trust Z? Only if X trusts Y "as a good evaluator" of Z and of that domain. Analogously here: will X trust Y because Y trusts Y? Only if X trusts Y "as a good and reliable evaluator" of itself.

Third, especially for information sources, the following form of trust is very relevant: the trust we have that the information under analysis actually derives from that specific source; how sure we are about that "transmission". That is: that the communication has been correct and working (and complete); that there are no interferences and alterations, and I received and understood correctly; and that F is really that F (identity). Otherwise I cannot apply the first factor, F's credibility.

Let us simplify also these dimensions, and formalize just the degree of trust that F is F: that the F of the information (which I have to decide whether to believe or not) is actually F. On the Web this is an imperative problem: the problem of the real identity of F, of the reliability of the signs of that identity, and of the communication.

These dimensions of TIS are quite independent of each other (and we will treat them as such); we just have to combine them and provide the appropriate dynamics. For example, what happens if a given very reliable source F' says that "it is sure that P", but I am not at all sure that the information really comes from F' and I cannot ascertain that?

A. Additional problems and dimensions

We believe a given datum on the basis of its origin, its source: perception? communication? inference? And so on.

A) The more reliable (trusted) the F, the stronger the trust in P, the strength of the belief that P.

This is why it is very important to have a "memory" of the sources of our beliefs. However, there is another fundamental principle governing the degree of credibility of a given belief (its trustworthiness):

B) The more numerous the converging sources, the stronger our belief (provided, of course, that there are no correlations among the sources).

Thus we have the problem of combining the different sources about P, with their subjective degrees of certainty and their credibility, in order to weigh the credibility of P, and to obtain an increment due to a large convergence of sources. There might be different heuristics for dealing with contradictory information and sources. One (prudent) agent might adopt as its assumption the worst hypothesis, the weaker degree of P; another (optimistic) agent might choose the best, more favorable estimation; yet another agent might choose the most reliable source. We will formalize only one strategy: the weighing up and combination of the different strengths of the different sources, avoiding however the psychologically incorrect result of probability values, where by combining different probabilities we always decrease the certainty, never increase it. On the contrary, as we said, convergent sources reinforce each other and make us more certain of the datum.

B. Feedback on source credibility/TIS

We have to store the sources of our beliefs because, since we believe on the basis of source credibility, we have to be in a condition to adjust such credibility, our TIS, on the basis of the results. If I believe that P on the basis of source F1, and later I discover that P is false, that F1 was wrong or deceptive, I have to readjust my trust in F1 in order to be more prudent next time (or with similar sources). The same holds in case of positive confirmation.

However, remember that it is well known [8] that negative feedback (invalidation of TIS) is more effective and heavy than positive feedback (confirmation). This asymmetry (the collapse of trust after a negative experience versus the slow acquisition or increase of trust) is not specific to trust and TIS; it is, in our view, basically an effect of a general cognitive phenomenon. It is not an accident or weirdness that the disappointment of trust has a much stronger (negative) impact than the (positive) impact of confirmation. It is just a sub-case of the general and fundamental asymmetry of negative vs. positive results, and more precisely of "losses" against "winnings": the well-known Prospect Theory [9]. We do not evaluate in a symmetric way, on the basis of an "objective" value/quantity, our progress and acquisitions versus our failures and wastes, relative to our "status quo". Losses (with the same "objective" value) are perceived and treated as much more severe: the curve of losses is convex and steep, while that of winnings is concave. Analogously, the urgency and pressure of "avoidance" goals are greater than the impulse/strength of achievement goals [10]. All this applies also to the slow increase of trust and its fast decrease, and to the subjective impact of trust disappointment (betrayal!) vs. trust confirmation. That is why we are usually prudent in deciding to trust somebody: in order not to expose ourselves to disappointments, betrayals, and harms. However, this too is not always true; we have quite naive forms of trust based just on gregariousness and imitation, on sympathy and feelings, on the diffuse trust in a given environment and group, etc. This also plays a crucial role in social networks on the web, in web recommendations, etc.

Moreover, in our theory [11], a bad result (or a good result) does not always and automatically entail the revision of TIS. It depends on the "causal attribution": was it a fault/defect of F, or an interference of the environment? The result might be bad although F's performance was its best. Let us put aside here the feedback effect and the revision of TIS.

C. Plausibility: the integration with previous knowledge

To believe something does not just mean to put it in a file in my mind; it means to "integrate" it with my previous knowledge. Knowledge must be at least non-contradictory, and possibly supported, justified: this explains that, and is in turn explained and supported by, these other facts/arguments. If there is a contradiction I cannot believe P; either I have to reject P or I have to revise my previous beliefs in order to introduce P coherently. The outcome depends on the strength of the new information (its credibility, due to its sources) and on the number and strength of the internal opposition: the value of the contradictory previous beliefs, and the extension and cost of the required revision. That is, it is not enough that the candidate belief that P be well supported and highly credible: is there an epistemic conflict? Is it "implausible" to me? Are there antagonistic beliefs? And what is their strength? The winner of the conflict will be the stronger "group" of beliefs. Even the information of a very credible source (like our own eyes) can be rejected!

II. FORMALIZING AND COMPUTING THE DEGREE OF CERTAINTY AS TRUST IN THE BELIEF

As we have said, there is a confidence, a trust, in the beliefs we have and on which we rely. Suppose X is a cognitive agent, an agent who has beliefs and goals. Given BelX, the set of X's beliefs, then P is a belief of X if:

P ∈ BelX   (1)

The degree of subjective certainty or strength of X's belief P corresponds to X's trust in P; we call it:

TrustX(P)   (2)

A. Its origin/ground

Concerning a single belief P, we have to consider n different sources asserting or denying P. The final value of TrustX(P) depends on X's trust towards every single source F of the information P (which could also mean: with respect to the class of information to which P belongs):

TrustX(F,P)   (3)

In other words, we state that:

TrustX(P) = f(TrustX(F1,P), …, TrustX(Fn,P))   (4)

where n is the total number of sources. Then, to compute X's trust value, we have to compose the n sources' values into just one resulting factor.

Applying the conceptual model previously described, TrustX(F,P) can be articulated into:

1. X's trust in P as depending just on X's judgment of F's competence and reliability, derived from the composition of the three factors (direct experience, recommendation/reputation, and categorization); in practice, F's credibility about P in X's view:

Trust1X(F,P)   (5)

2. F's degree of certainty about P: information sources give not only the information but also their certainty about it; since we are interested in this certainty, but have to consider it through X's point of view, we introduce

TrustX(TrustF(P))   (6)

In particular, we assume here that X completely trusts F, so that TrustX(TrustF(P)) = TrustF(P).

3. X's degree of trust that P derives from F, i.e. the trust we have that the information under analysis derives from that specific source:

TrustX(Source(F,P))   (7)

4. The fact that F is supporting P or opposing it (not P):

SupportF(P)   (8)

Summing up:

TrustX(F,P) = f3(Trust1X(F,P), TrustX(TrustF(P)), TrustX(Source(F,P)), SupportF(P))   (9)

Here we could introduce a threshold for each of these three dimensions, allowing risk factors to be reduced.

B. A modality of computation

1) Trust1X(F,P)

As specified in §I, the value of Trust1X(F,P) is a function of:
1. past interactions;
2. the category of membership;
3. reputation.

As previously said, each of these values is represented by a fuzzy set: terrible, poor, mediocre, good, excellent. We then compose them into a single fuzzy set, considering a weight for each of the three parameters. These weights are defined in the range [0; 10], with 0 meaning that the element has no importance in the evaluation and 10 meaning that it has maximal importance. It is worth noting that the weight of experience has a twofold meaning: it must take into account the numerosity of the experiences (with their positive and negative values), but also the intrinsic value of experience for that subject.

However, the fuzzy set in and by itself is not very useful: what interests us in the end is a plausibility range, representative of the expected value of Trust1X(F,P). To get that, it is therefore necessary to apply a defuzzification method. Among the various possibilities (mean of maxima, mean of centers, …) we have chosen the centroid method, as we believe it provides a good representation of the fuzzy set. The centroid method exploits the following formula:

k = (∫0^1 x f(x) dx) / (∫0^1 f(x) dx)   (10)

where f(x) is the fuzzy set's membership function. The value k obtained in output is the abscissa of the center of gravity of the fuzzy set. This value is also associated with the variance, obtained by the formula:

σ² = (∫0^1 (x − k)² f(x) dx) / (∫0^1 f(x) dx)   (11)

With these two values, we determine Trust1X(F,P) as the interval [k−σ; k+σ].

2) TrustX(F,P)

Once we get Trust1X(F,P), we can determine the value of TrustX(F,P). In particular, we determine a trust value together with an interval, namely the uncertainty on TrustX(F,P). For the uncertainty calculation we use the formula:

Uncertainty = 1 − (1 − ΔTrust1X) * TrustX(TrustF(P)) * TrustX(Source(F,P))   (12)

ΔTrust1X = Max(Trust1X(F,P)) − Min(Trust1X(F,P))

In other words, the uncertainty depends on the uncertainty interval of Trust1X(F,P), properly modulated by TrustX(TrustF(P)) and TrustX(Source(F,P)). This formula implies that the uncertainty:
- increases/decreases linearly as ΔTrust1X increases/decreases;
- increases/decreases linearly as TrustX(TrustF(P)) decreases/increases;
- increases/decreases linearly as TrustX(Source(F,P)) decreases/increases.

The inverse behavior with respect to TrustX(TrustF(P)) and TrustX(Source(F,P)) is perfectly explained by the fact that when X is not so sure that P derives from F, or when F's degree of certainty about P is low, the global uncertainty should increase. The maximum uncertainty value is 1 (±50%), meaning that X is absolutely not sure about its evaluation. On the contrary, the minimum value of uncertainty is 0, meaning that X is absolutely sure about its evaluation.

In a way similar to the uncertainty, we use the following formulas to compute the value of TrustX(F,P):

1) if SupportF(P) = 1, namely F is supporting P:

TrustX(F,P) = ½ + (Trust1X(F,P) − ½) * TrustX(TrustF(P)) * TrustX(Source(F,P))   (13a)

2) if SupportF(P) = −1, namely F is opposing P:

TrustX(F,P) = ½ − (Trust1X(F,P) − ½) * TrustX(TrustF(P)) * TrustX(Source(F,P))   (13b)

These formulas have a particular trend, different from that of the uncertainty. Here, in fact, the point of convergence is ½, a value that gives no information about how much X can trust F about P. Notice that, if F is supporting P:
- if Trust1X(F,P) is less than ½, as TrustX(TrustF(P)) and TrustX(Source(F,P)) increase, the trust value decreases towards the value of Trust1X(F,P); as they decrease, the trust value increases towards ½;
- if Trust1X(F,P) is more than ½, as TrustX(TrustF(P)) and TrustX(Source(F,P)) increase, the trust value increases towards the value of Trust1X(F,P); as they decrease, the trust value decreases towards ½.

Conversely, when F is opposing P:
- if Trust1X(F,P) is less than ½, as TrustX(TrustF(P)) and TrustX(Source(F,P)) increase, the trust value increases towards 1 − Trust1X(F,P); as they decrease, the trust value decreases towards ½;
- if Trust1X(F,P) is more than ½, as TrustX(TrustF(P)) and TrustX(Source(F,P)) increase, the trust value decreases towards 1 − Trust1X(F,P); as they decrease, the trust value increases towards ½.

3) Computing a final trust value: sources' aggregation

How do we evaluate the contribution of the different sources? In general, the global trust value is given by the average of the individual sources' trust values. The issue gets more complicated when one needs a global uncertainty value: computing the average of the uncertainties is not enough. For instance, suppose we have two sources, the former asserting 0 with uncertainty 0 and the latter asserting 1 with uncertainty 0. Intuitively, a global trust value of 0.5 is fine, but it is implausible that the global uncertainty be equal to 0; on the contrary, it should take the maximum value. Thus it is easy to see that the global uncertainty depends both on the single uncertainty values and on the single trust values. Moreover, we state that the greater the number of sources converging towards a trust value, the lower the resulting uncertainty. The formula computing this global value should then take these factors into account. The domain of uncertainty [0, 1] has been divided into 5 intervals of amplitude 0.2; values falling in the same interval are considered convergent. Here is the formula used:

Unc = Unc0 + Σj Σi∈I Unci / (I*N)   (14)

where:
Unc0 = the minimum distance between the computed mean trust value and each single trust value (of every single source);
j = intervals, 1

…and Innovation (Programma Operativo Nazionale Ricerca e Competitività 2007-2013).

REFERENCES

[1] Castelfranchi, C., Falcone, R., Pezzulo, G. (2003). Trust in Information Sources as a Source for Trust: A Fuzzy Approach. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-03), Melbourne, Australia, 14-18 July, ACM Press, pp. 89-96.
[2] Falcone, R., Piunti, M., Venanzi, M., Castelfranchi, C. (2013). From Manifesta to Krypta: The Relevance of Categories for Trusting Others. In R. Falcone and M. Singh (Eds.), Trust in Multiagent Systems, ACM Transactions on Intelligent Systems and Technology, Volume 4, Issue 2, March 2013.
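The composition and defuzzification of §II.B.1 (formulas 10-11) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the triangular membership shapes, the label centers, and the example weights below are our assumptions, since Figure 1 is only referenced, not reproduced, here.

```python
# Sketch of §II.B.1: weighted composition of the three evaluations into one
# fuzzy set, then centroid defuzzification (eqs. 10-11).
# Assumed: triangular memberships centered on the five labels over [0, 1].

LABEL_CENTERS = {"terrible": 0.0, "poor": 0.25, "mediocre": 0.5,
                 "good": 0.75, "excellent": 1.0}

def membership(label, x, width=0.25):
    """Assumed triangular membership function of one fuzzy label."""
    return max(0.0, 1.0 - abs(x - LABEL_CENTERS[label]) / width)

def combined(evaluations):
    """Weighted mixture of the evaluations (experience, categorization,
    reputation); weights lie in [0, 10] as stated in the paper."""
    total = float(sum(w for _, w in evaluations))
    return lambda x: sum(w * membership(lab, x) for lab, w in evaluations) / total

def defuzzify(f, steps=2000):
    """Centroid k (eq. 10) and variance sigma^2 (eq. 11) via midpoint sums."""
    xs = [(i + 0.5) / steps for i in range(steps)]
    area = sum(f(x) for x in xs)
    k = sum(x * f(x) for x in xs) / area
    var = sum((x - k) ** 2 * f(x) for x in xs) / area
    return k, var

# Hypothetical source: experience says "good" (weight 8), its category
# suggests "mediocre" (weight 3), reputation says "good" (weight 5).
f = combined([("good", 8), ("mediocre", 3), ("good", 5)])
k, var = defuzzify(f)
sigma = var ** 0.5
trust1_interval = (k - sigma, k + sigma)  # Trust1_X(F,P) as [k - sigma, k + sigma]
```

With these assumed shapes, the centroid lands between the "mediocre" and "good" centers, pulled towards "good" by the larger weights, and the interval [k−σ; k+σ] is the plausibility range the paper asks for.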
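Formulas (12), (13a) and (13b) of §II.B.2 translate directly into code. A sketch (argument names are ours): `meta` stands for TrustX(TrustF(P)), `src` for TrustX(Source(F,P)), and `support` for SupportF(P).

```python
# Sketch of eqs. (12), (13a), (13b). trust1_lo/trust1_hi bound the
# Trust1_X(F,P) interval; meta = Trust_X(Trust_F(P)); src = Trust_X(Source(F,P));
# support = +1 when F supports P, -1 when F opposes it. All values in [0, 1].

def uncertainty(trust1_lo, trust1_hi, meta, src):
    """Eq. (12): grows with the width of the Trust1 interval and shrinks as
    the meta-certainty and source-identity factors grow."""
    delta = trust1_hi - trust1_lo  # DeltaTrust1_X
    return 1.0 - (1.0 - delta) * meta * src

def trust(trust1, meta, src, support):
    """Eqs. (13a)/(13b): converges to the uninformative 1/2 as meta or src
    go to 0, and to trust1 (or its mirror 1 - trust1) as they go to 1."""
    return 0.5 + support * (trust1 - 0.5) * meta * src

# With full meta-certainty and a fully trusted channel, trust equals Trust1;
# with a totally untrusted channel it collapses to the uninformative 1/2;
# an opposing source mirrors Trust1 around 1/2.
t_full = trust(0.8, 1.0, 1.0, +1)    # equals 0.8
t_blind = trust(0.8, 1.0, 0.0, +1)   # equals 0.5
t_oppose = trust(0.8, 1.0, 1.0, -1)  # ~0.2, the mirror of 0.8
```

The three sample calls reproduce the convergence behavior described in the text: the multiplicative factors meta and src interpolate between the informative value Trust1X(F,P) (or its mirror, in the opposing case) and the uninformative midpoint ½.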
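The aggregation step of §II.B.3 can also be sketched, but only in part: equation (14) is cut off in the text (the summation term and its indices are truncated), so the sketch below implements just the fully specified pieces, namely the global trust as the average of the sources' trust values and the Unc0 component, the minimum distance between that average and each single trust value.

```python
# Partial sketch of §II.B.3. Only the parts of eq. (14) fully specified in the
# text are implemented: mean trust and the Unc0 floor on global uncertainty.

def aggregate(trust_values):
    """Average trust of the n sources plus the Unc0 component of eq. (14)."""
    mean_trust = sum(trust_values) / len(trust_values)
    unc0 = min(abs(t - mean_trust) for t in trust_values)
    return mean_trust, unc0

# The paper's example: one source asserts 0 and another asserts 1, each with
# individual uncertainty 0. Averaging the uncertainties would wrongly give 0,
# while Unc0 alone already pushes the global uncertainty up to 0.5.
mean_trust, unc0 = aggregate([0.0, 1.0])
```

This reproduces the two-source counterexample in the text: maximally divergent sources yield a mean trust of 0.5 but a non-zero uncertainty floor, whereas perfectly convergent sources yield Unc0 = 0.
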