New Concepts for Trust Propagation in Knowledge Processing Systems

Markus Jäger and Josef Küng
Institute for Application Oriented Knowledge Processing (FAW)
Faculty of Engineering and Natural Sciences (TNF)
Johannes Kepler University Linz (JKU), Austria
{markus.jaeger, josef.kueng}@jku.at

Abstract. Everybody has a sense of trusting people or institutions, but how is trust defined? The answer always depends on the specific field of research and application and differs most of the time, which makes the question hard to answer in general at a computational level. For knowledge processing systems this question arises twice: How can we define and calculate trust values for the input data and, much more challenging, what is the trust value of the output? To meet this challenge, we first investigate appropriate ways of defining trust. In this paper we consider three existing trust models and one developed by ourselves. We then show how knowledge processing systems can handle these trust values and propagate them through a network of processing steps so that the final results remain representative. To this end, we demonstrate the propagation of trust with the three existing trust models and with our recently developed approach, which additionally considers precision and importance values. With these models, we give insights into defining and propagating trust in knowledge processing systems.

Keywords: Trust; Propagation; Knowledge Processing Systems; Trust Metrics; Trust Models; Precision; Fusion; Knowledge; Provenance

1 Introduction

The main subject of our research is how knowledge processing systems can work with trust values. Going deeper into this research field, several questions arise. In our work, we try to figure out how trust can be defined and measured and, in particular, how knowledge processing systems can deal with trust. Furthermore, we investigate how several trust values can be combined (in general and in knowledge processing) and how trust values can be propagated through several steps of a knowledge processing system. We try to answer these questions by investigating possibilities of trust measurement, combination and propagation, and we aim to propose a sound and comprehensive way of handling these topics.

The rest of this paper is structured as follows: Section 2 defines common terminology and presents related work in our field of research. As the term "trust" is of particularly high importance in our research, we dedicate a separate section, Section 3, to defining trust in knowledge processing; in that section, we investigate different models for measuring or determining trust. In Section 4, we briefly introduce our recently developed and already published approach, in which trust and precision values are handled, processed and propagated while taking several importance values into account. Section 5 examines how trust can be propagated through knowledge processing systems; there we cover the different trust measurement models introduced in Sections 3 and 4 and how trust can be propagated in each of them. Section 6 shows the application of the presented trust propagation models in a scenario. We close this paper with Section 7, giving a summary of our work and an outlook on further research.

2 Related Work

In this section we provide some insights into important terms that are relevant to our work. Furthermore, we discuss the fusion of sensor precision values.
2.1 Trust

The meaning of the term "trust" always depends on the specific environment and field of research and application. In a recent publication about trust, we state: "The question of 'How can we trust anything/anybody?' is discussed since the beginning of mankind, but what does this topic mean in context to today's technology age and especially for the information technology?" [11].

The three main types of applicable trust according to Rousseau et al. [18] are (1) trusting beliefs, (2) trusting intentions, and (3) trusting behaviours, where these three types are connected to each other.

Another point of view is the similarity between trusting people and trusting technology, especially information technology, where the main difference lies in the application of trust in the specific area [16]. A further interesting publication about trust in information sources is by Hertzum et al. [9], who compare the concept of trust between people and virtual agents based on two empirical studies. Some relational aspects concerning trust in the industrial marketing and management sector can be found in "Concerning trust and information" by Denize et al. [8].

2.2 Provenance

When it comes to trusting data and trusting the sources of data, the term "data provenance" comes into play. It denotes the origin and the complete processing history of any kind of data. A good introduction and overview can be found in [2] and [3]. Several problems concerning data provenance are covered in [5]. Recent research on provenance can be found in the following literature: "Trust Evaluation Scheme of Web Data Based on Provenance in Social Semantic Web Environments" [19] and "Transparently tracking provenance information in distributed data systems" [4]. "Research of Data Resource Description Method oriented Provenance" [20] and "A semantic Foundation for Provenance Management" [17] provide more theoretical and conceptual foundations for the usage of provenance.

2.3 Risk

Risk in general addresses the potential of losing something of particular personal value. It is also seen as an intentional interaction with uncertainty, where the outcome is hard to predict. Rousseau et al. [18] state that "Risk is the perceived probability of loss, as interpreted by a decision maker [...]. The path-dependent connection between trust and risk taking arises from a reciprocal relationship: risk creates an opportunity for trust, which leads to risk taking."

2.4 Precision & Multi Data Sensor Fusion

Related work on the fusion of precision values in sensor networks is discussed in our recent publication "Focussing on Precision- and Trust-Propagation in Knowledge Processing Systems" [12]. The concluding findings are that sensor fusion is motivated by avoiding problems that arise from the use of single sensors (e.g. sensor deprivation, limited spatial and temporal coverage, imprecision and uncertainty). Fusion processes in the sensor domain are often categorized into three levels: (1) raw data fusion (low level), (2) feature fusion (medium level), and (3) decision fusion (high level).
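As a simple illustration of the lowest of these levels, the following sketch fuses raw readings from several sensors into one estimate by weighting each reading with its precision. The function name and the weighting scheme are our own illustrative assumptions and are not taken from [12].

```python
def fuse_raw_readings(readings, precisions):
    """Low-level (raw data) fusion: precision-weighted average of sensor readings.

    readings   -- raw sensor values measuring the same quantity
    precisions -- precision weights in (0, 1], one per reading
    """
    if not readings or len(readings) != len(precisions):
        raise ValueError("need exactly one precision value per reading")
    total_weight = sum(precisions)
    return sum(r * p for r, p in zip(readings, precisions)) / total_weight

# Example: three temperature sensors; the more precise readings dominate the result.
fused = fuse_raw_readings([20.1, 19.8, 22.5], [0.9, 0.8, 0.3])
```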
3 Defining Trust in Knowledge Processing

The main question in our work is how knowledge processing systems can handle and work with trust. In this context, the first step is to find a definition of trust that is suitable for this scientific domain. In this section we investigate different applicable models for measuring or determining trust. To this end, we describe three existing approaches: the binary trust model, the probabilistic trust model and the opinion-space trust model.

3.1 Trust, Certainty and Precision in Knowledge Processing

To the best knowledge of the authors, there is no related work dealing with this topic directly – neither for processing trust and certainty, nor for the aggregation of trust, (un)certainty or precision. A good approach for measuring trust is given in [7]. Recent research on modeling uncertainty is presented in [15] and in [6]. The propagation/fusion of (sensor) precision values has been evaluated in recent publications, as stated in Section 2.4. Some of the investigated propagation/fusion methods for sensor precision values also seem quite promising for application to trust. Nevertheless, we focus on models that cover only trust values, as presented in the following sections. Another approach is deriving a trust value from a given precision, which will be covered in our future work.

3.2 Binary Trust Model

One of the easiest ways to represent trust values in an understandable and applicable way is to use a binary trust model. In this model a trust value can only be 0 or 1; the only differentiation is therefore between fully trusting a subject (trust = 1) and not trusting it at all (trust = 0). In our opinion, this model is very hard to apply in a real-world domain, because a trust value of 1 is hardly ever justified. The possible states in the binary trust model are defined in Formula 1.

T = 0 ∨ T = 1    (1)

Formula 1: Boundaries for the scope of trust in the binary trust model.

3.3 Probabilistic Trust Model

A very application-oriented and realistic way of representing trust is the use of a probabilistic trust model. In this model the possible trust values range from 0 to 1 and can, for example, be read as percentages. Not trusting a subject (trust = 0) and fully trusting a subject (trust = 1, or 100%) are interpreted as in the binary trust model, but the grading can be finer, as there are (theoretically) infinitely many states of trust between 0 and 1 (or 100%). The range of possible trust states in the probabilistic trust model is given in Formula 2.

0 ≤ T ≤ 1    (2)

Formula 2: Boundaries for the scope of trust in the probabilistic trust model.

3.4 Opinion-Space Trust Model

A well-developed model for measuring trust values is the opinion-space model by Audun Jøsang and S. J. Knapskog from 1998: "A Metric for Trusted Systems" [14]. They introduce an evidence space and an opinion space, two equivalent models for representing human beliefs, which can be summarized as trust in their model. We focus on the opinion-space trust model, which consists of the values belief b, disbelief d, and uncertainty u. These three values together represent the determined trust. Their sum is always 1, so the interpretation of trust has to be clarified for the current domain of application. The opinion-space trust model is shown in Figure 1.

Fig. 1: The opinion-space trust model / opinion triangle from [14].

b + d + u = 1,  (b, d, u) ∈ [0, 1]³    (3)

Formula 3: Boundaries for the scope of trust in the opinion-space model.
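To make the differences between the three models concrete, the following sketch represents each of them as a small Python class that validates the boundaries from Formulas 1–3. The class names and the validation style are our own illustration and are not taken from [14] or the other cited work.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BinaryTrust:
    """Binary trust model: a subject is either fully trusted (1) or not trusted (0)."""
    value: int

    def __post_init__(self):
        if self.value not in (0, 1):
            raise ValueError("binary trust must be 0 or 1")  # Formula 1

@dataclass(frozen=True)
class ProbabilisticTrust:
    """Probabilistic trust model: any degree of trust between 0 and 1."""
    value: float

    def __post_init__(self):
        if not 0.0 <= self.value <= 1.0:
            raise ValueError("probabilistic trust must lie in [0, 1]")  # Formula 2

@dataclass(frozen=True)
class Opinion:
    """Opinion-space trust model [14]: belief b, disbelief d, uncertainty u."""
    b: float
    d: float
    u: float

    def __post_init__(self):
        if not all(0.0 <= x <= 1.0 for x in (self.b, self.d, self.u)):
            raise ValueError("b, d and u must each lie in [0, 1]")
        if abs(self.b + self.d + self.u - 1.0) > 1e-9:
            raise ValueError("b + d + u must equal 1")  # Formula 3

# Example: an opinion expressing strong, but not full, trust in a subject.
opinion = Opinion(b=0.7, d=0.1, u=0.2)
```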
4 Specification of our Approach

In our recent research, we designed a convenient approach for propagating trust and precision values through multi-step knowledge processing systems, in which an additional factor, importance, was introduced and considered in the calculation. Our approach was evaluated and published at several conferences, e.g. in [10, 12, 13], and tested on artificial as well as real-world scenarios.

4.1 Principal Idea

In the following, we describe the idea of our approach. The main components are:

– a Source (S), which provides data; there can be multiple sources.
– Data (D), which is provided by one source; in our model, every source usually provides one or more data elements.
– a Knowledge Processing System (KPS), which processes data from one or more sources; each KPS itself produces new data as output.

The main values in our approach are:

– the Trust value (T) of a source (S), which defines how trustworthy the source is. The system has to be seen as one whole environment, hence the trust level of one source should always be the same.
– the Precision value (P) of a data element (D), which describes how precise, reliable, confident or steady the provided data is.
– the Importance value (I) of one input data element (D), decided by the current knowledge processing system (KPS) for the current step of computation.

Our approach is sketched in Figure 2; a minimal code sketch of these components follows below.

Fig. 2: Graphical description of our approach.

4.2 Scopes & Calculation

In our approach, the scopes of possible values are fixed as follows:

– The trust T of a source S has to be greater than 0 and less than or equal to 1, where the value of T for a given S has to be the same if the source is used multiple times; a higher value represents higher trust: 0 < T ≤ 1.
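As a complement to Figure 2, the following sketch models the components of Section 4.1 and the trust scope above as plain Python classes. All names are our own illustration; in particular, the way the output trust and precision are derived in run() is only a placeholder and not the propagation formula of our approach.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Source:
    """A source S with a fixed trust value T; the same T is used wherever S appears."""
    name: str
    trust: float

    def __post_init__(self):
        if not 0.0 < self.trust <= 1.0:
            raise ValueError("trust T must satisfy 0 < T <= 1")

@dataclass
class Data:
    """A data element D, provided by exactly one source, with a precision value P."""
    source: Source
    value: object
    precision: float  # how precise / reliable / steady the provided data is

@dataclass
class KnowledgeProcessingSystem:
    """A KPS that processes input data elements and produces new data as output.

    For each computation step, the KPS assigns an importance value I to every input.
    """
    name: str
    importances: List[float]                  # one importance value I per input
    process: Callable[[List[Data]], object]   # the actual processing step

    def run(self, inputs: List[Data]) -> Data:
        if len(inputs) != len(self.importances):
            raise ValueError("one importance value is required per input data element")
        result = self.process(inputs)
        # Placeholder only: the real derivation of output trust and precision from
        # T, P and I is given by the propagation formulas of the approach.
        out_source = Source(self.name, min(d.source.trust for d in inputs))
        return Data(out_source, result, min(d.precision for d in inputs))
```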