Reasoning about Simultaneous Change in Trust and Belief

Aaron Hunter
British Columbia Institute of Technology, Burnaby, BC, Canada
aaron_hunter@bcit.ca

22nd International Workshop on Nonmonotonic Reasoning, November 2-4, 2024, Hanoi, Vietnam

Abstract
We consider domains where agents can receive information from observations, and also from reports. When an agent receives a sequence of observations and reports, this can trigger two changes. First, the agent's beliefs should change to incorporate new information received from trusted agents and from observations. Second, the agent's trust in other agents should change, depending on the accuracy of their reports. In this paper, we address this problem incrementally by first considering trust change. We introduce trust states, and we demonstrate how trust states can be explicitly updated by a new class of trust change operators. Trust change postulates are introduced, and a representation result is presented for these operators. We then demonstrate how we can use trust change operators to make implicit updates to trust in other agents, based on the accuracy of the reports provided by other agents. Broadly, agents are more strongly trusted when they provide reports that agree with observation, and they are less strongly trusted when they provide reports that conflict with observation. We define combined trust-belief change operators that allow an agent to simultaneously update their trust in other agents while also revising their beliefs. We then introduce a software tool that automates this entire process. In other words, the software allows us to enter a sequence of reports and observations, and it returns both a new belief state and a new trust state. Applications and directions for future work are considered.

Keywords: Belief Revision, Trust, Knowledge Representation

1. Introduction

Belief revision occurs when an agent receives new information that must be incorporated with some previous beliefs. The most influential approaches to belief revision make the assumption that new information is "better" than the original beliefs. Hence, an agent should believe the new information is true while keeping as much of the initial belief state as consistently possible. However, when the new information comes from an agent that may not be honest, this is no longer sensible. In these cases, there are actually two different concerns related to trust. First, trust impacts the likelihood that we will believe reports from other agents. This problem has been addressed to some extent in the literature. The second concern is related to trust change. Our level of trust in other agents should change, depending on how often their reports agree with our observations. This problem has not been addressed extensively in the context of belief revision operators.
In this paper, we introduce a formal approach to modeling the simultaneous dynamics of trust and belief, along with a prototype system that calculates the result of belief revision when new information includes observations and reports from other agents.

Our approach is the following. We first introduce trust states, along with a set of postulates for trust change; these postulates specify basic conditions that we expect to hold when we determine that some agent should be more (or less) trusted. We then prove a representation result for operators satisfying these postulates. Next, we introduce new combined change operators, which specify both how beliefs and trust should change when an agent receives a sequence of reports followed by an observation. Finally, we introduce our implemented system for solving problems involving the joint revision of trust and belief.

This paper makes several contributions to the literature on belief change. First, trust change operators have not been explored in detail in the theory of belief change. As such, both the set of trust-change postulates and the representation result are new contributions in the area. Another contribution is the introduction of combined change operators for trust and belief. Our approach makes the relationship between belief change and trust change explicit, framed in the context of a classical model of belief revision. Finally, the work here introduces a practical software tool that opens up the opportunity to reason about practical applications related to reputation systems.(1)

(1) This paper is a combination of two existing papers. The content of sections 1-4, with some small modifications, appears in [1]. The system description in section 5 was previously published in [2].

2. Preliminaries

2.1. Belief Revision

The theory of belief revision is framed in the context of propositional logic. We assume an underlying propositional signature F, and we define propositional formulas in the usual manner using the connectives ∧, ∨, →, and ¬. A belief state K is a logically closed set of formulas. The most influential approach to belief revision is the AGM approach, where a belief state K and a formula φ are mapped to a new belief state K * φ [3]. AGM revision is defined with respect to a set of postulates, and the revision is calculated semantically by finding the most plausible states that are consistent with the new formula [4].

AGM revision can only be used for single-shot belief revision. If we want to perform iterated belief revision, then the dominant approach is the DP approach [5]. In DP revision, the initial beliefs are represented by an epistemic state. Although the exact composition of an epistemic state is flexible [6], one important feature is that it includes a total pre-order over states. Our formal approach is defined for DP revision, but the implemented tool is based on variations of the Dalal revision operator [7]. This is an operator where plausibility is defined in terms of the Hamming distance between states; it is the rare example of an AGM revision operator that can be iterated. When we discuss our theoretical framework, we give all definitions with respect to epistemic states in the DP sense. However, in our practical tool, epistemic states are specified by a set of formulas along with a distance-based operator that defines the total pre-order.

Most approaches to belief revision require that the new information must be believed; this is formalized by the so-called success postulate of AGM revision. As noted, this is not reasonable if information is reported by other agents. Two different approaches to this problem have been explored in the literature.
There has been work on knowledge-based trust, where we only consider the "part" of the information where the reporting agent has expertise [8, 9]. There has also been work on trust in terms of the reliability of information provided by different agents [10, 11, 12]. Throughout this paper, we assume an underlying revision operator which is then modified to capture some form of trust.

2.2. Trust

Trust has been studied extensively in distributed systems and network communication [13, 14]. However, it is not considered in many formal models of belief revision, where new information must be believed following revision. This is, of course, not reasonable if the new information is a report from another agent.

We distinguish between knowledge-based trust and reliability-based trust. Knowledge-based trust is concerned with the domain expertise of a reporting agent. For example, a doctor is trusted on medicine; they may not be trusted on other topics. Knowledge-based trust has been used as a means for ranking search results on the Internet [15]. There has also been work on knowledge-based trust in formal models of belief change [8, 16]. However, knowledge-based trust is not the focus of this paper.

We are concerned with reliability-based trust. An agent is reliable if their reports agree with known facts or direct observation. If an agent provides inaccurate reports, they will not be trusted. This has been addressed in [11], where a notion of conflict is introduced to determine which reports should be ignored. On the other hand, an agent that is not initially trusted may earn trust by continually providing accurate reports. This problem has not been directly addressed in connection with belief revision. We remark that honesty is one factor related to reliability, but we do not assume agents are lying when a report is incorrect.

3. Trust Change

3.1. Motivating Example

Suppose we are investigating a crime scene and we can receive reports from two agents: Juan (J) and Alma (A). Juan is considered to be trustworthy, whereas Alma is not.

We are initially unsure if the door was forced open (F), and we believe that there is no crowbar in the house (¬C). So if our initial epistemic state is E, then B(E) should be the set of models of (F ∧ ¬C) ∨ (¬F ∧ ¬C). Now suppose that we receive a report from Alma that the door was forced open and there is a crowbar in the house. Since Alma is not trusted, this report does not initially trigger a belief change. Juan then reports that the door was not forced open and that there is no crowbar in the house. The most plausible states in our new epistemic state E′ are the models of ¬F ∧ ¬C.

Suppose that we now observe a crowbar in the living room. What should we believe now, and who should we trust? It seems like we should decrease our trust in Juan, since he has provided incorrect information. We also need to revisit our trust in Alma. She has provided a report that is consistent with our observation, so our trust in her should increase. We might even want to retroactively believe her initial report.

Reasoning about this kind of problem requires a model that keeps track of beliefs as well as trust in reporting agents. As information is acquired, we not only need to revise our beliefs; we also need to increase (resp. decrease) trust in agents that have provided accurate (resp. inaccurate) reports. In this paper, we introduce a family of combined trust-belief change operators for this purpose. We remark that this kind of reasoning does not only occur in commonsense problems; it is also the basis for trust in reputation systems [17, 18].

3.2. Graded Trust Change Operators

We introduce a model of trust change that is defined with respect to a set of agents A. Belief change will be added in section 4. We first define trust states, which are ranking functions that capture the trust held in other agents.

Definition 1. A trust state T is a function T : A → Z. We write α(T) = {A | T(A) ≤ 0}, and we refer to this as the set of trusted agents.

Informally, if T(A) < T(B), then A is trusted more than B. The set α(T) is similar to the set of "believed" states in an epistemic state, but there is an important difference. Although the agents in α(T) are all trusted, we can still rank them to determine which agents are most strongly trusted.

We now introduce a simple class of trust change operators. In the following definition, an agent literal is either A or -A, where A ∈ A.

Definition 2. A basic trust change operator is a function ⋆ that maps a trust state T and an agent literal L to a new trust state T ⋆ L.

Intuitively, T ⋆ A is the operation that occurs when A has done something that causes them to be more trusted. For example, if an agent provides a report that is consistent with direct observation, then we will increase trust in that agent. On the other hand, T ⋆ -A captures the situation where an agent becomes less trusted. This would occur, for example, when the agent has provided a report that is inconsistent with direct observation.
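To make Definitions 1 and 2 concrete, the following is a minimal Python sketch; the class and function names are our own and are not taken from the HBS codebase. Lower values denote stronger trust, and agent literals are encoded as (agent, bool) pairs, with (A, True) standing for A and (A, False) for -A.

    class TrustState:
        """A trust state T : A -> Z; lower values mean stronger trust."""

        def __init__(self, ranks):
            self.ranks = dict(ranks)

        def alpha(self):
            # alpha(T) = {A | T(A) <= 0}, the set of trusted agents
            return {a for a, rank in self.ranks.items() if rank <= 0}

        def __call__(self, agent):
            return self.ranks[agent]

    def naive_star(T, literal):
        # One basic trust change operator: strengthen or weaken by 1.
        agent, positive = literal
        ranks = dict(T.ranks)
        ranks[agent] += -1 if positive else 1
        return TrustState(ranks)

For the agents of the motivating example, TrustState({"J": -1, "A": 1}).alpha() returns {"J"}: only Juan starts out trusted.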
We give some desirable properties for basic trust change operators. The following postulates are all implicitly universally quantified over trust states T, T′ and agents A, B. For clarity, we use square brackets to write [T ⋆ A](A), which is the value assigned to A by the trust state T ⋆ A.

R1. [T ⋆ A](A) < T(A).
R2. [T ⋆ -A](A) > T(A).
R3. If B ≠ A, then T(B) = [T ⋆ A](B) and T(B) = [T ⋆ -A](B).

Postulate R1 says that, when an agent A becomes more trusted, the T-ranking for A decreases. Postulate R2 makes the dual statement for agents that become less trusted. Postulate R3 states that changing the trust level associated with an agent A does not affect the trust level of any other agent.

We also introduce two postulates to ensure that ⋆ treats all agents equally. In other words, the magnitude of the trust change is equal for all agents:

R4. T(A) − [T ⋆ A](A) = T(B) − [T ⋆ B](B).
R5. [T ⋆ -A](A) − T(A) = [T ⋆ -B](B) − T(B).

Finally, the change in trust induced by ⋆ is the same for all trust states; the magnitude of the trust change is determined by ⋆ and not by the initial trust state:

R6. T(A) − [T ⋆ A](A) = T′(A) − [T′ ⋆ A](A).
R7. [T ⋆ -A](A) − T(A) = [T′ ⋆ -A](A) − T′(A).

Taken together, these postulates define a class of basic trust change operators.

Definition 3. A basic trust change operator ⋆ that satisfies postulates R1-R7 is called a graded trust change operator.

Some basic properties follow immediately.

Proposition 1. Let ⋆ be a graded trust change operator. Then the following conditions hold: (1) if A ∈ α(T), then A ∈ α(T ⋆ A); and (2) if A ∉ α(T), then A ∉ α(T ⋆ -A).

Hence, trusted agents remain trusted when we use ⋆ to increase trust. The reverse holds for untrusted agents that lose trust. These properties are immediate consequences of R1 and R2, respectively.

The following proposition states that an agent can always become trusted after a finite number of trust strengthenings, and they can always become untrusted after a finite number of weakenings.

Proposition 2. If ⋆ is a graded trust change operator and A ∈ A, then there is some n such that A ∈ α(T ⋆ⁿ A). Similarly, there is some m such that A ∉ α(T ⋆ᵐ -A). (Here ⋆ⁿ denotes n successive applications of ⋆.)

This property is reminiscent of the key postulate for belief improvement operators [19]. This is not surprising, as graded trust change operators are also defined to induce incremental change.
3.3. Representation Result

We introduce a class of transformations on ranking functions over agents.

Definition 4. Let r : A → Z. If n ∈ N, then define the ranking functions r⁺(A, n) and r⁻(A, n) as follows:

[r⁺(A, n)](B) = r(B) + n, if B = A; r(B), otherwise.
[r⁻(A, n)](B) = r(B) − n, if B = A; r(B), otherwise.

Hence r⁺(A, n) increases the ranking of A and r⁻(A, n) decreases the ranking.

We can now give a representation result for graded trust change operators.

Proposition 3. The function ⋆ is a graded trust change operator if and only if there exist positive integers s, w such that:

T ⋆ L = T⁻(A, s), if L = A for some A ∈ A;
T ⋆ L = T⁺(A, w), if L = -A for some A ∈ A.

Proof. Suppose ⋆ is a graded trust change operator. Let T₀ be a trust state, and let A₀ be a particular agent. By R1, there is some s such that

[T₀ ⋆ A₀](A₀) + s = T₀(A₀).

By R4, it follows that [T₀ ⋆ A](A) + s = T₀(A) for all agents A. By R3, we also know that [T₀ ⋆ A](B) = T₀(B) for all B ≠ A. Moreover, by R6, we know that these equalities are actually true for all trust states T. Therefore T ⋆ A = T⁻(A, s); so the result holds for positive literals. By parallel reasoning, we can use postulates R2, R3, R5 and R7 to show that there is some w that validates the result for negative literals as well.

To prove the converse, suppose that we have two positive integers s, w that define ⋆ as in the statement. Then R1 holds because [T ⋆ A](A) + s = T(A) and s > 0. Similarly, R2 holds for w. Postulate R3 holds from the definition of the ⁺ and ⁻ operators, which only increment the ranking for one agent at a time. For any A, B and any T, T′, we have the following equalities:

T(A) − [T ⋆ A](A) = s = T(B) − [T ⋆ B](B)
T(A) − [T ⋆ A](A) = s = T′(A) − [T′ ⋆ A](A)

The first equality shows that R4 holds, and the second equality shows that R6 holds. We can prove that R5 and R7 hold through similar equalities, using w as the middle value. Hence, ⋆ is a graded trust change operator.

So graded trust change operators can be fully characterized by two positive integers: the strengthening constant s and the weakening constant w.

There are some interesting variations that we can give to characterize a larger set of basic trust change operators. The following is one such instance.

Proposition 4. A basic trust change operator ⋆ satisfies postulates R1-R3 and R6-R7 if and only if, for each agent A, there is a pair of positive integers s_A, w_A such that:

T ⋆ L = T⁻(A, s_A), if L = A for some A ∈ A;
T ⋆ L = T⁺(A, w_A), if L = -A for some A ∈ A.    (1)

We call such an operator a non-uniform graded trust operator. This proposition states that, if we omit postulates R4 and R5, then we have a class of operators that is characterized by strengthening and weakening constants that could be distinct for each agent. The proof is similar to that of Proposition 3.

Additional operators can be defined by modifying postulates R6 and R7. For example, we could model situations where trust is resilient by making trust decreases very small for strongly trusted agents. We leave a full exploration of such variations for future work.
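Proposition 3 means that implementing a graded trust change operator amounts to choosing the two constants. A sketch under the same assumptions as the earlier hypothetical TrustState example:

    def graded_star(s, w):
        # Build the graded operator characterized by constants s, w > 0
        # (Proposition 3).
        assert s > 0 and w > 0

        def star(T, literal):
            agent, positive = literal
            ranks = dict(T.ranks)
            if positive:
                ranks[agent] -= s   # T-(A, s): A becomes more trusted
            else:
                ranks[agent] += w   # T+(A, w): A becomes less trusted
            return TrustState(ranks)

        return star

A non-uniform operator in the sense of Proposition 4 would simply replace s and w with per-agent dictionaries.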
3.4. Iterated Trust Change

We have defined graded trust change operators for a single agent literal L. However, we will generally be interested in sequences of literals L = L₁, . . . , Lₙ. We will write T ⋆ L as a shorthand for T ⋆ L₁ ⋆ · · · ⋆ Lₙ. Each literal Lᵢ represents a single data point, indicating evidence that a particular agent should be more (or less) trusted. We adopt the following notation:

Lᵃ = |{A | A in L}|
Lᶜ = |{A | -A in L}|

Hence Lᵃ is the number of positive literals in L and Lᶜ is the number of negative literals in L. The a stands for agreement, while the c stands for conflict.

Proposition 5. Let ⋆ be a graded trust operator, defined by s and w. Then

[T ⋆ L](A) = T(A) − Lᵃs + Lᶜw,

where Lᵃ and Lᶜ count the occurrences of A and -A in L (by R3, no other literals affect the ranking of A).

This result follows directly from Proposition 3, since each increase or decrease is handled independently. So the iterated trust change over a sequence is just the aggregate of the individual trust change operations. As a result, for any operator ⋆, any sequence L, and any agent A, we can define the following value:

∆(⋆, L, A) = [T ⋆ L](A) − T(A).

Hence, ∆ represents the change in trust for agent A given the operator ⋆ and the sequence L. The properties of this change value are given below.

Proposition 6. Let ⋆ be a graded trust change operator. Then:

1. If Lᵃ = Lᶜ = 0, then ∆(⋆, L, A) = 0.
2. If Lᵃ = 0 and Lᶜ > 0, then ∆(⋆, L, A) > 0.
3. If Lᶜ = 0 and Lᵃ > 0, then ∆(⋆, L, A) < 0.
4. If Lᶜ = Mᶜ and Lᵃ > Mᵃ, then ∆(⋆, L, A) < ∆(⋆, M, A).
5. If Lᵃ = Mᵃ and Lᶜ < Mᶜ, then ∆(⋆, L, A) < ∆(⋆, M, A).

Item (1) asserts that trust in A does not change if A does not occur in L. Item (2) says that A becomes less trusted if they occur only in conflict literals, while item (3) says the reverse for agents that occur only in agreement literals. Item (4) compares different sequences. It says that, if two sequences include the same number of conflict literals, then the one with more agreement literals will have a more positive impact on trust for A. Item (5) makes a similar statement for the case where the number of agreement literals is the same. Proposition 6 summarizes the properties of aggregate trust change.
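Proposition 5 can be checked mechanically: computing ∆ from the counts agrees with folding the operator over the sequence literal by literal. The snippet below reuses the hypothetical TrustState and graded_star sketches above.

    from functools import reduce

    def delta(s, w, literals, agent):
        # Delta(star, L, A); by R3, only the literals over `agent` can
        # affect T(A), so only those occurrences are counted.
        agree = sum(1 for a, pos in literals if a == agent and pos)
        conflict = sum(1 for a, pos in literals if a == agent and not pos)
        return -agree * s + conflict * w

    T = TrustState({"J": -1, "A": 1})
    L = [("A", True), ("J", False)]     # strengthen Alma, weaken Juan
    star = graded_star(2, 2)
    assert reduce(star, L, T)("J") == T("J") + delta(2, 2, L, "J")  # both equal 1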
However, since s and w are not constrained, we cannot say anything specific about the aggregate change due to a sequence that includes both conflict and agreement.

Proposition 7. Let T be a trust state, let A ∈ A, and let L be any sequence of agent literals that contains at least one instance of A and at least one instance of -A. Then there are graded trust change operators ⋆₁, ⋆₂ and ⋆₃ such that

∆(⋆₁, L, A) < 0 = ∆(⋆₂, L, A) < ∆(⋆₃, L, A).

Hence, in the general case, there is no way to determine if ∆(⋆, L, A) is positive or negative. This flexibility allows us to define graded trust change operators that handle agreement and conflict very differently. For example, a single conflict might increase the trust ranking as much as a million agreements. So if L contains both a strengthening and a weakening for A, then we cannot say anything about whether or not A will be trusted unless we know the specific change operator.

4. Interacting Trust and Belief

4.1. Reports and Histories

We now move to the case involving both trust and belief. So we need a signature that includes both agents and properties of the world.

Definition 5. A multi-agent signature is a pair ⟨A, V⟩, where A is a set of agents and V is a propositional signature.

The important connection between agents and formulas is that agents provide reports, and the content of a report is a propositional formula.

Definition 6. A report is a pair (A, φ) where A ∈ A and φ is a formula over V. We write R = (A₁, φ₁), . . . , (Aₙ, φₙ) for a finite sequence of reports.

The problems that we address involve both reports and observations. We will normally be concerned with sequences of reports followed by an observation. This concept is formalized below.

Definition 7. A history-sensitive observation (hs-observation) is a pair ⟨R, φ⟩ where R is a report history and φ is a formula.

Defining belief change with respect to hs-observations allows us to consider how the observation φ informs the extent to which the reports in R should be incorporated. In order to represent an agent's beliefs along with their trust in other agents, we define the following notion of an epistemic trust state.

Definition 8. An epistemic trust state is a pair ⟨E, T⟩ where E is an epistemic state over V and T is a trust state over A.

Note that E and T are independent, but we will define change operators that impact them both at the same time.

4.2. A Family of Combined Change Operators

We now define combined change operators for trust and belief. The first step is to show how an hs-observation defines a sequence of trust change operations.

Definition 9. Let ⟨R, φ⟩ be an hs-observation where R = (A₁, φ₁), . . . , (Aₙ, φₙ). For each i ≤ n, let:

Lᵢ = Aᵢ, if φᵢ ⊭ ¬φ;
Lᵢ = -Aᵢ, if φᵢ |= ¬φ.

Let τ(⟨R, φ⟩) = L₁, . . . , Lₙ.

Hence, τ(⟨R, φ⟩) is a sequence of literals. The literal in position i is Aᵢ if the formula reported by Aᵢ in position i is consistent with φ. The literal in position i is -Aᵢ if the formula reported by Aᵢ in position i is inconsistent with φ. Intuitively, this sequence encodes how our trust in each agent should change given the hs-observation ⟨R, φ⟩; the agent should be more trusted if they have provided reports consistent with φ, and they should be less trusted if they have provided reports inconsistent with φ.

We use Definition 9 to overload the ⋆ operator, by allowing it to take an hs-observation as input. Specifically, we adopt the following notation:

T ⋆ ⟨R, φ⟩ = T ⋆ τ(⟨R, φ⟩).

Hence, when we are given an hs-observation as input to ⋆, we simply pass to the sequence of agent literals τ(⟨R, φ⟩). This sequence of literals captures the number of conflict and agreement reports that each agent has provided.
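The map τ is easy to compute once consistency with the observation can be tested. The following sketch does this by brute-force model enumeration over a small signature; the formula representation (Python predicates over assignment dicts) and all names are our own, not part of HBS.

    from itertools import product

    def models(formula, variables):
        # All assignments over `variables` (as dicts) satisfying `formula`,
        # where `formula` is a Python predicate on an assignment dict.
        return [dict(zip(variables, vals))
                for vals in product([False, True], repeat=len(variables))
                if formula(dict(zip(variables, vals)))]

    def tau(reports, obs, variables):
        # Definition 9: the report (A_i, phi_i) yields the literal A_i when
        # phi_i is consistent with the observation, and -A_i otherwise.
        obs_models = [tuple(sorted(m.items())) for m in models(obs, variables)]
        lits = []
        for agent, phi in reports:
            phi_models = [tuple(sorted(m.items())) for m in models(phi, variables)]
            consistent = any(m in obs_models for m in phi_models)
            lits.append((agent, consistent))
        return lits

For the crime-scene history, tau([("A", lambda v: v["F"] and v["C"]), ("J", lambda v: not v["F"] and not v["C"])], lambda v: v["C"], ["F", "C"]) returns [("A", True), ("J", False)]: Alma is strengthened and Juan is weakened.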
We need one more piece of notation. Given a report history R and a set of agents β, we write R↾β as a shorthand for the sequence of formulas φᵢ where Aᵢ ∈ β. So if β represents the set of trusted agents, then R↾β represents the sequence of formulas reported by trusted agents.

We can now define an approach to combined change for trust and belief.

Definition 10. Let ⟨E, T⟩ be an epistemic trust state, let * be a DP operator, and let ⋆ be a graded trust change operator. Then ∘ is defined as follows:(2)

⟨E, T⟩ ∘ ⟨R, φ⟩ = ⟨E * R↾β * φ, T ⋆ ⟨R, φ⟩⟩, where β = α(T ⋆ ⟨R, φ⟩).

(2) Note that ∘ actually depends on * and ⋆, so it would be more appropriate to write ∘*,⋆. But this notation is cumbersome, so we omit the subscripts unless they are required to reduce ambiguity.

Hence, the new trust state is obtained by strengthening and weakening trust in agents, based on whether they have provided reports that are consistent with the observation φ. The new epistemic state is obtained by iteratively revising by all reports from trusted agents, and then revising by the observation φ. Note that the set of trusted agents used for this revision is determined after any trust changes resulting from the given sequence of reports.
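Definition 10 composes the pieces sketched so far. A minimal rendering under the same assumptions, with revise standing in for an arbitrary DP operator *:

    from functools import reduce

    def combined_change(E, T, reports, obs, revise, star, variables):
        # <E, T> o <R, phi> from Definition 10: update trust first, then
        # revise beliefs by the reports of the post-change trusted agents,
        # and finally by the observation itself.
        T_new = reduce(star, tau(reports, obs, variables), T)  # T * <R, phi>
        beta = T_new.alpha()                                   # trusted agents
        for agent, phi in reports:                             # E * (R | beta)
            if agent in beta:
                E = revise(E, phi)
        return revise(E, obs), T_new                           # ... * phi

The observation-consistent variant ∘OC introduced in Section 4.3 below would differ only in the filter, additionally skipping every report that is inconsistent with the observation.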
We illustrate by returning to our motivating example.

Example 1. We can further formalize our motivating example involving Juan (J) and Alma (A) at the crime scene. Let T be the trust state where T(J) = −1 and T(A) = 1, which reflects our assumption that Juan is initially trusted, while Alma is not. Suppose that ⋆ is the operator defined by the constant 2 for both strengthening and weakening. Recall that Alma reports F ∧ C, then Juan reports ¬F ∧ ¬C, then we observe C. So we need to calculate the following:

⟨E, T⟩ ∘ ⟨(A, F ∧ C), (J, ¬F ∧ ¬C), C⟩.

From Definition 10, our new trust state T′ assigns T′(J) = 1 and T′(A) = −1. So after all events, only Alma will be trusted. This also means that the final epistemic state will be E * (F ∧ C) * C. Hence, we will not only believe the crowbar is in the house, but we will also believe the door was forced open. This is because Alma's report has been incorporated, since she is now trusted.

Note that we can get a different result if we return to the example with one small tweak.

Example 2. Consider the same example, except that T(J) = −3 while T(A) = 1. In this case, the new trust state T′ assigns T′(J) = −1 and T′(A) = −1. So, despite the fact that Juan has provided an erroneous report, he is still trusted. The intuition here is that Juan has built such a strong reputation that he will still be trusted after a single mistake. In this case, the final epistemic state will be E * (F ∧ C) * (¬F ∧ ¬C) * C. Following this sequence of revisions, we will believe the crowbar is in the house, but we will not believe the door was forced open. This holds despite the fact that Alma has reported otherwise, because Juan is still a trusted source.

Many other small tweaks could be made to get different results. For example, if the ⋆ operator only strengthens trust with a constant s = 1, then Alma will not be trusted despite the accuracy of her report. In all of these cases, the basic framework handles the subtle distinctions without any difficulty.

4.3. An Observation-Consistent Variation

A report filter is any function that maps an hs-observation ⟨R, φ⟩ to a new hs-observation including a subsequence of the reports from the original. A report filter is observation-consistent if every report in the output subsequence is consistent with φ.

Proposition 8. Let T be a trust state. The report filter ↾α(T) is not guaranteed to be observation-consistent.

This result is important, because it means the ∘ operator is based on a filter that can include reports that are inconsistent with the observation. We have seen this in Example 2, where Juan's report influences the final epistemic state despite the fact that it is inconsistent with the observation. This seems reasonable in this case, because the report is a conjunction and we end up keeping only the "consistent part." However, in some applications, it would be preferable to discard all inconsistent reports, regardless of how much the sender is trusted. We can enforce this condition by providing a modified definition of ∘.

Let ⟨R, φ⟩ be an hs-observation and let γ be the set of agents that have provided a report in R that is inconsistent with φ. The following is immediate.

Proposition 9. Let T be a trust state. For every hs-observation, the report filter ↾α(T)∖γ is observation-consistent.

This simple change gives us a variation of ∘ that is based on an observation-consistent filter. Specifically, we can modify the definition of ∘ to define ∘OC as follows:

⟨E, T⟩ ∘OC ⟨R, φ⟩ = ⟨E * R↾β∖γ * φ, T ⋆ ⟨R, φ⟩⟩,

where β is defined as it was previously. Hence ∘OC is just like ∘, except that it filters out all reports inconsistent with φ before performing belief revision.
5. System Description

5.1. Overview

The Honesty-based Belief revision System (HBS) is a tool written in Python to automatically solve belief revision problems.(3) The system allows the user to specify an initial belief state, along with a sequence of reports and observations. The system has a graphical user interface for interactive mode, where the user enters the reports and observations directly; it also allows users to enter the required information through external files. In this section, we describe the main functionality of the software.

(3) Software available at https://github.com/amhunter/HBS.

5.2. Interface

The interface for HBS is shown in Figure 1. When HBS is launched, it will initially be set to the Formula Entry tab. While the Initial state radio button is highlighted, this allows the user to enter a set of formulas that define the initial belief state. In interactive mode, formulas are entered with a simple graphic interface that prevents syntax errors. The formula being defined is displayed above the entry box, and it is entered as part of the initial belief state when the user clicks on Add Formula.

[Figure 1: HBS Interface]

The initial belief state can be modified iteratively by adding more formulas, and a panel on the right will display the set of states believed possible. The radio button at the bottom can be toggled to add observations or reports. For observations, the formula is added to the right panel and labelled as an observation. For reports, an agent name must also be provided. All of the items listed in the right panel can be deleted at any time by clicking the X in the corner. As such, the interface allows the user to specify an expression of the form:

K * (A₁, φ₁) * ψ₁ * · · · * (Aₙ, φₙ) * ψₘ.

The user can click Calculate Output to determine the new belief state after the given sequence of operations. Figure 2 shows the contents of the right panel after entering the following:

Cn[¬(A ∧ ¬B)] * (alice, A ∨ B) * (bob, A ∧ B) * (A → ¬B)

[Figure 2: HBS Output]

The output at the bottom indicates the new belief state; we explain below how this is determined. We remark that the display can be modified to a simplified form; this will hide the lists of truth values for variables. This can be helpful for large examples with many variables.

Note that the main interface also includes the Agent Trust Entry tab. In this tab, the user can enter the initial trust ranking for all agents.
5.3. Trust

Trust change in HBS is based on a set of parameters. There are six different parameters available, listed in the following table.

Obs. Decrease (to−)           Rep. Decrease (tr−)
Obs. Increase (to+)           Rep. Increase (tr+)
No Trust Threshold (min)      Difference Threshold (L)

In HBS, the default value for all parameters is 1, but these values can be modified either through the interface or by editing the values in the revision.py file.

Informally, to− and to+ are the amounts to decrease (resp. increase) the trust in a reporting agent when they have provided a report that conflicts (resp. agrees) with an observation. These parameters represent the trust strengthening and trust weakening parameters from the previous section. Similarly, min is a flexible parameter that allows us to specify the set of trusted agents; in our formal theory this was fixed at 0.

The remaining parameters are new, and they are concerned with conflicting reports. The parameter tr− is a new change constant, which is the amount that T(A) should decrease if A provides a report that conflicts with a report from a more trusted agent. Similarly, tr+ is the amount that T(A) should increase if A provides a report that agrees with a more trusted agent. However, we do not necessarily make these changes in all cases. We only make these changes if the trust differential between A and the other agent is greater than the difference threshold L.

Essentially, these parameters allow the implementation to deal with a slightly more complex notion of trust change. Changes in trust no longer depend only on conflict with observation; agents can now become more (or less) trusted based on how much they agree with other reporting agents. However, we remark that we essentially have graded trust change operators if we restrict tr− and tr+ to be 0.

The change in trust that occurs with this full set of parameters is given in the following definition.

Definition 11. Let T be a trust state and let P = ⟨to+, to−, tr+, tr−, L⟩ be a tuple of natural numbers (called the trust parameter). Define T · (⟨R, φ⟩, P) = T′, where T′(A) is specified by applying the following procedure:

1. Initially, set T′(A) = T(A). Update as follows by comparison with φ:
   • If there is a report (A, ψ) in R such that φ |= ¬ψ, then T′(A) = T(A) − to−.
   • If there is a report (A, ψ) in R such that φ |= ψ, then T′(A) = T(A) + to+.
2. Next, compare with reports. For all agents B with T(A) < T(B), if T(B) − T(A) > L, then:
   • If there are reports (A, ψ) and (B, τ) in R with τ |= ¬ψ, then T′(A) = T(A) − tr−.
   • If there are reports (A, ψ) and (B, τ) in R with τ |= ψ, then T′(A) = T(A) + tr+.
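The procedure of Definition 11 is straightforward to transcribe. Below is a sketch using the model-enumeration helpers from Section 4; note that, as Definition 11 is written, larger values indicate stronger trust (conflicts subtract, agreements add), which is the opposite orientation from the rankings of Section 3. All names here are our own.

    def entails(a, b, variables):
        # a |= b iff every model of a is a model of b (truth-table check).
        return all(b(m) for m in models(a, variables))

    def update_trust(T, reports, obs, P, variables):
        # The procedure of Definition 11; T is a dict from agents to ints.
        to_plus, to_minus, tr_plus, tr_minus, L = P
        neg = lambda phi: (lambda v: not phi(v))
        T_new = dict(T)
        for A, psi in reports:
            # Step 1: compare each report with the observation.
            if entails(obs, neg(psi), variables):    # phi |= ~psi: conflict
                T_new[A] = T[A] - to_minus
            elif entails(obs, psi, variables):       # phi |= psi: agreement
                T_new[A] = T[A] + to_plus
        for A, psi in reports:
            # Step 2: compare with sufficiently more trusted reporters.
            for B, chi in reports:
                if T[B] - T[A] > L:
                    if entails(chi, neg(psi), variables):
                        T_new[A] = T[A] - tr_minus
                    elif entails(chi, psi, variables):
                        T_new[A] = T[A] + tr_plus
        return T_new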
5.4. Belief Revision Operators

We specify a default "idealized" revision operator. The idealized operator is the revision operator that would be used if we only had to incorporate observations. In HBS, the default revision operator is the Dalal operator based on the Hamming distance between states. However, this is not the only revision operator that HBS can capture.

Under the Edit menu, the user can change to a weighted Hamming distance operator. In this case, a weight needs to be associated with each variable, and these weights are used in the distance calculation. It is easy to see that this approach to revision can capture many operators beyond the standard Dalal operator. For example, if we use powers of two for the weights, we can essentially specify a priority ordering over variables. Hence, the weighted Hamming distance can be used to capture any parametrized difference operator [20]. This is a natural class of revision operators suitable for iteration, with nice computational properties.
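Over the model-set representation used in our earlier sketches, the (weighted) Dalal operator can be reconstructed in a few lines. This is our own reconstruction of the operator described above, not the HBS source.

    def weighted_hamming(u, v, weights):
        # Weighted count of the variables on which assignments u and v differ.
        return sum(weights[x] for x in u if u[x] != v[x])

    def dalal_revise(belief_models, phi, variables, weights=None):
        # Keep the models of phi at minimal (weighted) Hamming distance from
        # the current belief set; uniform weights give the standard operator.
        weights = weights or {x: 1 for x in variables}
        phi_models = models(phi, variables)
        if not phi_models or not belief_models:
            return phi_models
        def dist(m):
            return min(weighted_hamming(m, k, weights) for k in belief_models)
        d_min = min(dist(m) for m in phi_models)
        return [m for m in phi_models if dist(m) == d_min]

Since the output is again a set of models, the operator iterates directly; a call like dalal_revise(E, phi, variables) can serve as the revise argument in the combined-change sketch of Section 4.2.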
5.5. Calculation

Suppose that a series of reports followed by a single observation has been entered, and the user presses Calculate Output. Then the following calculations are performed:

1. First, the trust values are updated in accordance with Definition 11.
2. The new trust values are used to determine the subsequence of formulas for revision.
3. The revision is performed based on the selected revision operator.
4. The new belief state is displayed; it will be used for future revisions.

If the sequence of reports and observations involves several observations (rather than a single terminal observation), then steps (1) and (2) involve multiple sweeps through the reports to remove those that are inconsistent with any observation.

Hence, HBS takes a sequence of reports and a single observation, and it returns a new epistemic trust state. It includes the calculation of graded trust change operators as a special case, but it also allows a wider range of operators to be specified. Of course, when we use non-zero values for the new parameters, we no longer have a clear syntactic definition for the operators.

We remark also that HBS can be used to solve iterated change problems with multiple observations. Since all of the included revision operators support iteration, we simply need to update the trust levels for each agent after every observation before continuing with more reports and observations.

5.6. Creating Test Cases

Although the interface allows the user to enter a long sequence of formulas, it can be cumbersome to do so. In order to make the software easier to use, there is also a mechanism to load test cases from a text file in the following format:

(Av!A)^B/!A^!B
1,2
alice:AvB
bob:A^B
:A>!B

The first line specifies a set of formulas, separated by slashes. The second line gives weights to all variables, in the order that they appear in the input. These values are for the weighted Hamming distance; they should all be set to 1 if Dalal revision is preferred. The remaining lines specify reports in the form "agent:formula." If the agent part is left blank, then the line represents an observation. When a test case is loaded from a file in this format, it automatically populates the right panel with all of the information. This is a much easier way to enter examples involving a long sequence of reports.
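A loader for this format takes only a few lines of Python. The parsing below follows the description above; the real HBS loader may differ in details, and the function name is ours.

    def load_test_case(path):
        with open(path) as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        initial_formulas = lines[0].split("/")            # formulas split on '/'
        weights = [int(t) for t in lines[1].split(",")]   # per-variable weights
        events = []
        for ln in lines[2:]:
            agent, _, formula = ln.partition(":")
            # A blank agent part marks an observation rather than a report.
            events.append(("report", agent, formula) if agent
                          else ("observation", formula))
        return initial_formulas, weights, events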
6. Discussion

6.1. Related Work

Trust has been explored in a variety of formal settings involving agents with limited beliefs exchanging information [10, 8, 12, 16, 9]. The work in this paper differs in that we focus explicitly on the interaction of a belief revision operator with a dynamic notion of trust that changes as reports are received. The work in this paper is also distinguished by the fact that we provide an implemented system for experimentation with trust and belief change.

There has been previous work on the interaction between observations and reports in [11], where a notion of conflict is used to determine which reports should be ignored. However, the notion of conflict introduced there is restricted, as it is based solely on counting inconsistent reports. More importantly, the framework does not include trust rankings or any model of trust dynamics. There has also been previous work on trust revision, in the tradition started with [21]; however, this work does not consider any direct connection with belief change operators.

Perhaps the closest work in the literature to our approach is in [22], where the authors argue that trust change and belief change cannot be separated. They introduce a new class of information revision operators that operate on a hybrid state which includes both beliefs and trust. While the motivation of this work is similar to ours, the framework is quite different. Whereas information revision operators are built from scratch to model a single change operation, we build our approach from independent change operators for beliefs and trust. Hence, we maintain that belief and trust change are distinct operations; however, they need to be combined to effectively incorporate reports and observations. In future work, we intend to explore the formal relationship between the two approaches, and the extent to which information revision can be embedded in our work.

6.2. Speculative Application

We briefly introduce a potential application for our framework. We propose that graded trust change operators and HBS are well suited for reasoning about reputation systems, where we have a seller that has been rated based on a series of transactions. The information provided by ratings need not simply be judgments about the "goodness" of the seller; they might include information about the product, the promptness of delivery, or anything else about the transaction. All of this can be encoded in a suitable logical theory. When we read a series of reviews and then make a purchase, we are able to simultaneously update our beliefs and our trust in the ratings through the framework introduced in this paper. Automating this process, we can implement a simple reputation system that maintains a sound trust ranking over all agents providing reviews.

The automation step here is actually not difficult. Using HBS, we can solve report revision problems by encoding them in the format specified:

(Av!B)^(B^!C)
Jordan:AvB, Alma:A^B, obs:AV!B

There is a problem here in that real reputation systems often include thousands of reviews; the current iteration of HBS does not use a fast revision solver, so it runs slowly on problems involving many formulas. However, it is possible to significantly improve the running time for revision solvers by using a competition-level ALLSAT solver [23]. In the next iteration of the software, we will use this approach to produce a tool that is useful for reasoning about much larger problems. We leave the application to reputation systems for future work.

6.3. Conclusions

We introduced graded trust change operators, which let us incrementally change how much we trust information-providing agents. We then introduced a set of trust-change postulates, and proved that every operator satisfying the postulates is defined by two values: a strengthening constant and a weakening constant. Trust change operators can be combined with DP belief revision operators to define a new class of combined report-revision operators. These operators take a sequence of reports along with an observation as input, and they simultaneously revise the agent's beliefs and modify their trust in reporting agents. The result is a single operator that combines two rational change functions to ensure both beliefs and trust are changed appropriately.

There are many directions for future work. As noted, the current implementation could be improved in terms of efficiency. The current version of HBS is a proof of concept that is only suitable for small toy problems, due to the computational complexity of belief revision.

At a theoretical level, we remark that basic trust change operators are quite restrictive in that they can only take a literal as input. In future work, we would like to extend the vocabulary of "trust formulas" to permit updates by more complex trust statements. Another direction for future research is to axiomatize the interaction properties of report revision operators. Right now, we know that the revision part satisfies the DP postulates and the trust part satisfies our new trust change postulates. But it would be useful to provide a further set of interaction postulates to describe the properties that must hold when the operators are combined. We are also interested in extending the current framework, so that it can model the interaction between knowledge-based trust and honesty-based trust.

References

[1] A. Hunter, Combined change operators for trust and belief, in: Proceedings of the 37th Australasian Joint Conference on Artificial Intelligence, 2024.
[2] A. Hunter, A. Iglesias, A tool for reasoning about trust and belief, in: Proceedings of the International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR), 2024, pp. 127-135.
[3] C. E. Alchourrón, P. Gärdenfors, D. Makinson, On the logic of theory change: Partial meet functions for contraction and revision, Journal of Symbolic Logic 50 (1985) 510-530.
[4] H. Katsuno, A. Mendelzon, Propositional knowledge base revision and minimal change, Artificial Intelligence 52 (1992) 263-294.
[5] A. Darwiche, J. Pearl, On the logic of iterated belief revision, Artificial Intelligence 89 (1997) 1-29.
[6] N. Schwind, S. Konieczny, R. P. Perez, Darwiche and Pearl's epistemic states are not total preorders, in: Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), 2022.
[7] M. Dalal, Investigations into a theory of knowledge base revision, in: Proceedings of the National Conference on Artificial Intelligence (AAAI), 1988, pp. 475-479.
[8] R. Booth, A. Hunter, Trust as a precursor to belief revision, Journal of Artificial Intelligence Research 61 (2018) 699-722.
[9] J. Singleton, R. Booth, Who's the expert? on multi-source belief change, in: Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), 2022.
[10] Y. Ammar, H. O. Ismail, Trust is all you need: From belief revision to information revision, in: Proceedings of the 17th European Conference on Logics in Artificial Intelligence (JELIA), 2021, pp. 50-65.
[11] A. Hunter, Reports, observations, and belief change, in: Proceedings of the 36th Australasian Joint Conference on Artificial Intelligence, 2023, pp. 54-65.
[12] D. Jelenc, L. H. Tamargo, S. Gottifredi, A. J. García, Credibility dynamics: A belief-revision-based trust model with pairwise comparisons, Artificial Intelligence 293 (2021) 103450.
[13] A. Salehi-Abari, T. White, Towards con-resistant trust models for distributed agent systems, in: Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), 2009, pp. 272-277.
[14] J. Wang, Z. Yan, H. Wang, T. Li, W. Pedrycz, A survey on trust models in heterogeneous networks, IEEE Communications Surveys and Tutorials 24 (2022) 2127-2162.
[15] X. Dong, E. Gabrilovich, K. Murphy, V. Dang, W. Horn, C. Lugaresi, S. Sun, W. Zhang, Knowledge-based trust: Estimating the trustworthiness of web sources, Proceedings of the VLDB Endowment 8 (2015).
[16] F. Liu, E. Lorini, Reasoning about belief, evidence and trust in a multi-agent setting, in: PRIMA 2017: Principles and Practice of Multi-Agent Systems - 20th International Conference, volume 10621, 2017, pp. 71-89.
[17] T. D. Huynh, N. R. Jennings, N. R. Shadbolt, An integrated trust and reputation model for open multi-agent systems, Autonomous Agents and Multi-Agent Systems 13 (2006) 119-154.
[18] R. Govindaraj, P. Govindaraj, S. Chowdhury, D. Kim, D.-T. Tran, A. N. Le, A review on various applications of reputation based trust management, International Journal of Interactive Mobile Technologies 15 (2021).
[19] S. Konieczny, R. P. Pérez, Improvement operators, in: Eleventh International Conference on Principles of Knowledge Representation and Reasoning (KR'08), 2008, pp. 177-186.
[20] P. Peppas, M.-A. Williams, Parametrised difference revision, in: Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR), 2018, pp. 277-286.
[21] J. Ma, M. A. Orgun, Trust management and trust theory revision, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 36 (2006) 451-460.
[22] Y. Ammar, H. O. Ismail, Trust is all you need: From belief revision to information revision, in: Proceedings of the 17th European Conference on Logics in Artificial Intelligence (JELIA), 2021, pp. 50-65.
[23] A. Hunter, J. Agapeyev, An efficient solver for parametrized difference revision, in: Proceedings of the Australasian Conference on Artificial Intelligence, 2019, pp. 143-152.