    Proceedings of the 6th Workshop on Formal and Cognitive Reasoning




             On the Joint Revision of Belief and Trust

                        Ammar Yasser¹ and Haythem O. Ismail²,¹

                         ¹ German University in Cairo, Egypt
                         ² Cairo University, Egypt
                  {ammar.abbas,haythem.ismail}@guc.edu.eg



       Abstract. Trust plays a vital role when it comes to beliefs. Deciding what to
       believe and what not to believe depends, to a large extent, on trust. When an
       information source we trust conveys a piece of information, we are more likely
       to believe it. Conversely, we are more reluctant to believe information
       communicated by sources we do not trust. In general, trust guides us while
       revising our beliefs. Despite the existence of great bodies of literature on
       trust and belief revision separately, a formal treatment of their intertwined
       relationship is lacking. Hence, in this paper, we argue that trust revision
       and belief revision are inseparable processes. To provide a formal treatment
       of the joint revision of beliefs and trust, we address issues concerning the
       formalization of trust in information sources and provide AGM-style postulates
       for the rational joint revision of the two attitudes, which we refer to as
       information revision.

       Keywords: Belief Revision · Trust Revision · Information Revision.


1    Introduction

Trust acts, even if we are not aware of it, as an information filter. We are willing
to believe information communicated by sources we trust, cautious about information
from sources we do not trust, and suspicious about information from sources we
mistrust. Trust and mistrust are constantly revised; we gain more trust in information
sources the more they prove themselves to be reliable, and our trust in them erodes as
they mislead us time after time. Such attitudes allow us to be resilient, selective,
and astute. If exhibited by logic-based agents, these same attitudes would make them
less susceptible to holding false beliefs and, hence, less prone to excessive belief
revision. Moreover, by revising trust, these agents will not forever be naively
trusting nor cynically mistrusting.
     Trust has been thoroughly investigated within multi-agent systems
[3,9,20,19,29,22, for instance], psychology [32,8,15, for instance], and philosophy
[17,14,27, for instance]. Crucially, it was also investigated in the logic-based
artificial intelligence (AI) literature by several authors [4,6,24,22,16,23].
Nevertheless, we believe that several issues are left unaddressed by the logical
approaches. Intuitively, trust is intimately related to misleading, on one hand, and
belief revision, on the other. While several logical treatments of misleading are to
be found in the literature [31,7,30,18, for instance], the relation of misleading to
trust erosion is often left unattended or delegated to future work. On the other hand,
the extensive literature on belief revision [1,11, for example], while occasionally
addressing trust-based revision of beliefs [26,28,2], does not have much to say about
the revision of trust (but see [24,26] for minimal discussions) and, as far as we
know, contains no systematic study of jointly revising belief and trust.
The goal of this paper is, hence, twofold: (i) to motivate why belief and trust revision are
intertwined and should be carried out together, and (ii) to propose AGM-style postulates
for the joint revision of trust and belief.
    The paper is structured as follows. Section 2 describes what we mean by trust,
information, and information sources; it also highlights the intuitions behind joint
trust and belief revision. In Section 3, we present information states, a generic
structure for representing information, and investigate its properties. Section 4
presents a powerful notion of relevance to which information structures give rise.
Finally, in Section 5, the formal inter-dependency of belief and trust is explored,
culminating in AGM-style postulates for joint belief-trust revision. (For a more
detailed discussion, the reader is kindly directed to [33].)


2    Trust and Belief

It is often noted that trust is not a dyadic relation between the truster and the
trustee, but a triadic relation involving an object of trust [27]. You trust your
doctor with your health, your mechanic with your car, your parents to unconditionally
believe you, and your mathematics professor to tell you only true statements of
mathematics. Our investigation of the coupling of belief and trust lets us focus only
on trust in sources of information. Trust in information sources comes in different
forms. Among Demolombe’s [5,25] different types of trust in information sources, we
focus on trust in sincerity and competence, since they are the two types relevant to
belief revision and to realistic information sources. (Trust in completeness, for
example, is unrealistic since it requires that the source informs about P whenever P
is true.) A sincere information source is one which (if capable of forming beliefs)
only conveys what it believes; a competent source is one which only conveys what is
true. In this paper, we consider trust in the reliability of information sources,
where a source is reliable if it is both sincere and competent. (As suggested by [18],
breach of sincerity and breach of competence should perhaps have different effects on
belief revision; for simplicity, we do not consider this here.) Note that we do not
take information sources to only be cognitive agents. For example, a sensor (or
perception, in general) is a possible source of information. For information sources
which are not cognitive agents, reliability reduces to competence.
     Rational agents constantly receive information, and are faced with the question of
whether to believe or not to believe. The question is rather simple when the new in-
formation is consistent with the agent’s beliefs, since no obvious risk lies in deciding
either way. Things become more interesting if the new information is inconsistent with
what the agent believes; if the agent decides to accept the new information, it is faced
with the problem of deciding on which of its old beliefs to give up in order to maintain
consistency. Principles for rationally doing this are the focus of the vast literature on
belief revision [1,12, for example].
     It is natural to postulate that deciding whether to believe, and how to revise
our beliefs (the process of belief revision), are influenced by how much we trust the source
of the new piece of information. (Also see [26,28,2].) In particular, in case of a conflict
with old beliefs, how much we trust in the source’s reliability and how much evidence
we have accumulated for competing beliefs seem to be the obvious candidates for guid-
ing us in deciding what to do. Thus, rational belief revision depends on trust.
    But things are more complex. For example, suppose that information source σ1,
whom we trust very much, conveys φ to us. φ is inconsistent with our beliefs but,
because we trust σ1, we decide to believe φ and give up ψ which, together with other
beliefs, implies ¬φ. In this case, we say that φ is a refutation of ψ. So far, this is
just belief revision, albeit one which is based on trust. But, having stopped
believing ψ, we may find it rational to revise, and decrease, our trust in σ2 who
earlier conveyed ψ to us. Moreover, suppose that φ, together with other beliefs,
implies our old belief ξ. We say that φ is a confirmation of ξ. This confirmation may
trigger us to revise, and increase, our trust in σ3 who is the source of ξ. Thus,
trust revision depends on belief revision. In fact, belief revision may be the sole
factor that triggers rational trust revision in information sources.
    We need not stop there though. For, by reducing our trust in σ2’s reliability, we
are perhaps obliged to stop believing (or reduce our degree of belief in) ψ′, which
was conveyed by σ2. It is crucial to note that ψ′ may be totally consistent with φ
and we, nevertheless, give it up. While we find such a scenario quite plausible,
classical belief revision, with its upholding of the principle of minimal change,
would deem it irrational. Likewise, by increasing our trust in σ3 we may start
believing (or raise our degree of belief in) ξ′, which was earlier conveyed by σ3.
This second round of belief revision can start a second round of trust revision. It is
clear that we may keep on doing this for several rounds (perhaps indefinitely) if we
are really fanatic about information and its sources. Hence, we contend that belief
revision and trust revision are so entangled that they need to be combined into one
process of joint belief-trust revision or, as we shall henceforth refer to it,
information revision.


3      Information States

In this section, we introduce formal structures for representing information in a way
that would facilitate information revision.

Definition 1. An information grading structure G is a quadruple (Db, Dt, ≺b, ≺t),
where Db and Dt are non-empty, countable sets; and ≺b and ≺t are, respectively, total
orders over Db and Dt.

    Db and Dt contain the degrees of belief and trust, respectively. They are not
necessarily finite, and they may be disjoint, overlapping, or identical. (They are
usually the same set; however, a qualitative account of trust and belief might use
different sets for grading the two attitudes.) Moreover, to be able to distinguish the
strength with which an agent believes a proposition or trusts a source, the two sets
are ordered; here, we assume them to be totally ordered.
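
    For concreteness, the following is a minimal Python sketch of a (finite) grading
structure. The class name, field names, and the encoding of ≺b and ≺t as explicit
sets of (lower, higher) pairs are our own illustrative choices, not part of the
paper’s formalism.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GradingStructure:
        # G = (Db, Dt, prec_b, prec_t); the strict total orders are encoded
        # as finite sets of (lower, higher) pairs.
        Db: frozenset
        Dt: frozenset
        prec_b: frozenset
        prec_t: frozenset

        def less_b(self, d1, d2):
            # d1 is a strictly weaker degree of belief than d2
            return (d1, d2) in self.prec_b

        def less_t(self, t1, t2):
            # t1 is a strictly weaker degree of trust than t2
            return (t1, t2) in self.prec_t

    # The three-degree structure used in Example 1 below:
    G = GradingStructure(
        Db=frozenset({"b1", "b2", "b3"}),
        Dt=frozenset({"t1", "t2", "t3"}),
        prec_b=frozenset({("b1", "b2"), ("b1", "b3"), ("b2", "b3")}),
        prec_t=frozenset({("t1", "t2"), ("t1", "t3"), ("t2", "t3")}),
    )
    assert G.less_b("b1", "b3") and not G.less_t("t3", "t1")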

Definition 2. An information structure I is a quadruple (L, C, S, G), where
 1. L is a logical language with a Tarskian consequence operator Cn,
 2. C is a finite cover of L whose members are referred to as topics,
 3. S is a non-empty finite set of information sources, and
 4. G is an information grading structure.

    Information structures comprise our general assumptions about information. S is
the set of possible information sources. Possible pieces of information are statements
of the language L, with each piece being about one or more, but finitely many, topics
as indicated by the L-cover C. L is only required to have a Tarskian consequence
operator [13]. A topic represents the scope of trust; that is, an agent trusts an
information source on particular topics. Moreover, a topic is a set of statements
which may be closed under all connectives, some connectives, or none at all. Topics
could also be disjoint or overlapping. Choosing topics to be not necessarily closed
under logical connectives allows us to accommodate interesting cases; for example, an
agent A may have, for the same source, a different trust value when it conveys φ than
when it conveys ¬φ.
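
    Continuing the sketch, an information structure pairs a grading structure with
sources and a topic cover. Since topics may be infinite sets of formulas, the sketch
models each topic as a (name, membership test) pair and abstracts Cn as a
caller-supplied function; again, all names are ours.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class InformationStructure:
        # I = (L, C, S, G); the language L stays implicit in whatever
        # formulas occur, and Cn is a caller-supplied consequence operator.
        Cn: Callable
        C: tuple         # cover of L: (topic_name, membership_test) pairs
        S: frozenset     # information sources
        G: GradingStructure

        def topics_of(self, phi):
            # the (one or more, finitely many) topics a formula phi is about
            return [name for (name, contains) in self.C if contains(phi)]

    # Topics in the style of Example 1: TP holds the formulas mentioning P.
    # Substring matching is a toy stand-in for the sub-formula test.
    C = (("TP", lambda phi: "P" in phi),
         ("TQ", lambda phi: "Q" in phi),
         ("TS", lambda phi: "S" in phi))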

Definition 3. Let I = (L, C, S, (Db, Dt, ≺b, ≺t)) be an information structure. An
information state K over I is a triple (B, 𝒯, H), where
 1. B : L ↪ Db is a partial function referred to as the belief base,
 2. 𝒯 : S × C ↪ Dt is a partial function referred to as the trust base, and
 3. H ⊆ L × S, the history, is a finite set of pairs (φ, σ) where, for every topic
    T ∈ C, if φ ∈ T then (σ, T, dt) ∈ 𝒯, for some dt ∈ Dt.

    Trust in information sources is recorded in 𝒯(K). (We write B(K), 𝒯(K), and H(K)
for the belief base, trust base, and history of a particular information state K;
this notation does not denote the invocation of a function with K as an argument.)
This is a generalization accommodating logics with an explicit account of trust in
the object language [6,23, for instance] as well as those without [22,21, for
example]. H(K) acts as a formal device for recording conveyance instances. Hence, a
pair of the form (φ, σ) in the history denotes that formula φ was acquired through
source σ. As with 𝒯(K), we do not require L to have an explicit account of conveying.
    With this setup, having trust in single propositions, as is most commonly the
case in the literature [5,23, for instance], reduces to restricting all topics to be
singletons. On the other hand, we may account for absolute trust in sources by having
a single topic to which all propositions belong.
    To put together all the pieces forming an information state, consider the following
example.

Example 1. Let G = ({b1, b2, b3}, {t1, t2, t3}, {(b1 ≺b b2), (b1 ≺b b3), (b2 ≺b b3)},
{(t1 ≺t t2), (t1 ≺t t3), (t2 ≺t t3)}) be an information grading structure. Hence, a
belief associated with b3 is the most preferred, while a belief assigned b1 is the
least preferred. Similarly, a source associated with t3 is the most trusted, while a
source with trust degree t1 is the least trusted. Given G, we define the information
structure I = (LV, C, {σ1, σ2}, G), where LV is a propositional language with the set
V = {P, Q, S} of propositional variables and C = {TP, TQ, TS}, where TP is the set of
formulas containing P as a sub-formula, and likewise for TQ and TS with Q and S,
respectively. Finally, let the information state K = (B, 𝒯, H), where

 – B = {(P, b2), (Q, b3), (Q → P, b3)}
 – 𝒯 = {(σ1, TP, t2), (σ1, TS, t1), (σ2, TQ, t3), (σ2, TS, t2)}
 – H = {(P, σ1), (Q, σ2), (Q → P, σ2), (S, σ1)}

    This example shows how information is represented in an information state. Trust
attributions for information sources on different topics are recorded. Based on that,
P, Q, and Q → P are believed with varying degrees of belief. (For this simple example,
the mapping between trust degrees and belief degrees is straightforward; as we will
see later, however, addressing the relationship between trust degrees and belief
degrees is a complex matter.) Moreover, it is important to note that S was conveyed by
σ1 but was not believed. There could be many reasons for not believing a conveyed
proposition; one of them could be that source σ1 has the weakest degree of trust on
the topic containing S.
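
    The state of Example 1 can be written down directly in the running sketch: B and
𝒯 become dictionaries standing in for the partial functions, and H a set of (formula,
source) pairs, with ASCII names replacing the subscripted symbols.

    # The information state of Example 1:
    B = {"P": "b2", "Q": "b3", "Q -> P": "b3"}                  # belief base
    T = {("s1", "TP"): "t2", ("s1", "TS"): "t1",                # trust base
         ("s2", "TQ"): "t3", ("s2", "TS"): "t2"}
    H = {("P", "s1"), ("Q", "s2"), ("Q -> P", "s2"), ("S", "s1")}   # history
    K = (B, T, H)

    # S was conveyed by s1 yet never made it into the belief base:
    assert ("S", "s1") in H and "S" not in B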

    So far, we have defined what information states are. We now define the following
abbreviations, of which we will make use later.

 – σ(H(K)) = {φ | (φ, σ) ∈ H(K)}
 – SK = {σ | (φ, σ) ∈ H(K)}
 – For(B(K)) = {φ | (φ, db) ∈ B(K)}
 – ΦK = {φ | φ ∈ For(B(K)) or (φ, σ) ∈ H(K) for some σ}
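    In the running sketch, the four abbreviations become one-line helpers over the
state triple (a sketch; K is the (B, T, H) tuple from the previous snippet):

    def conveyed_by(K, sigma):
        # sigma(H(K)): all formulas that source sigma conveyed
        _, _, H = K
        return {phi for (phi, s) in H if s == sigma}

    def sources_of(K):
        # S_K: all sources appearing in the history
        _, _, H = K
        return {s for (_, s) in H}

    def For(K):
        # For(B(K)): the believed formulas, degrees dropped
        B, _, _ = K
        return set(B)

    def Phi(K):
        # Phi_K: believed formulas plus everything ever conveyed
        _, _, H = K
        return For(K) | {phi for (phi, _) in H}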

    Information revision is the process of revising an information state K with the
conveyance of a formula φ by a source σ, yielding a revised information state. Every
information revision operator is associated with a conveyance inclusion filter
F ⊆ L × S which determines the conveyance instances that make it into H(K). Hence, a
generic revision operator is denoted by ⋉F, where F is the associated filter. Revising
K with a conveyance of φ by σ is denoted by K ⋉F (φ, σ). We require all revision
operators ⋉F to have the same effect on the history:

                  H(K ⋉F (φ, σ)) = H(K) ∪ {(φ, σ)}   if (φ, σ) ∈ F
                  H(K ⋉F (φ, σ)) = H(K)              otherwise

    There are three major filter types. A filter F is non-forgetful if F = L × S; it
is forgetful if ∅ ≠ F ⊂ L × S; and it is memory-less if F = ∅. Having filters besides
the non-forgetful one simulates realistic scenarios where an agent does not always
remember every piece of information that was conveyed to it. Henceforth, the subscript
F will be dropped from ⋉F whenever this does not result in ambiguity.
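
    The history clause shared by all revision operators is a one-liner once the
filter is fixed. Below, filters are modelled as predicates over conveyance instances,
reusing H from the Example 1 snippet; the names are illustrative.

    def revise_history(H, phi, sigma, in_filter):
        # record the conveyance in the history iff the filter admits it
        return H | {(phi, sigma)} if in_filter(phi, sigma) else H

    non_forgetful = lambda phi, sigma: True    # F = L x S
    memory_less   = lambda phi, sigma: False   # F = {}

    H1 = revise_history(H, "R", "s1", non_forgetful)  # ("R", "s1") is recorded
    H2 = revise_history(H, "R", "s1", memory_less)    # history unchanged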

Definition 4. Let φ ∈ L and σ ∈ S.

 1. φ is more entrenched in state K2 over state K1, denoted K1 ≺φ K2, if (i)
    φ ∉ Cn(For(B(K1))) and φ ∈ Cn(For(B(K2))); or (ii) (φ, b1) ∈ B(K1),
    (φ, b2) ∈ B(K2), and b1 ≺b b2. If K1 ⊀φ K2 and K2 ⊀φ K1, we write K1 ≡φ K2.
 2. σ is more trusted on topic T in state K2 over state K1, denoted K1 ≺σ,T K2, if
    (σ, T, t1) ∈ 𝒯(K1), (σ, T, t2) ∈ 𝒯(K2), and t1 ≺t t2. If K1 ⊀σ,T K2 and
    K2 ⊀σ,T K1, we write K1 ≡σ,T K2.

    Intuitively, a belief changes after revision if it is added to or removed from the
belief base, or if its associated grade changes. Similarly, trust in a source regarding a
topic changes after revision if the associated trust grade changes.
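
    Both comparisons of Definition 4 are mechanically checkable given a consequence
test. The sketch below takes an entails(premises, formula) function as a parameter,
since the paper fixes no particular logic; the function names are ours.

    def more_entrenched(K1, K2, phi, entails, prec_b):
        # K1 <_phi K2 (Definition 4, clause 1)
        B1, B2 = K1[0], K2[0]
        if not entails(set(B1), phi) and entails(set(B2), phi):
            return True                       # (i) phi newly follows
        return (phi in B1 and phi in B2       # (ii) strictly higher degree
                and (B1[phi], B2[phi]) in prec_b)

    def more_trusted(K1, K2, sigma, topic, prec_t):
        # K1 <_(sigma,T) K2 (Definition 4, clause 2)
        T1, T2 = K1[1], K2[1]
        key = (sigma, topic)
        return key in T1 and key in T2 and (T1[key], T2[key]) in prec_t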


4   Relevant Change

As proposed earlier, the degrees of trust in sources depend on the degrees of belief in
formulas conveyed by these sources and vice versa. To model such dependence, we
need to keep track of which formulas and which sources are “relevant” to each other.
First, we recall a piece of terminology due to [11]: Γ ⊂ L is a φ-kernel if Γ |= φ
and, for every ∆ ⊂ Γ, ∆ ⊭ φ.

Definition 5. Let K be an information state. The support graph G(K) = (SK ∪ ΦK , E)
is such that (u, v) ∈ E if and only if

 1. u ∈ SK , v ∈ ΦK , and v ∈ u(H(K));
 2. u ∈ ΦK , v ∈ ΦK , u 6= v, and u ∈ Γ ⊆ ΦK where Γ is a v-kernel; or
 3. u ∈ ΦK , v ∈ SK , and (v, u) ∈ E.

A node u supports a node v if there is a simple path from u to v.

    Figure 1 shows an example of the support graph for the information state presented
in Example 1. Source σ1 conveyed both P and S, thus, according to clause 1 in Defini-
tion 5, there is an edge from σ1 to both P and S. Moreover, there is an edge from both
P and S to σ1 given the third clause in Definition 5. Similarly, there are edges from σ2
to both Q and Q → P as well as from Q and Q → P to σ2 . Finally, {Q → P, Q}
is a P -kernel. Hence, we have an edge from Q to P and from Q → P to P given the
second clause in the definition of the support graph.
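
    A brute-force sketch of the construction, feasible only for tiny bases such as
Example 1’s: kernels are enumerated by subset size, and entails is again assumed to
be supplied by the caller (for the three-variable language, a truth-table check
would do).

    from itertools import combinations

    def kernels(Phi, psi, entails):
        # all subset-minimal Gamma within Phi with Gamma |= psi
        found = []
        for r in range(1, len(Phi) + 1):
            for Gamma in combinations(sorted(Phi), r):
                if entails(set(Gamma), psi) and \
                   not any(g <= set(Gamma) for g in found):
                    found.append(set(Gamma))
        return found

    def support_graph(K, entails):
        # the edge set E of G(K), following Definition 5
        B, T, H = K
        Phi_K = set(B) | {phi for (phi, _) in H}
        E = set()
        for (phi, sigma) in H:
            E.add((sigma, phi))    # clause 1: source -> conveyed formula
            E.add((phi, sigma))    # clause 3: the mirrored edge
        for v in Phi_K:            # clause 2: kernel member -> consequence
            for Gamma in kernels(Phi_K, v, entails):
                E |= {(u, v) for u in Gamma if u != v}
        return E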
    The support graph allows us to trace back and propagate changes in trust and belief
to relevant beliefs and information sources along support paths. Instances of support
may be classified according to the type of relata.

Observation 1. Let K be an information state.

 1. φ ∈ ΦK supports ψ ∈ ΦK if and only if φ ≠ ψ and (i) φ ∈ Γ ⊆ ΦK where Γ is a
    ψ-kernel, or (ii) φ supports some σ ∈ SK which supports ψ.
 2. φ ∈ ΦK supports σ ∈ SK if and only if there is ψ ∈ σ(H(K)) with φ ∈ Γ ⊆ ΦK
    where Γ is a ψ-kernel, or φ supports some σ′ ∈ SK which supports σ.
 3. σ ∈ SK supports φ ∈ ΦK if and only if there is ψ ∈ σ(H(K)) with ψ ∈ Γ ⊆ ΦK
    where Γ is a φ-kernel, or σ supports some σ′ ∈ SK which supports φ.
 4. σ ∈ SK supports σ′ ∈ SK if and only if σ ≠ σ′ and σ supports some φ ∈ ΦK which
    supports σ′.








Fig. 1: The support graph of Example 1: σ2 conveyed Q and Q → P, which together
logically imply P; P was conveyed, alongside S, by σ1.



    Thus, given the first three clauses, the support relation from a formula to a
formula, a formula to a source, or a source to a formula may be established in two
ways: (i) purely logically, via a path of only formulas, or (ii) with the aid of a
trust link, via an intermediate source. A source can only support a source, however,
by supporting a formula which supports that other source. Note that self-support is
avoided by requiring support paths to be simple.
    The support graph provides the basis for constructing an operator of rational in-
formation revision. Traditionally, belief revision is concerned with minimal change
[10,12]. In this paper, we model minimality using relevance. However, our notion of
relevance is not restricted to logical relevance as with classical belief revision; it also
accounts for source relevance. When an information state K is revised with formula φ
conveyed by source σ, we would want to confine changes in belief and trust to formulas
and sources relevant to φ, ¬φ, and σ.

Definition 6. Let K be an information state and u and v be nodes in G(K). u is v-
relevant if u supports v or v supports u. Further, if φ, ψ ∈ L with Γφ ⊆ ΦK a φ-kernel
and Γψ ⊆ ΦK a ψ-kernel where u is v-relevant for some u ∈ Γφ and v ∈ Γψ , then φ is
ψ-relevant.

Observation 2. If K is an information state where u is v-relevant, then (i) v is
u-relevant; (ii) if v ∈ σ(H(K)) and u ≠ σ, then u is σ-relevant; and (iii) if v ∈ SK,
φ ∈ v(H(K)), and u ≠ φ, then u is φ-relevant.

    Hence, relevance is a symmetric relation. Crucially, if σ conveys φ, then the for-
mulas and sources relevant to φ (other than σ) are exactly the formulas and sources
relevant to σ (other than φ). For this reason, when revising with a conveyance of φ by
σ it suffices to consider only φ-relevant (and ¬φ-relevant) formulas and sources.
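
    Since support amounts to the existence of a simple path, and any walk between two
distinct nodes contains a simple path, v-relevance reduces to plain reachability in
G(K) once self-support is excluded. A sketch over the edge set computed above:

    def supports(E, u, v):
        # u supports v: a simple path from u to v exists in G(K)
        if u == v:
            return False           # simple paths exclude self-support
        frontier, seen = {u}, {u}
        while frontier:
            nxt = {y for (x, y) in E if x in frontier} - seen
            if v in nxt:
                return True
            seen |= nxt
            frontier = nxt
        return False

    def relevant(E, u, v):
        # u is v-relevant: u supports v or v supports u (Definition 6)
        return supports(E, u, v) or supports(E, v, u)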








5    Information Revision

5.1    Intuitions

Table 1 shows the possible reasonable effects on B(K) as agent A revises its
information state K with (φ, σ); K⋉ is shorthand for K ⋉ (φ, σ), while “neither”
means that neither φ nor ¬φ is in Cn(For(B(K))).


  #           K                            K⋉                   φ          ¬φ       Notes
 B1   neither                      (φ, b) ∈ B(K⋉)          K ≺φ K⋉   K ≡¬φ K⋉      -
 B2   neither                      neither                  K ≡φ K⋉   K ≡¬φ K⋉      -
 B3   (φ, b1) ∈ B(K)               (φ, b2) ∈ B(K⋉)         K ≺φ K⋉   K ≡¬φ K⋉   b1 ≺b b2
 B4   (φ, b1) ∈ B(K)               (φ, b1) ∈ B(K⋉)         K ≡φ K⋉   K ≡¬φ K⋉      -
 B5   (¬φ, b1) ∈ B(K)              (φ, b2) ∈ B(K⋉)         K ≺φ K⋉   K⋉ ≺¬φ K      -
 B6   (¬φ, b1) ∈ B(K)              neither                  K ≡φ K⋉   K⋉ ≺¬φ K      -
 B7   (¬φ, b1) ∈ B(K)              (¬φ, b2) ∈ B(K⋉)        K ≡φ K⋉   K⋉ ≺¬φ K   b2 ≺b b1
 B8   (¬φ, b1) ∈ B(K)              (¬φ, b1) ∈ B(K⋉)        K ≡φ K⋉   K ≡¬φ K⋉      -
 B9   B(K) ⊇ {(¬φ, b1), (φ, b2)}   (φ, b3) ∈ B(K⋉)         K ≺φ K⋉   K⋉ ≺¬φ K   b2 ≺b b3
 B10  B(K) ⊇ {(¬φ, b1), (φ, b2)}   (φ, b2) ∈ B(K⋉)         K ≡φ K⋉   K⋉ ≺¬φ K      -
 B11  B(K) ⊇ {(¬φ, b1), (φ, b2)}   (φ, b3) ∈ B(K⋉)         K⋉ ≺φ K   K⋉ ≺¬φ K   b3 ≺b b2
 B12  B(K) ⊇ {(¬φ, b1), (φ, b2)}   neither                  K⋉ ≺φ K   K⋉ ≺¬φ K      -
 B13  B(K) ⊇ {(¬φ, b1), (φ, b2)}   (¬φ, b1) ∈ B(K⋉)        K⋉ ≺φ K   K ≡¬φ K⋉      -
 B14  B(K) ⊇ {(¬φ, b1), (φ, b2)}   (¬φ, b3) ∈ B(K⋉)        K⋉ ≺φ K   K⋉ ≺¬φ K   b3 ≺b b1

                  Table 1: The admissible scenarios of belief revision.




    In Cases B1 and B2, A initially believes neither φ nor ¬φ. After revision, B1
shows the case where A believes φ, as it has no evidence to the contrary. Another
possible scenario is that portrayed in Case B2, where A decides that the weight of
evidence for and against φ is comparable and hence φ cannot be accepted.
    In the next two cases, A already believes φ, and revision with φ confirms what is
already believed. Thus, φ could become more entrenched, as now σ also supports φ
(Case B3), or φ’s degree may remain unchanged (Case B4). The latter could occur when
φ is believed with the maximum degree of belief, if there is such a degree, or when φ
has only ever been conveyed by σ, who is now merely confirming itself; in this latter
case, A might choose not to increase the degree of belief in φ.
    A believes ¬φ before revision in Cases B5–B8. Thus, revising with the conflicting
piece of information φ could yield the following outcomes: (i) A stops believing ¬φ
and starts believing φ (Case B5), which could occur if, for example, σ is a highly
trusted source; (ii) A decides that there is not enough evidence to believe either φ
or ¬φ (Case B6); (iii) A decides not to completely give up ¬φ, but there is enough
evidence to make A doubt ¬φ, making it less entrenched (Case B7); or (iv) A decides
not to change its beliefs, even when provided with φ (Case B8). One scenario where
this last outcome is possible is when the source is not trusted, and so A decides not
to consider this instance of conveyance.








    In general, revision may be applied to an inconsistent belief base. In such
cases, what matters is restoring consistency, which is achieved by rejecting φ, ¬φ,
or indeed both. Depending on the weight of evidence and trust in σ, this may result
in φ becoming more entrenched (Case B9) or not (Case B10). It may also happen that φ
becomes less entrenched (Case B11), for example when some supporters of φ are also
¬φ-relevant. Although revision may prefer ¬φ over φ (Cases B13 and B14), this should
not make ¬φ more entrenched, since revision with φ provides no positive evidence for
its negation. Case B12 shows A contracting both φ and ¬φ upon deciding that the
evidence for (and against) each is comparable.
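
    As a sanity check on the tables, the row that a concrete revision instantiates
can be computed from the two belief bases. The sketch below covers the rows whose
initial base mentions at most one of φ and ¬φ; it checks membership syntactically,
ignoring Cn, which suffices for the toy states above.

    def belief_case(B_old, B_new, phi, neg_phi, prec_b):
        # classify a revision with phi against rows B1-B8 and B15-B19
        had_phi, had_neg = phi in B_old, neg_phi in B_old
        has_phi, has_neg = phi in B_new, neg_phi in B_new
        if had_phi and had_neg:
            return "inconsistent base: rows B9-B14/B20 not handled here"
        if not had_phi and not had_neg:          # A was neutral
            if has_neg:
                return "B15 (forbidden)"
            return "B1" if has_phi else "B2"
        if had_phi:                              # phi is confirmed
            if has_neg:
                return "B18 (forbidden)"
            if not has_phi:
                return "B17 (forbidden)"
            if (B_old[phi], B_new[phi]) in prec_b:
                return "B3"
            return "B4" if B_new[phi] == B_old[phi] else "B16 (forbidden)"
        # had_neg: phi conflicts with an old belief
        if has_phi:
            return "B5"
        if not has_neg:
            return "B6"
        if (B_new[neg_phi], B_old[neg_phi]) in prec_b:
            return "B7"
        return "B8" if B_new[neg_phi] == B_old[neg_phi] else "B19 (forbidden)"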
    Other cases, we believe, should be forbidden for a rational operation of
information revision. These cases are presented in Table 2. In Case B15, A is
neutral; however, when provided with evidence for φ, A surprisingly starts believing
¬φ. Cases B16–B18 are ones where A already believes φ and, on revising with φ, one of
the following occurs: (i) φ becomes less entrenched (Case B16); (ii) A stops
believing φ (Case B17); or (iii) in an extreme case, A receives a confirmation of the
already believed φ, yet ¬φ ends up being believed (Case B18). (These extreme
reactions to revising with φ may perhaps be justified by complete mistrust in σ; we
do not address mistrust in this paper, though.) Cases B19 and B20 show scenarios
where ¬φ becomes more entrenched. These cases are forbidden since revising with φ
provides no support for ¬φ.


  #           K                            K⋉                   φ          ¬φ       Notes
 B15  neither                      (¬φ, b) ∈ B(K⋉)         K ≡φ K⋉   K ≺¬φ K⋉      -
 B16  (φ, b1) ∈ B(K)               (φ, b2) ∈ B(K⋉)         K⋉ ≺φ K   K ≡¬φ K⋉   b2 ≺b b1
 B17  (φ, b) ∈ B(K)                neither                  K⋉ ≺φ K   K ≡¬φ K⋉      -
 B18  (φ, b1) ∈ B(K)               (¬φ, b2) ∈ B(K⋉)        K⋉ ≺φ K   K ≺¬φ K⋉      -
 B19  (¬φ, b1) ∈ B(K)              (¬φ, b2) ∈ B(K⋉)        K ≡φ K⋉   K ≺¬φ K⋉   b1 ≺b b2
 B20  B(K) ⊇ {(¬φ, b1), (φ, b2)}   (¬φ, b3) ∈ B(K⋉)        K⋉ ≺φ K   K ≺¬φ K⋉   b1 ≺b b3

                  Table 2: The forbidden scenarios of belief revision.




    Similar to our treatment of belief revision, we present in Table 3 the possible
scenarios for trust change. The cases in Table 3 are based on the admissible cases of
belief change presented in Table 1. In Table 3, K⋉ is again shorthand for K ⋉ (φ, σ),
θ is any φ-relevant but not ¬φ-relevant source, and η is any ¬φ-relevant but not
φ-relevant source, where σ ≠ θ ≠ η.

  #   Condition                  Belief Revision Cases        σ           θ           η
 T1   K ≺φ K⋉ or K⋉ ≺¬φ K       B1, B3, B5–B7, B9, B10   K⋉ ⊀σ,T K   K⋉ ⊀θ,T K   K ⊀η,T K⋉
 T2   φ rejected and K⋉ ⊀¬φ K   B2, B8, B13              K ⊀σ,T K⋉   K ⊀θ,T K⋉   K ≡η,T K⋉
 T3   K ≡φ K⋉ and K ≡¬φ K⋉      B4                       K ≡σ,T K⋉   K ≡θ,T K⋉   K ≡η,T K⋉
 T4   K⋉ ≺φ K and K⋉ ≺¬φ K      B11, B12, B14            K ⊀σ,T K⋉   K ⊀θ,T K⋉   K ⊀η,T K⋉

                  Table 3: The admissible scenarios of trust revision.

    If φ becomes more entrenched, then some new support was provided for φ. Hence, θ
(or σ) might become more trusted (in light of the new support) or remain unchanged,
as the new evidence might not be enough to change trust. Regardless, σ and θ should
not become less trusted. Also, if ¬φ becomes less entrenched as a result of revising
with φ, it means that φ affected the revision even if it was not accepted; thus, σ
and θ should not become less trusted. In both cases, there is no reason for η to
become more trusted. The foregoing is captured in Case T1. If φ is rejected
(φ ∉ For(B(K⋉))) and at the same time ¬φ is not affected, then σ and θ should not
become more trusted, while trust in η should stay the same, as no new support was
provided for or against it (Case T2). (In Case B2, where A is neutral and φ is
rejected after revision, a particular operator might actually choose to decrease
trust in σ. An example is when σ informs A about φ and A is sure that there is no way
for anyone to know either φ or ¬φ, as with Schrödinger’s cat.) If there is no change
in the entrenchment of φ and ¬φ, then there is no reason to change trust in any of σ,
θ, or η (Case T3). Finally, Case T4 shows that if both φ and ¬φ become less
entrenched, then there is no reason for σ, θ, or η to become more trusted. The
forbidden cases of trust revision comprise all scenarios that contradict any of the
aforementioned admissible cases.
    Note that in none of the cases in Table 3 do we require that trust change in some
way; we only require that it not change in certain ways. We believe it to be unwise
to postulate sufficient conditions for trust change in a generic information revision
operation. For example, one might be tempted to say that if, after revision with φ,
¬φ is no longer believed, then trust in any source supporting ¬φ should decrease.
Things are not that straightforward, though.

Example 2. Agent A’s belief base is {(S → P, b1), (Q → ¬S, b2)}. Source Jordan
conveys P and then conveys Q. Since A has no evidence against either, it believes
both. Now, source Nour, who is more trusted than Jordan, conveys S. Consequently, A
starts believing S despite having evidence against it. To maintain consistency, A
also stops believing Q (because it supports ¬S). What should happen to A’s trust in
Jordan? We might, at first glance, think that trust in Jordan should decrease, as he
conveyed Q which is no longer believed. However, one could also argue that trust in
Jordan should increase, because he conveyed P, which is now confirmed by Nour.

    This example shows that setting general rules for how trust must change is almost
impossible, as it depends on several factors. Whether A ends up trusting Jordan less,
more, or the same appears to depend on how the particular revision operator
manipulates grades. The situation becomes more complex if the new conveyance by Nour
supports several formulas supporting Jordan and refutes several formulas supported by
him. In this case, how trust in Jordan changes (or does not) would also depend on how
the effects of all these support relations are aggregated. We contend that such
issues should not, and cannot, be settled by general constraints on information
revision.
    This non-determinism about how trust changes extends to a similar
non-determinism about how belief changes. According to Observation 1, a formula φ may
support another formula ψ by transitivity, through an intermediate source σ. Since,
in general, the
effect of revising with φ on σ is non-deterministic, so is its effect on ψ. Hence,
the postulates to follow only provide necessary conditions for the different ways
belief and trust may change, the general principle being that the scope of change on
revising with φ is limited to formulas and sources which are φ- and ¬φ-relevant.
Postulating sufficient conditions is, we believe, ill-advised.

5.2    Postulates

In the sequel, where φ is a formula and σ is a source, a σ-independent φ-kernel is,
intuitively, a φ-kernel that would still exist if σ did not exist. More precisely, it
is a φ-kernel Γ such that, for every ψ ∈ Γ, ψ is supported by some σ′′ ≠ σ, or ψ has
no source. Of course, all formulas are conveyed by sources; however, given a
forgetful filter, the record of sources for some formulas may be missing from the
history.
    We believe a rational information revision operator ⋉ should observe the
following postulates on revising an information state K with (φ, σ), where φ ∈ T for
a topic T. The postulates are a formalization of the intuitions outlined earlier.

(⋉1: Consistency) Cn(For(B(K ⋉ (φ, σ)))) ≠ L.
(⋉2: Resilience) If Cn({φ}) = L, then K ⊀σ,T K ⋉ (φ, σ).
(⋉3: Supported Entrenchment) K ⋉ (φ, σ) ≺φ K only if Cn(For(B(K))) = L.
(⋉4: Opposed Entrenchment) K ⊀¬φ K ⋉ (φ, σ).
(⋉5: Positive Relevance) If K ≺σ′,T K ⋉ (φ, σ) and φ ∈ For(B(K ⋉ (φ, σ))), then
     1. σ′ ≠ σ is supported by φ; or
     2. σ′ = σ and there is Γ ⊆ For(B(K)) where Γ is a σ-independent φ-kernel.
(⋉6: Negative Relevance) If K ⋉ (φ, σ) ≺σ′,T K, then
     1. φ ∈ For(B(K ⋉ (φ, σ))) and σ′ is ¬φ-relevant; or
     2. σ′ = σ, but there is Γ ⊆ For(B(K ⋉ (φ, σ))) where Γ is a ¬φ-kernel.
(⋉7: Belief Confirmation) If K ≺ψ K ⋉ (φ, σ), then ψ ≠ φ is supported by φ.
(⋉8: Belief Refutation) If K ⋉ (φ, σ) ≺ψ K, then
     1. ψ is ¬φ-relevant, and φ ∈ Cn(For(B(K ⋉ (φ, σ)))) or K ⋉ (φ, σ) ≺¬φ K; or
     2. ψ is φ-relevant, and φ ∉ Cn(For(B(K ⋉ (φ, σ)))) or K ⋉ (φ, σ) ≺φ K.
    A revised information state is consistent even if the revising formula is itself
contradictory (⋉1). If φ is inconsistent, σ should not become more trusted (⋉2). (A
specific operator might choose to actually decrease trust in a source that conveys
contradictions, as this is proof of its unreliability.) Following Tables 1 and 2, φ
cannot become less entrenched unless the belief base is inconsistent (⋉3). ¬φ, even
if the belief base is inconsistent, should never become more entrenched (⋉4). If σ′
is more trusted after revision, then (i) φ succeeds and (ii) either σ′ is different
from σ and supported by φ, or σ′ is σ and there is independent believed evidence for
φ (⋉5). If σ′ is less trusted after revision, then either φ succeeds and σ′ (possibly
identical to σ) is relevant to ¬φ, or σ′ is σ and there is believed evidence for ¬φ
that leads to rejecting φ (⋉6). ψ is more entrenched after revision only if it is
supported by φ (⋉7). ψ is less entrenched after revision only if it is relevant to φ
or ¬φ (or both) and the one it is relevant to is not favored by the revision (⋉8).
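
    Some of the postulates are directly machine-checkable on a pair of states. A
sketch for ⋉1 and ⋉4, reusing more_entrenched from the Definition 4 snippet: here
bottom is any canonical contradiction (say "P & ~P"), so that entailing it coincides
with Cn(For(B)) = L in a classical setting; checking ⋉5 through ⋉8 would additionally
consult the support graph.

    def satisfies_consistency(K_new, entails, bottom):
        # (⋉1) the revised belief base must not entail everything
        return not entails(set(K_new[0]), bottom)

    def satisfies_opposed_entrenchment(K_old, K_new, neg_phi, entails, prec_b):
        # (⋉4) neg_phi must not become more entrenched after revision
        return not more_entrenched(K_old, K_new, neg_phi, entails, prec_b)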
    Given the definition of information states, support graphs, and the postulates out-
lined earlier, the following observations hold.




Observation 3. If φ ∈ Cn(For(B(K))) ≠ L, then φ ∈ Cn(For(B(K ⋉ (φ, σ)))).

Observation 4. If ¬φ ∉ Cn(For(B(K))), then ¬φ ∉ Cn(For(B(K ⋉ (φ, σ)))).

Observation 5. If (ψ, b1) ∈ B(K) and (ψ, b2) ∈ B(K ⋉ (φ, σ)) where b1 ≠ b2, then ψ
is either φ-relevant or ¬φ-relevant.

Observation 6. If (σ′, T, t1) ∈ 𝒯(K) and (σ′, T, t2) ∈ 𝒯(K ⋉ (φ, σ)) where t1 ≠ t2,
then σ′ is φ-relevant or ¬φ-relevant.

Observation 7. If φ ∉ For(B(K ⋉ (φ, σ))), then there is no σ′ ∈ SK such that
K ≺σ′,T K ⋉ (φ, σ).

Observation 8. If Cn(For(B(K))) ≠ L, then an operator that observes ⋉4 and ⋉5 allows
only the cases in Table 1 to occur.


6    Conclusion and Future Work

It is our conviction that belief revision and trust revision are intertwined
processes that should not be separated. In this paper, we argued why that is the case
and provided a model for performing joint belief-trust (information) revision with
minimal assumptions on the modeling language. We introduced the notion of information
states, which allows for the representation of information in a way that facilitates
the revision process. We also introduced the support graph, a formal structure that
allows us to identify relevance relations between not only formulas but also
information sources. Finally, we proposed postulates that we believe any rational
information revision operator should observe. Future work could go in one or more of
the following directions:

 1. We intend to provide a representation theorem for the proposed postulates.
 2. We intend to further investigate conveyance and information acquisition,
    allowing agents to trust or mistrust their own perception(s).
 3. Lastly, we would like to add desires, intentions, and other mental attitudes to
    create a unified revision theory for all mental attitudes.


References

 1. Alchourrón, C.E., Gärdenfors, P., Makinson, D.: On the logic of theory change: Partial meet
    contraction and revision functions. The Journal of Symbolic Logic 50(2), 510–530 (1985)
 2. Booth, R., Hunter, A.: Trust as a precursor to belief revision. Journal of Artificial Intelligence
    Research 61, 699–722 (2018)
 3. Castelfranchi, C., Falcone, R.: Principles of trust for MAS: Cognitive anatomy, social impor-
    tance, and quantification. In: Proceedings International Conference on Multi Agent Systems.
    pp. 72–79. IEEE (1998)
 4. Demolombe, R.: To trust information sources: a proposal for a modal logical framework. In:
    Castelfranchi, C., Tan, Y.H. (eds.) Trust and Deception in Virtual Societies, pp. 111–124.
    Springer Netherlands, Dordrecht (2001)








 5. Demolombe, R.: Reasoning about trust: A formal logical framework. In: Jensen, C., Poslad,
    S., Dimitrakos, T. (eds.) Trust Management. pp. 291–303. Springer Berlin Heidelberg,
    Berlin, Heidelberg (2004)
 6. Demolombe, R., Liau, C.J.: A logic of graded trust and belief fusion. In: Proceedings of the
    4th workshop on deception, fraud and trust in agent societies. pp. 13–25 (2001)
 7. van Ditmarsch, H.: Dynamics of lying. Synthese 191(5), 745–777 (2014)
 8. Elangovan, A., Auer-Rizzi, W., Szabo, E.: Why don’t I trust you now? An attributional ap-
    proach to erosion of trust. Journal of Managerial Psychology 22(1), 4–24 (2007)
 9. Falcone, R., Castelfranchi, C.: Social trust: A cognitive approach. In: Trust and Deception in
    Virtual Societies, pp. 55–90. Springer Netherlands (2001)
10. Gärdenfors, P., Makinson, D.: Revisions of knowledge systems using epistemic entrench-
    ment. In: Proceedings of the 2nd conference on Theoretical aspects of reasoning about
    knowledge. pp. 83–95. Morgan Kaufmann Publishers Inc. (1988)
11. Hansson, S.O.: Kernel contraction. The Journal of Symbolic Logic 59(3), 845–859 (1994)
12. Hansson, S.O.: A survey of non-prioritized belief revision. Erkenntnis 50(2-3), 413–427
    (1999)
13. Hansson, S.O.: A textbook of belief dynamics - theory change and database updating, Ap-
    plied logic series, vol. 11. Kluwer (1999)
14. Hardwig, J.: The role of trust in knowledge. The Journal of Philosophy 88(12), 693–708
    (1991)
15. Haselhuhn, M.P., Schweitzer, M.E., Wood, A.M.: How implicit beliefs influence trust recov-
    ery. Psychological Science 21(5), 645–648 (2010)
16. Herzig, A., Lorini, E., Hübner, J.F., Vercouter, L.: A logic of trust and reputation. Logic
    Journal of the IGPL 18(1), 214–244 (2010)
17. Holton, R.: Deciding to trust, coming to believe. Australasian Journal of Philosophy 72(1),
    63–76 (1994)
18. Ismail, H., Attia, P.: Towards a logical analysis of misleading and trust erosion. In: Gordon,
    A.S., Miller, R., Turán, G. (eds.) Proceedings of the Thirteenth International Symposium
    on Commonsense Reasoning, COMMONSENSE 2017, London, UK, November 6-8, 2017.
    vol. 2052. CEUR-WS.org (2017)
19. Jones, A.J.: On the concept of trust. Decision Support Systems 33(3), 225–232 (2002)
20. Jones, A.J., Firozabadi, B.S.: On the characterisation of a trusting agent—aspects of a formal
    approach. In: Castelfranchi, C., Tan, Y.H. (eds.) Trust and Deception in Virtual Societies, pp.
    157–168. Springer Netherlands, Dordrecht (2001)
21. Jøsang, A., Ivanovska, M., Muller, T.: Trust revision for conflicting sources. In: The 18th
    International Conference on Information Fusion (Fusion 2015). pp. 550–557. IEEE (2015)
22. Katz, Y., Golbeck, J.: Social network-based trust in prioritized default logic. In: Proceedings
    of the Twenty-First National Conference on Artificial Intelligence (AAAI 2006). pp. 1345–
    1350 (2006)
23. Leturc, C., Bonnet, G.: A normal modal logic for trust in the sincerity. In: Proceedings of the
    17th International Conference on Autonomous Agents and MultiAgent Systems. pp. 175–
    183. International Foundation for Autonomous Agents and Multiagent Systems (2018)
24. Liau, C.J.: Belief, information acquisition, and trust in multi-agent systems—a modal logic
    formulation. Artificial Intelligence 149(1), 31–60 (2003)
25. Lorini, E., Demolombe, R.: From binary trust to graded trust in information sources: A
    logical perspective. In: International Workshop on Trust in Agent Societies. pp. 205–225.
    Springer (2008)
26. Lorini, E., Jiang, G., Perrussel, L.: Trust-based belief change. In: Schaub, T.,
    Friedrich, G., O’Sullivan, B. (eds.) Proceedings of the 21st European Conference on
    Artificial Intelligence (ECAI 2014). Frontiers in Artificial Intelligence and
    Applications, vol. 263, pp. 549–554. IOS Press, Amsterdam (2014)








27. McLeod, C.: Trust. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Meta-
    physics Research Lab, Stanford University, fall 2015 edn. (2015)
28. Rodenhäuser, L.B.: A Matter of Trust: Dynamic Attitudes in Epistemic Logic. Ph.D.
    thesis, Universiteit van Amsterdam (2014)
29. Sabater, J., Sierra, C.: Review on computational trust and reputation models. Artificial Intel-
    ligence Review 24(1), 33–60 (Sep 2005)
30. Sakama, C.: A formal account of deception. In: 2015 AAAI Fall Symposia, Arlington, Vir-
    ginia, USA, November 12-14, 2015. pp. 34–41. AAAI Press (2015)
31. Sakama, C., Caminada, M., Herzig, A.: A logical account of lying. In: European Workshop
    on Logics in Artificial Intelligence. pp. 286–299. Springer (2010)
32. Simpson, J.A.: Psychological foundations of trust. Current Directions in Psychological Sci-
    ence 16(5), 264–268 (2007)
33. Yasser, A., Ismail, H.O.: Information revision: The joint revision of belief and trust. In: Varz-
    inczak, I.J., Martinez, M.V. (eds.) Proceedings of the 18th International Workshop on Non-
    Monotonic Reasoning (NMR2020) (2020), forthcoming
