How strong to believe information? A trust model in the
              logical belief function theory

                                                 Laurence Cholvy
                                                     ONERA
                                             2 avenue Ed. Belin 31055
                                                Toulouse FRANCE
                                             laurence.cholvy@onera.fr




                                                        Abstract
                       To what extent can an agent believe a piece of information it gets?
                       This is the question we address in this paper. More precisely, we provide
                       a model for expressing the relations between the trust an agent puts
                       in information sources and its beliefs about the information they
                       provide. This model is based on Demolombe's model and extends it
                       by considering imperfect information sources, i.e., information sources
                       that can report false information. Furthermore, this model is defined
                       in the belief function theory, allowing degrees of trust to be modelled
                       in a quantitative way. Not only can this model be used when the agent
                       gets information directly from the source, but it can also be used when
                       the agent gets second hand information, i.e., when the agent is not
                       directly in contact with the source.




1    Motivation
When it gets a new piece of information from an information source, a rational agent has to update or revise its
belief base if it considers this piece of information sufficiently supported [1], [20], [12], [13]. Thus an important
question for the agent is to estimate how well this new piece of information is supported, i.e., how strongly it
can believe it.
   Obviously, the agent may believe a new piece of information if it trusts the information source for delivering
true information [7], [14]. For instance, assume that in order to know if it will rain tomorrow, I look at the Météo-
France web site and read that indeed it will rain. If I trust Météo-France for delivering correct forecasts, then I
can believe what Météo-France is reporting, i.e., I can believe that it will rain.
   But in many applications [19], [16], [17], information sources are not necessarily correct all the time. They may
deliver false information, intentionally or not. This is why being able to model them as incorrect must also be useful.
Furthermore, it may happen that the information of interest is second hand, i.e., the agent who collects information
is not directly in contact with the information source: it obtains information through an agent which cites the
information source. The chain can even be longer. This is the case when, for instance, I am informed by
my neighbour that he read on the Météo-France web site that it is going to rain. In this case, the information my
neighbour reports is that he read that according to Météo-France it is going to rain. My neighbour does not

Copyright © by the paper's authors. Copying permitted only for private and academic purposes.
In: R. Cohen, R. Falcone and T. J. Norman (eds.): Proceedings of the 17th International Workshop on Trust in Agent Societies,
Paris, France, 06-MAY-2014, published at http://ceur-ws.org




tell me that it is going to rain. We insist on the fact that here, the very information which interests me (will
it rain?) is reported via two agents, Météo-France and my neighbour, the second citing the first. Second hand
information management is a more delicate issue. Indeed, trusting my neighbour for giving true forecasts is not
useful here. However, trusting him not to lie and trusting Météo-France for giving true forecasts will lead me to
believe that it is going to rain.
   One very interesting contribution in the trust community is Demolombe's [7], [8], [9]. The author considers
trust as an attitude of an agent who believes that another agent has a given property. As for an agent which
provides information (i.e., an information source), it can be attributed six properties: it can be sincere (roughly
speaking, it believes what it reports), competent (information it believes is true), valid (information it reports is
true), cooperative (it reports what it believes), vigilant (it believes what is true) or complete (it reports what is
true). In this model, validity is defined as sincerity and competence and completeness is defined as cooperativity
and vigilance. This work shows that the trust an agent puts in an information source relatively to some of these
properties influences the fact that this agent believes what this source produces or does not produce. This is
formally shown by considering a logical framework expressed in modal logic. The operators of this modal
logic are: Bi (Bi p means “agent i believes that p”) and Iij (Iij p means “agent i informs1 agent j that p”).
Operators Bi obey the KD system and their semantics is defined by serial accessibility relations between worlds.
Operators Iij satisfy Iij p ∧ Iij q → Iij (p ∧ q) and obey the rule of equivalence substitutivity [2]. Their semantics
is defined by neighborhood functions. Furthermore, it is assumed that there is no failure in the communication
process when an agent informs another one, i.e., Iij p → Bi Iij p and ¬Iij p → Bi ¬Iij p are considered as true. The
different sorts of trust considered in this model are2:

  • T sincerea,b (p) =def Ba (Iba p → Bb p) i.e., a trusts b for p in regard to sincerity iff a believes that if b tells it
    p then b believes p.
  • T competenta,b (p) =def Ba (Bb p → p) i.e., a trusts b for p in regard to competence iff a believes that if b
    believes p then p is true.
  • T cooperativea,b (p) =def Ba (Bb p → Iba p) i.e., a trusts b for p in regard to cooperativity iff a believes that if
    b believes p then b tells it p.
  • T vigilanta,b (p) =def Ba (p → Bb p) i.e., a trusts b for p in regard to vigilance iff a believes that if p is true
    then b believes it.
  • T valida,b (p) =def Ba (Iba p → p) i.e., a trusts b for p in regard to validity iff a believes that if b tells it p then
    p is true.
  • T completea,b (p) =def Ba (p → Iba p) i.e., a trusts b for p in regard to completeness iff a believes that if p is
    true then b tells it.

    These notions can then formally be used to derive the beliefs of an agent who receives a piece of information.
For instance, T validi,j (p) → (Iji p → Bi p) is a theorem. It shows that if agent i trusts agent j for information p
in regard to validity, then if j tells it p then i believes p. An instance of this theorem is: T validi,M F (rain) →
(IM F,i rain → Bi rain), which means that if I trust Météo-France for being valid for forecasts, then if Météo-
France reports that it will rain then I believe that it will rain. Notice that we also have the following theorem:
T completei,M F (rain) → (¬IM F,i rain → Bi ¬rain). This means that if I trust Météo-France for being complete
for forecasts, then if Météo-France does not report that it will rain then I believe that it will not rain.
    This model can be used to make more complex inferences, in particular for the problem of trust propagation
when agents inform each other about their own trusts [9]. For instance, suppose that due to a gas failure at
home, my husband calls a plumber. Suppose that my husband tells me that the plumber told him that our
gas installation must be renewed. He adds that he trusts this person because the plumber holds the Qualigaz
certification. Then, if I trust my husband to tell the truth, I can conclude that our gas installation must be renewed.
    Furthermore, agents may be uncertain about the trust they put in information sources. In [8], Demolombe
presented a qualitative framework to model this uncertainty and defined a modal logic for graded beliefs. Graded
beliefs are modelled by operators Big so that Big p means that the strength level of agent i's belief in p is g. However,
  1 We will sometimes say “tells” or “reports” or “produces”
   2 We adopt here the definitions given in [9], which define trust with the belief operator Bi , and not the definitions of [7], which
define trust with a “strong belief” operator Ki .




the author did not show the application of this framework to second hand information management. In [6],
Pichon et al. address very close questions by using the belief function theory [18], which is a framework for
modelling uncertainty in a quantitative way. They propose a mechanism for computing the plausibility of a piece
of information which is emitted by an agent given our uncertain belief about its reliability. For the authors,
the reliability of an agent is defined by its relevance and its truthfulness so that (1) information provided by a
non-relevant information source is ignored, i.e., a non-relevant source brings no information; (2) we can believe
the piece of information provided by a relevant and truthful source; (3) we can believe the negation of the piece
of information provided by a relevant but non-truthful source. It must be noticed that “being relevant and
truthful” is quite close to “being valid” as introduced by Demolombe. However, this work did not address the
case of second hand information either.
   At the same time, we suggested extending Demolombe's model by considering the negative counterparts of the
previous six properties [3], [4]. In particular, we considered negative counterparts of validity and completeness,
which led us to consider misinformer agents and falsifier agents. Roughly speaking, when a misinformer reports
a piece of information, then we can conclude that it is false. When a piece of information is false, a falsifier
reports it as true. In Demolombe's terms, this could be defined by:

    • T misinf ormera,b (p) =def Ba (Iba p → ¬p) i.e., a trusts b for being a misinformer relatively to p iff a believes
      that if b tells it p then p is false. Notice that “being a misinformer” is quite close to “being relevant and
      non-truthful” as introduced by Pichon et al.

    • T f alsif iera,b (p) =def Ba (¬p → Iba p) i.e., a trusts b for being a falsifier relatively to p iff a believes that if p
      is false then b tells it p.

    In the present paper, we continue this line of research by proposing a model which (i) allows an agent to express
to what extent it trusts an information source for being valid, complete, a misinformer or a falsifier and (ii) can
also be used to manage second hand information. This model is defined in a recently introduced framework, the
logical belief function theory [5]. This framework extends the belief function theory and allows the user to assign
degrees of belief to propositional formulas and to express integrity constraints.
    This paper is organized as follows. Section 2 describes a model which allows an agent to express uncertainty
about the trust it puts in sources. Section 3 shows how to use this model to evaluate information. We first study
the case when the agent is in direct contact with the source and then the case when it is not. Examples will
illustrate the model. Finally, Section 4 is devoted to a discussion.

2     A trust model in the logical belief function theory
The belief function theory [18] is a framework which offers several interesting tools to manage uncertain beliefs,
such as mass functions and belief functions, which are used to quantify the extent to which one can believe a
piece of information, and rules of combination, which are used to combine uncertain beliefs in many different ways
[10]. Ignorance can explicitly be quantified in this framework and consequently this model does not satisfy the rule
of additivity, i.e., the degree of belief in a proposition plus the degree of belief in its negation can be strictly less
than 1.
   In this paper, we will use the logical belief function theory [5], which extends the belief function theory by
allowing the user to express its beliefs in propositional logic and to consider integrity constraints. It has been
proved that this formalism is not more expressive than the belief function theory. However, it allows one to express
uncertain beliefs directly and in a more compact way. Furthermore, it highlights the role of integrity constraints.
This framework is summarized below.

2.1     The logical belief function theory
Let Θ be a finite propositional language and σ be a satisfiable formula of Θ.
   We consider the equivalence relation denoted ↔σ which is defined by: A ↔σ B iff σ |= A ↔ B. Thus, two
formulas are in relation by ↔σ if and only if they are equivalent when σ is true. By convention, we will say that
all the formulas of a given equivalence class of ↔σ are identical. With this convention, we can consider that the
set of formulas is finite. It is denoted F ORM σ . Finally, in the following, true denotes any tautology and f alse
denotes any inconsistent formula.
   In the logical belief function theory, the user can assign masses to propositional formulas. This is shown by
the following definition.




Definition 1 A logical mass function is a function m : F ORM σ → [0, 1] such that:

                                                       m(f alse) = 0

and
                                             Σ_{ϕ ∈ F ORM σ} m(ϕ) = 1


   Like the belief function theory, the logical belief function theory offers several interesting functions for decision.
In particular, the logical belief function and the logical plausibility functions are defined by:

Definition 2 Given a logical mass function m, the logical belief function which is associated to m is defined by:
Bel : F ORM σ → [0, 1] such that:
                                     Bel(ϕ) = Σ_{ψ ∈ F ORM σ, σ |= ψ→ϕ} m(ψ)


Definition 3 Given a logical mass function, the logical plausibility function which is associated to m is defined
by: P l : F ORM σ → [0, 1] such that
                                           P l(ϕ) = 1 − Bel(¬ϕ)

  It can be noticed that
                               P l(ϕ) = Σ_{ψ ∈ F ORM σ, (σ∧ψ∧ϕ) is satisfiable} m(ψ)

   Like the belief function theory, the logical belief function theory offers several rules for combining logical mass
functions. Let us only detail the logical DS rule defined by:

Definition 4 Let m1 and m2 be two logical mass functions. The logical DS rule defines the logical mass function
m1 ⊕ m2 : F ORM σ → [0, 1] by:
                                           m1 ⊕ m2 (f alse) = 0
  and for any ϕ such that ϕ is not ↔σ -equivalent to f alse:

          m1 ⊕ m2 (ϕ) = ( Σ_{(ϕ1 ∧ϕ2 ) ↔σ ϕ} m1 (ϕ1 ).m2 (ϕ2 ) ) / ( Σ_{σ∧ϕ1 ∧ϕ2 is satisfiable} m1 (ϕ1 ).m2 (ϕ2 ) )

                             if Σ_{σ∧ϕ1 ∧ϕ2 is satisfiable} m1 (ϕ1 ).m2 (ϕ2 ) ≠ 0

   It can easily be shown that this combination rule is the reformulation, in the logical belief function theory,
of Dempster's rule of combination.
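   Before turning to the trust model, here is a small Python sketch (our own illustration, not part of the paper's formal apparatus; every function name is ours) of the definitions above. A formula is represented by the set of its σ-models, which turns σ-equivalence, σ-entailment and σ-satisfiability into set operations and makes Bel, Pl and the logical DS rule a few lines each.

from itertools import product
from typing import Callable, Dict, FrozenSet, Tuple

World = Tuple[bool, ...]            # one truth assignment to the letters
Formula = Callable[[dict], bool]    # a formula, evaluated on a {letter: bool} valuation

def sigma_models(letters, sigma: Formula) -> FrozenSet[World]:
    """All worlds (truth assignments over the letters) that satisfy the constraint sigma."""
    return frozenset(vals for vals in product([False, True], repeat=len(letters))
                     if sigma(dict(zip(letters, vals))))

def extension(phi: Formula, letters, models: FrozenSet[World]) -> FrozenSet[World]:
    """The sigma-models of phi; two formulas are sigma-equivalent iff their extensions coincide."""
    return frozenset(w for w in models if phi(dict(zip(letters, w))))

# A logical mass function is a dict {extension of a focal formula: mass}, masses summing to 1.
def bel(m: Dict[FrozenSet[World], float], phi_ext: FrozenSet[World]) -> float:
    """Definition 2: Bel(phi) sums the masses of focal elements psi with sigma |= psi -> phi."""
    return sum(v for psi_ext, v in m.items() if psi_ext <= phi_ext)

def pl(m: Dict[FrozenSet[World], float], phi_ext, models) -> float:
    """Definition 3: Pl(phi) = 1 - Bel(~phi)."""
    return 1.0 - bel(m, models - phi_ext)

def ds_combine(m1, m2):
    """Definition 4 (logical DS rule): multiply the masses of pairs whose conjunction is
    sigma-satisfiable (non-empty intersection of extensions), then renormalise."""
    out: Dict[FrozenSet[World], float] = {}
    total = 0.0
    for e1, v1 in m1.items():
        for e2, v2 in m2.items():
            inter = e1 & e2
            if inter:                        # sigma ∧ phi1 ∧ phi2 is satisfiable
                out[inter] = out.get(inter, 0.0) + v1 * v2
                total += v1 * v2
    if total == 0.0:
        raise ValueError("total conflict: the combination is undefined")
    return {e: v / total for e, v in out.items()}

# Tiny demo with sigma = ~(rain ∧ dry): m(rain) = 0.8, m(true) = 0.2.
letters = ("rain", "dry")
models = sigma_models(letters, lambda w: not (w["rain"] and w["dry"]))
rain = extension(lambda w: w["rain"], letters, models)
dry = extension(lambda w: w["dry"], letters, models)
m = {rain: 0.8, models: 0.2}
print(bel(m, rain), bel(m, models - dry))      # 0.8 0.8 : Bel(rain) and Bel(~dry)
print(round(pl(m, dry, models), 3))            # 0.2 : Pl(dry)
certain_rain = {rain: 1.0}
print(bel(ds_combine(m, certain_rain), rain))  # 1.0

   Representing formulas by their σ-extensions is just one convenient encoding; any SAT-based handling of F ORM σ would do equally well.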

2.2   A trust model
The trust model we define here will be used to express the degrees to which an agent i thinks that another agent
j is valid (resp. is a misinformer, is complete, or is a falsifier) when it reports information ϕ. The model does
not specify how these degrees are defined. For instance, they can be given by agent i on the basis of its past
experience with j. More precisely, if i has already received information from j, it can evaluate how many times it
believed it and how many times it believed the opposite. We can also imagine that if i does not know j, these
degrees are provided by a reputation fusion mechanism or some other trust assessment model [15].
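   For instance, a simple frequency-based estimate along these lines could look as follows (a hypothetical illustration; the model itself does not prescribe any estimation procedure, and the function name is ours):

def trust_degrees_from_history(n_confirmed: int, n_contradicted: int, n_reports: int):
    """Hypothetical frequency-based estimate of the degrees used below:
    v_j = share of j's past reports that turned out to be true,
    m_j = share that turned out to be false; the rest is left as ignorance."""
    assert 0 <= n_confirmed + n_contradicted <= n_reports and n_reports > 0
    v_j = n_confirmed / n_reports
    m_j = n_contradicted / n_reports
    return v_j, m_j                  # ignorance degree is 1 - (v_j + m_j)

print(trust_degrees_from_history(8, 1, 10))   # (0.8, 0.1)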
   We consider a propositional language Θ with two kinds of letters. The “information letters” p, q, . . . will be
used to model the information which is reported by the agents; the “reporting letters” of the form Ij ϕ will be
used to represent “agent j reports information ϕ”, for any propositional formula ϕ made of information letters.
For instance Ia p is a letter which represents “agent a reports information p”, Ib (p ∧ q) is a letter which represents
“agent b reports information p ∧ q”.




Definition 5 Consider two agents i and j and a piece of information ϕ i.e., a formula made of information
letters. Let vj ∈ [0, 1] and mj ∈ [0, 1] be two real numbers3 such that 0 ≤ vj + mj ≤ 1. vj is the degree to which i
trusts j for being valid relatively to ϕ and mj is the degree to which i trusts j for being a misinformer relatively
to ϕ (written V M (i, j, ϕ, vj , mj )) iff i’s beliefs can be modelled by the mass assignment mV M (i,j,ϕ,vj ,mj ) defined
by :
                                              mV M (i,j,ϕ,vj ,mj ) (Ij ϕ → ϕ) = vj
                                            mV M (i,j,ϕ,vj ,mj ) (Ij ϕ → ¬ϕ) = mj
                                          mV M (i,j,ϕ,vj ,mj ) (true) = 1 − (vj + mj )

   According to this definition, if i believes at degree vj that j is valid for ϕ and believes at degree mj that j is
a misinformer then its belief degree in the fact “if j reports ϕ then ϕ is true” is vj ; its belief degree in the fact
“if j reports ϕ then ϕ is false” is mj ; and its total ignorance degree is 1 − (vj + mj ). The following particular
cases are worth pointing out:

  • (vj = 1) and (mj = 0). In this case, mV M (i,j,ϕ,1,0) (Ij ϕ → ϕ) = 1. I.e., i is certain that if j tells ϕ then ϕ is
    true, i.e., i is certain that j is valid for ϕ. I.e., i totally trusts j for being valid relatively to ϕ.
  • (vj = 0) and (mj = 1). In this case, mV M (i,j,ϕ,0,1) (Ij ϕ → ¬ϕ) = 1. I.e., i is certain that if j reports ϕ then
    ϕ is false, i.e., i is certain that j is a misinformer for ϕ. I.e., i totally trusts j for being a misinformer relatively
    to ϕ.

Definition 6 Consider two agents i and j and a piece of information ϕ. Let cj ∈ [0, 1] and fj ∈ [0, 1] be two real
numbers4 such that 0 ≤ cj + fj ≤ 1. cj is the degree to which i trusts j for being complete relatively to ϕ and fj
is the degree to which i trusts j for being a falsifier relatively to ϕ (written CF (i, j, ϕ, cj , fj )) iff i’s beliefs can
be modelled by the mass assignment mCF (i,j,ϕ,cj ,fj ) defined by:
                                             mCF (i,j,ϕ,cj ,fj ) (ϕ → Ij ϕ) = cj
                                             mCF (i,j,ϕ,cj ,fj ) (¬ϕ → Ij ϕ) = fj
                                           mCF (i,j,ϕ,cj ,fj ) (true) = 1 − (cj + fj )

   According to this definition, if i believes at degree cj that j is complete for ϕ and believes at degree fj that j
is a falsifier then its belief degree in the fact “if ϕ is true then j reports ϕ” is cj ; its belief degree in the fact “if
ϕ is false then j reports ϕ” is fj ; and its total ignorance degree is 1 − (cj + fj ). The following particular cases
are worth pointing out:

  • (cj = 1) and (fj = 0). In this case, mCF (i,j,ϕ,1,0) (ϕ → Ij ϕ) = 1. I.e., i is certain that if ϕ is true then j
    reports ϕ, i.e., i is certain that j is complete for ϕ. I.e., i totally trusts j for being complete relatively to ϕ.
  • (cj = 0) and (fj = 1). In this case, mCF (i,j,ϕ,0,1) (¬ϕ → Ij ϕ) = 1. I.e., i is certain that j is a falsifier for ϕ.
    I.e., i totally trusts j for being a falsifier relatively to ϕ.
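   As a minimal sketch of Definitions 5 and 6 (ours; the focal formulas are written as plain strings only for readability), the two assignments can be built and checked as follows:

def vm_mass(v_j: float, m_j: float) -> dict:
    """Definition 5: degrees of validity (v_j) and of misinformation (m_j)."""
    assert 0 <= v_j and 0 <= m_j and v_j + m_j <= 1
    return {"Ij(phi) -> phi": v_j,
            "Ij(phi) -> ~phi": m_j,
            "true": 1 - (v_j + m_j)}

def cf_mass(c_j: float, f_j: float) -> dict:
    """Definition 6: degrees of completeness (c_j) and of falsification (f_j)."""
    assert 0 <= c_j and 0 <= f_j and c_j + f_j <= 1
    return {"phi -> Ij(phi)": c_j,
            "~phi -> Ij(phi)": f_j,
            "true": 1 - (c_j + f_j)}

# Both are logical mass assignments: non-negative masses summing to 1, none on false.
for m in (vm_mass(0.8, 0.1), cf_mass(0.8, 0.1)):
    assert abs(sum(m.values()) - 1.0) < 1e-12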

2.3    A quick comparison with the graded trust model
According to Demolombe [8], the notion of graded trust involves two components respectively called graded
beliefs and graded regularities. Consequently, graded trust is modelled by formulas of the form Big (φ →h ψ)
where Big is a modal operator such that Big a expresses that the strength of agent i's belief in a is g, and →h is a
conditional such that (φ →h ψ) expresses that φ entails ψ at degree h.
   The trust model we defined in the previous section does not permit quantifying the strength of the implication.
So Demolombe's graded trust model is richer. For instance, it can represent the fact that agent i is totally certain
that j is highly valid for proposition p. The previous model cannot.
   As for graded beliefs, the two models provide different ways to model them since they do not impose the
same axioms on graded beliefs. In particular, in the logical belief function theory, we have: if Beli (ϕ1 ) = g1 and
Beli (ϕ2 ) = g2 then Beli (ϕ1 ∧ ϕ2 ) ≤ min(g1 , g2 ) and Beli (ϕ1 ∨ ϕ2 ) ≥ max(g1 , g2 ). These two assertions are less
restrictive than axioms (U 2) and (U 3) of [8], which impose that Beli (ϕ1 ∧ ϕ2 ) = min(g1 , g2 ) and Beli (ϕ1 ∨ ϕ2 ) =
max(g1 , g2 ).
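   A minimal numerical illustration (our own toy example) of why these inequalities can be strict, again representing formulas by their sets of models:

# Worlds are truth assignments to (p, q); sigma is a tautology here, so all four worlds are kept.
worlds = frozenset((p, q) for p in (False, True) for q in (False, True))
P = frozenset(w for w in worlds if w[0])          # models of p
Q = frozenset(w for w in worlds if w[1])          # models of q

m = {P: 0.5, Q: 0.25, worlds: 0.25}               # m(p) = 0.5, m(q) = 0.25, m(true) = 0.25

def bel(phi):
    """Bel(phi): total mass of the focal elements whose models are all models of phi."""
    return sum(v for psi, v in m.items() if psi <= phi)

print(bel(P), bel(Q))    # 0.5 0.25
print(bel(P & Q))        # 0.0  : strictly below min(0.5, 0.25)
print(bel(P | Q))        # 0.75 : strictly above max(0.5, 0.25)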
  3 These degrees should be indexed by i and by ϕ but this is omitted for readability.
  4 Again, these degrees should be indexed by i and ϕ.




3     Applying this trust model to evaluate information
The question which is addressed in this section is:

                            To what extent can an agent believe information it gets?

   We will successively examine the case when the agent directly gets information from the information source,
then the case when there is an intermediary agent between the agent and the information source.

3.1     The agent is in direct contact with the source
Let us first give the following preliminary definitions.

Definition 7 mV M CF denotes the mass assignment obtained by combining the two previous mass assignments.
I.e,

                                  mV M CF = mV M (i,j,ϕ,vj ,mj ) ⊕ mCF (i,j,ϕ,cj ,fj ) .

   This assignment represents i's degrees of trust in the fact that j is valid, complete, a misinformer or a falsifier
relatively to information ϕ. One can check that mV M CF is defined by:

                                            mV M CF (ϕ ↔ Ij ϕ) = vj .cj
                                                mV M CF (ϕ) = vj .fj
                                      mV M CF (Ij ϕ → ϕ) = vj .(1 − cj − fj )
                                               mV M CF (¬ϕ) = mj .cj
                                           mV M CF (¬ϕ ↔ Ij ϕ) = mj .fj
                                     mV M CF (Ij ϕ → ¬ϕ) = mj .(1 − cj − fj )
                                      mV M CF (ϕ → Ij ϕ) = (1 − vj − mj ).cj
                                      mV M CF (¬ϕ → Ij ϕ) = (1 − vj − mj ).fj
                                    mV M CF (true) = (1 − vj − mj ).(1 − cj − fj )
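   Since the focal elements of the two assignments are pairwise consistent (as the list above reflects), the logical DS rule reduces here to taking products, and the nine masses always sum to 1. A quick arithmetic check (ours):

def vmcf_masses(v_j: float, m_j: float, c_j: float, f_j: float) -> dict:
    """The nine focal masses of m_VMCF, in the order of the list above."""
    ig_vm, ig_cf = 1 - v_j - m_j, 1 - c_j - f_j    # ignorance masses of the two assignments
    return {"phi <-> Ij(phi)":  v_j * c_j,
            "phi":              v_j * f_j,
            "Ij(phi) -> phi":   v_j * ig_cf,
            "~phi":             m_j * c_j,
            "~phi <-> Ij(phi)": m_j * f_j,
            "Ij(phi) -> ~phi":  m_j * ig_cf,
            "phi -> Ij(phi)":   ig_vm * c_j,
            "~phi -> Ij(phi)":  ig_vm * f_j,
            "true":             ig_vm * ig_cf}

# (v_j + m_j + ignorance) . (c_j + f_j + ignorance) = 1, whatever the admissible degrees.
assert abs(sum(vmcf_masses(0.8, 0.1, 0.7, 0.2).values()) - 1.0) < 1e-12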

Definition 8 mψ is the mass assignment defined by: mψ (ψ) = 1.

  In particular, if ψ is Ij ϕ, then mIj ϕ represents the fact that agent i is certain that j has reported ϕ. If ψ is
¬Ij ϕ, then m¬Ij ϕ represents the fact that agent i is certain that j did not report ϕ.


3.1.1    First case.
Here, we assume that agent i gets information from the information source j which reports ϕ. In this case, i's
beliefs are modelled by the following mass assignment mi :

                                               mi = mV M CF ⊕ mIj ϕ
    One can check that mi is defined by:

                                                  mi (Ij ϕ ∧ ϕ) = vj
                                                mi (Ij ϕ ∧ ¬ϕ) = mj
                                               mi (Ij ϕ) = (1 − vj − mj )

Theorem 1 Let Beli be the belief function associated with assignment mi . Then:

                                           Beli (ϕ) = vj , Beli (¬ϕ) = mj

   Consequently, when i knows that j reported ϕ and when V M (i, j, ϕ, vj , mj ) and CF (i, j, ϕ, cj , fj ), then i
believes ϕ more than ¬ϕ if and only if vj > mj , i.e., its belief degree in j's being valid is greater than its belief
degree in j's being a misinformer. These results are not surprising.




Example 1
   In this example, we consider an agent (denoted a) which gets information provided by Météo-France web site
(denoted M F ). Θ is the propositional language whose “information letters” are: rain (tomorrow will be rainy),
cold (tomorrow will be cold), dry (tomorrow will be dry), warm (tomorrow will be warm), and whose “reporting
letters” are: IM F rain (Météo-France forecats rain for tomorrow), IM F ¬rain (Météo-France forecats no rain for
tomorrow) IM F cold (Météo-France forecats cold for tomorrow), etc.
   σ is here the formula σ = (rain ↔ ¬dry) ∧ (cold ↔ ¬warm), i.e., dry means no rain and warm means not cold.

 1. Suppose that Météo-France web site indicates that tomorrow will be rainy. Suppose that, given a’s numerous
    previous access to this web site, a believes at degree 0.8 that Météo-France web site is valid for rain forecast
    and a believes at degree 0.1 that it misinforms for rain forecast i.e., V M (a, M F, rain, 0.8, 0.1).
     Then, by theorem 1, we have: Bela (rain) = 0.8 and Bela (¬rain) = 0.1, i.e., a believes at degree 0.8 that
    tomorrow will be rainy and a believes at degree 0.1 that tomorrow will be dry. Notice that we also have
    Bela (¬dry) = 0.8 and Bela (dry) = 0.1.

 2. Now suppose that the Météo-France web site forecasts rain and cold for tomorrow. Suppose that a trusts this web
    site at degree 0.8 for being valid and a trusts it at degree 0.1 for being a misinformer, i.e., V M (a, M F, (rain ∧
    cold), 0.8, 0.1). Then, by theorem 1, we have:

        • Bela (rain ∧ cold) = Bela (rain) = Bela (cold) = 0.8.
        • Bela (¬rain ∨ ¬cold) = Bela (dry ∨ warm) = 0.1,
        • Bela (¬rain) = Bela (¬cold) = Bela (dry) = Bela (warm) = 0.

     I.e., a's degree of belief in the fact that tomorrow will be rainy and cold (resp. tomorrow will be rainy,
     tomorrow will be cold) is 0.8. a's degree of belief in the fact that tomorrow will be dry or warm is 0.1. But
     a's degree of belief in the fact that tomorrow will be dry (resp. will be warm) is 0.
     One can notice however that we also have P la (dry) = P la (warm) = 0.2, i.e., a's plausibility degree in
     the fact that tomorrow will be dry (resp. will be warm) is 0.2.


  As a consequence of theorem 1 we have:

  • If V M (i, j, ϕ, 1, 0) then Beli (ϕ) = 1 and Beli (¬ϕ) = 0, i.e., i believes ϕ and does not believe ¬ϕ;

  • If V M (i, j, ϕ, 0, 1) then Beli (ϕ) = 0 and Beli (¬ϕ) = 1, i.e., i does not believe ϕ and believes ¬ϕ.

   The first result is coherent with a result obtained in Demolombe's model, in which T validi,j (ϕ) ∧ Bi Iji ϕ → Bi ϕ
and T validi,j (ϕ) ∧ Bi Iji ϕ → ¬Bi ¬ϕ are theorems. However, Demolombe's model does not model misinformers,
thus the second result cannot be compared.

3.1.2   Second case.
Here we assume that the information source j did not report ϕ. In this case, i’s beliefs can be modelled by the
mass assignment mi :

                                              mi = mV M CF ⊕ m¬Ij ϕ
  One can check that mi is defined by:

                                                mi (¬Ij ϕ ∧ ϕ) = fj
                                               mi (¬Ij ϕ ∧ ¬ϕ) = cj
                                              mi (¬Ij ϕ) = (1 − cj − fj )

Theorem 2 Let Beli be the belief function associated with assignment mi . Then,

                                           Beli (ϕ) = fj , Beli (¬ϕ) = cj




   Consequently, when i knows that j did not report ϕ and when V M (i, j, ϕ, vj , mj ) and CF (i, j, ϕ, cj , fj ), then i
believes ϕ more than ¬ϕ if and only if fj > cj , i.e., its belief degree in j's being a falsifier is greater than its belief
degree in j's being complete. Again this is not surprising.

Example 2 Suppose now that agent a reads the Météo-France web site to be informed about storms in the south
of France and that no forecast of storm is indicated on the site. Here we consider a propositional language
whose “information letters” are: storm (there will be a storm in the south of France), beach (beach access will
be open) and whose “reporting letters” are: IM F storm (Météo-France forecasts a storm in the south of France),
IM F ¬storm (Météo-France forecasts no storm in the south of France).
   Consider that σ = storm → ¬beach, i.e., in case of storm, beach access will be closed.
   Suppose that a trusts this web site at degree 0.8 for being complete for storm forecasts and trusts it at degree
0.1 for being a falsifier for storm forecasts, i.e., CF (a, M F, storm, 0.8, 0.1).
   Then, according to theorem 2, we have: Bela (storm) = 0.1 and Bela (¬storm) = 0.8. Consequently, a has a
greater degree of belief in the fact that there will be no storm than in the fact that there will be one. We also
have Bela (¬beach) = 0.1 and Bela (beach) = 0. Bela (beach) = 0 may look strange but it comes from the fact
that a has no guarantee that beach access will be open (even if there is no storm). Indeed, ¬storm → beach is
not deducible from σ, i.e., nothing states that if there is no storm, beach access will be open.


  As a consequence of theorem 2 we have:

  • If CF (i, j, ϕ, 1, 0), then Beli (ϕ) = 0 and Beli (¬ϕ) = 1, i.e., i does not believe ϕ and believes ¬ϕ;

  • If CF (i, j, ϕ, 0, 1), then Beli (ϕ) = 1 and Beli (¬ϕ) = 0, i.e., i believes ϕ and does not believe ¬ϕ.

   The first result is coherent with a result obtained in Demolombe's model, in which T completei,j (ϕ) ∧ Bi ¬Iji ϕ →
Bi ¬ϕ and T completei,j (ϕ) ∧ Bi ¬Iji ϕ → ¬Bi ϕ are theorems. However, Demolombe's model does not model falsifiers,
thus the second result cannot be compared.

3.2     There is a third agent between the agent and the source
We consider here that i is not in direct contact with the source of information k, but there is a go-between
agent named j. The question is again to know to what extent agent i can believe information it gets. Here,
we consider a propositional language whose “information letters” are: p, q, . . . and whose “reporting letters” are
Ik ϕ, Ij Ik ϕ, Ij ¬Ik ϕ, which respectively mean k reported ϕ, j told that k reported ϕ, and j told that k did not report ϕ.

  In this section, the mass assignment mV M CF is defined by:

Definition 9

              mV M CF = mV M (i,j,Ik ϕ,vj ,mj ) ⊕ mCF (i,j,Ik ϕ,cj ,fj ) ⊕ mV M (i,k,ϕ,vk ,mk ) ⊕ mCF (i,k,ϕ,ck ,fk )

   This assignment represents the beliefs of i as regards agent j being valid, complete, a misinformer or a
falsifier for information Ik ϕ and as regards agent k being valid, complete, a misinformer or a falsifier for
information ϕ.


3.2.1    First case.
Assume that j reported that k told it ϕ. In this case, i’s beliefs are modelled by the mass assignment mi defined
by:

                                                   mi = mV M CF ⊕ mIj Ik ϕ

Theorem 3 Let Beli be the belief function associated with assignment mi . Then:

                             Beli (ϕ) = vk .fk + vk .vj − vk .fk .vj − vk .fk .mj + fk .mj and
                             Beli (¬ϕ) = mk .ck + mk .vj − mk .ck .vj + ck .mj − mk .ck .mj




   Consequently,


  • If V M (i, j, Ik ϕ, 1, 0) and V M (i, k, ϕ, 1, 0) then Beli (ϕ) = 1 and Beli (¬ϕ) = 0.

  • If V M (i, j, Ik ϕ, 1, 0) and V M (i, k, ϕ, 0, 1) then Beli (ϕ) = 0 and Beli (¬ϕ) = 1.

  • If V M (i, j, Ik ϕ, 0, 1) and CF (i, k, ϕ, 1, 0) then Beli (ϕ) = 0 and Beli (¬ϕ) = 1.

  • If V M (i, j, Ik ϕ, 0, 1) and CF (i, k, ϕ, 0, 1) then Beli (ϕ) = 1 and Beli (¬ϕ) = 0.


   The first result could be provided in Demolombe's model if the inform operator were defined by omitting
the agent which receives the information. In this case, we would have T validi,k (ϕ) = Bi (Ik ϕ → ϕ). Thus
T validi,j (Ik ϕ) ∧ T validi,k (ϕ) ∧ Ij Ik ϕ → Bi ϕ and T validi,j (Ik ϕ) ∧ T validi,k (ϕ) ∧ Ij Ik ϕ → ¬Bi ¬ϕ would be
theorems. As for the last three results, they cannot be compared since Demolombe's framework does not model
misinformers or falsifiers.
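   As a quick sanity check (ours), the closed forms of Theorem 3 can be evaluated directly; the first two items above correspond to the first two calls below, and with purely "valid" trust the degrees simply multiply along the chain:

def second_hand_beliefs(v_j, m_j, c_j, f_j, v_k, m_k, c_k, f_k):
    """Bel_i(phi) and Bel_i(~phi) when j reports that k reported phi (Theorem 3)."""
    bel_phi = v_k*f_k + v_k*v_j - v_k*f_k*v_j - v_k*f_k*m_j + f_k*m_j
    bel_not_phi = m_k*c_k + m_k*v_j - m_k*c_k*v_j + c_k*m_j - m_k*c_k*m_j
    return bel_phi, bel_not_phi

print(second_hand_beliefs(1, 0, 0, 0, 1, 0, 0, 0))      # (1, 0): VM(i,j,Ik phi,1,0), VM(i,k,phi,1,0)
print(second_hand_beliefs(1, 0, 0, 0, 0, 1, 0, 0))      # (0, 1): VM(i,j,Ik phi,1,0), VM(i,k,phi,0,1)
print(second_hand_beliefs(0.5, 0, 0, 0, 0.5, 0, 0, 0))  # (0.25, 0.0): Bel_i(phi) = v_k . v_j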

3.2.2   Second case.
Here, we assume that j does not tell that k reported ϕ. In this case, i’s beliefs are modelled by:

                                               mi = mV M CF ⊕ m¬Ij Ik ϕ

Theorem 4 Let Beli be the belief function associated with assignment mi . Then:

           Beli (ϕ) = vk .ck .fj + vk .fk + vk .(1 − ck − fk ).(1 − cj ) + mk .fk .cj + (1 − vk − mk ).fk and
           Beli (¬ϕ) = vk .ck .cj + mk .ck + mk .fk .(1 − cj ) + mk .(1 − ck − fk ).fj + (1 − vk − mk ).ck .cj

   Consequently,


  • If CF (i, j, Ik ϕ, 1, 0) and CF (i, k, ϕ, 1, 0) then Beli (ϕ) = 0 and Beli (¬ϕ) = 1.

  • If CF (i, j, Ik ϕ, 1, 0) and CF (i, k, ϕ, 0, 1) then Beli (ϕ) = 1 and Beli (¬ϕ) = 0.

  • If CF (i, j, Ik ϕ, 0, 1) and V M (i, k, ϕ, 1, 0) then Beli (ϕ) = 1 and Beli (¬ϕ) = 0.

  • If CF (i, j, Ik ϕ, 0, 1) and V M (i, k, ϕ, 0, 1) then Beli (ϕ) = 0 and Beli (¬ϕ) = 1.


   Again, the first result could be provided in Demolombe's model if the inform operator were defined by omitting
the agent which receives the information. In this case, we would have T completei,k (ϕ) = Bi (ϕ → Ik ϕ). Thus
T completei,j (Ik ϕ) ∧ T completei,k (ϕ) ∧ ¬Ij Ik ϕ → Bi ¬ϕ and T completei,j (Ik ϕ) ∧ T completei,k (ϕ) ∧ ¬Ij Ik ϕ → ¬Bi ϕ
would be theorems. As for the last three results, they cannot be compared since Demolombe's framework does not
model misinformers or falsifiers.
   The following example illustrates the first item.

Example 3 Let us consider three agents denoted me (me), n (my neighbour) and M F (Météo-France web site).
Suppose that my neighbour, who regularly reads Météo-France web site, does not tell me that a storm is forecasted
in the south of France.
   Suppose that I trust my neighbour to be complete (he always tells me what he reads on the Météo-France web site),
i.e., in particular we have CF (me, n, IM F storm, 1, 0).
   Suppose that I also trust Météo-France to be complete relatively to storm forecasts (they always indicate
forecasted storms), i.e., we have CF (me, M F, storm, 1, 0).
   In this case, we get Belme (storm) = 0 and Belme (¬storm) = 1 i.e., I can conclude that there will be no
storm.




3.2.3     Third case.
Here, we assume that j reports that k did not report ϕ. In this case, i’s beliefs are modelled by:

                                              mi = mV M CF ⊕ mIj ¬Ik ϕ
    We do not detail this case but we think that the reader gets the idea.

3.2.4     Fourth case.
Here, we assume that j does not tell that k did not report ϕ. In this case, i’s beliefs are modelled by:

                                             mi = mV M CF ⊕ m¬Ij ¬Ik ϕ
    We do not detail this case but the reader can easily imagine the kind of conclusions we can derive.

4    Concluding remarks
To what extent can an agent believe a piece of information it gets from an information source? This was the
question addressed in this paper, in which we have provided a model for expressing the relations between the
trust an agent puts in the sources and its beliefs about the information they provide. This model is based
on Demolombe's model and extends it by considering information sources that can report false information.
Furthermore, this model is defined in the logical belief function theory, allowing degrees of trust to be modelled
in a quantitative way and allowing the agent to consider integrity constraints. We have shown that not only can
this model be used when the agent directly gets information from the source, but it can also be used when the
agent gets second hand information, i.e., when the agent is not directly in contact with the source.
   Notice that the belief function theory has already been used in problems related to trust management. For
instance, subjective logic, a specific case of belief function theory, has been used in [11] for trust network analysis.
Notice however that trust network analysis is not the problem addressed here. [6], already mentioned in the
introduction, is another paper which uses belief functions for estimating the plausibility of a piece of information
emitted by sources which may be non-relevant or untruthful. But the case of second hand information is not
addressed. More recently, [17] defines a formalism based on description logic and belief function theory to reason
about uncertain information provided by untrustworthy sources. However, in this work, a source is given a
single degree. This degree, called the “degree of trust”, is used for discounting the information it provides. It
looks close to the “degree to which an agent trusts a source for being valid” as introduced here, but a formal
comparison has yet to be done. Furthermore, here again the case of second hand information is not addressed.
   As to the use of the logical belief function theory, a question concerns the choice of the combination rule.
Here, we have chosen the logical DS rule of combination, which is the reformulation of the most classical rule.
But can the other rules of combination be used? Or, more precisely, in which cases should we use them instead?
This is an important question to be answered in the future.
   The model defined here would be more general if it could define information source properties according to
sets of propositions and not only to a given proposition. For instance, we could express that an agent is valid for
any proposition related to the topic “weather forecasts” and complete for any proposition related to the topic
“south of France”. That would imply that this agent is valid and complete for any information related to weather
forecasts in the south of France. However, such a reasoning has to be formally defined, and describing these topics
by means of an ontology is a possible solution.
   Finally, this work could be extended by considering the case when information sources provide uncertain
information, as [17] does. This would allow us to deal with the case when an agent reports that some fact
is highly certain or when an agent reports that another agent has reported that some fact was fairly certain.
Modelling this kind of uncertainty and mixing it with the one which is introduced in this paper would allow us
to deal with more complex cases.

Acknowledgements. I thank the anonymous referees whose pertinent questions led me to improve this paper.

References
        [1] C. E. Alchourrón, P. Gärdenfors, and D. Makinson. On the logic of theory change: Partial meet con-
            traction and revision functions. Journal of Symbolic Logic, 50:510-530, 1985.




 [2] B. F Chellas. Modal logic: An introduction. Cambridge University Press, 1980.
 [3] J. Chauvet-Paz and L. Cholvy. Reasoning with Information reported by Imperfect Agents. Proceed-
     ings of First International Workshop on Uncertainty Reasoning and Multi-Agent Systems for Sensor
     Networks (URMASSN’11), Belfast, Northern Ireland, 2011.
 [4] L. Cholvy. Collecting information reported by imperfect information sources. In S. Greco, B. Bouchon-
     Meunier, G. Coletti, M. Fedrizzi, B. Matarazzo, R. R. Yager (Eds.): Advances in Computational
     Intelligence: 14th International Conference on Information Processing and Management of Uncertainty
     in Knowledge-Based Systems (IPMU 2012), Springer, 2012,
 [5] L. Cholvy. Logical representation of beliefs in the belief function theory. Proceedings of IJCAI 2013
     Workshop on Weighted Logics for Artificial Intelligence. August 2-3 2013, Beijing, China.
 [6] F. Pichon, D. Dubois, T. Denoeux. Relevance and Truthfulness in Information Correction and Fusion.
     International Journal of Approximate Reasoning, 53(2), 159-175 (2012).
 [7] R. Demolombe. Reasoning about trust: a formal logical framework. In: Proceedings of the 2nd
     International Conference iTrust, 291-303, 2004.
 [8] R. Demolombe. Graded Trust. In Proceedings of Workshop on Trust in Agent Societies, Budapest,
     2009.
 [9] R. Demolombe. Transitivity and Propagation of Trust in Information Sources. An Analysis in Modal
     Logic In Proceedings of the 12th International Workshop on Computational Logic in Multi-Agent Sys-
     tems (CLIMA 2011), LNAI 6814, Springer-Verlag, 2011.
[10] T. Denoeux. Conjunctive and disjunctive combination of belief functions induced by nondistinct bodies
     of evidence. Artificial Intelligence, 172(2-3), 234-264, 2008.
[11] A Josang, R. Hayward and S. Pope. Trust Network Analysis with subjective logic In Proceedings of
     29th Australasian Computer Science Conference (ACSC2006), 2006.
[12] H. Katsuno and A. Mendelzon. On the Difference between Updating a Knowledge Base and Revising
     it. Proceedings of Principles of Knowledge Representation and Reasoning (KR), 1991.
[13] H. Katsuno and A. O. Mendelzon. Propositional knowledge base revision and minimal change. Artificial
     Intelligence, 52:263-294, 1991.
[14] C-J. Liau Belief, information acquisition, and trust in multi-agent systems. A modal logic formulation.
     Artificial Intelligence, 149:31-60. 2003.
[15] T. D. Huynh, N. Jennings, N. Shadbolt. FIRE: An integrated trust and reputation model for open multi-
     agent systems. Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), 2004.
[16] E. Lindhal, S. O’Hara, Q. Zhu. A Multi-Agent System of Evidential Reasoning for Intelligence Analyses
     In Edmund H. Durfee, Makoto Yokoo, Michael N. Huhns, Onn Shehory (Eds.): Proceedings of the
     6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007),
     Honolulu, Hawaii, USA, May 14-18, 2007
[17] M. Sensoy et al. Reasoning about Uncertain Information and Conflict Resolution through Trust
     Revision. In M. L. Gini, O. Shehory, T. Ito, C M. Jonker (Eds.). Proceedings of the 12th International
     Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013), Saint Paul, MN, USA,
     May 6-10, 2013.
[18] G. Shafer. A Mathematical Theory of Evidence. Princeton University Press, 1976.
[19] STANAG 2511 Standardization Agreement 2511. Intelligence reports, NATO (unclassified), January
     2003.
[20] M. Winslett. Updating Logical Databases. Cambridge University Press, 1990.



