             Trust on Beliefs: Source, Time and Expertise

                   Victor S. Melo               Alison R. Panisson                Rafael H. Bordini
                         Pontifical Catholic University of Rio Grande do Sul (PUCRS)
                 Postgraduate Programme in Computer Science – School of Informatics (FACIN)
                                           Porto Alegre – RS – Brazil
                 {victor.melo.001,alison.panisson}@acad.pucrs.br,r.bordini@pucrs.br




                                                        Abstract
                       Trust is an important mechanism that describes how credible the relations
                       between agents in a multi-agent system are. In this work, we extend the
                       idea of trust to the beliefs of agents, combining not only the provenance
                       of information but also how outdated that information is. The resulting
                       approach allows an agent to generate different trust values for beliefs,
                       depending on which meta-information is more important for a particular
                       application: the trust in the source or how recent the information is. To
                       this end, we describe some agent profiles with different characteristics,
                       combining the trust in the source and how outdated the information is.
                       Furthermore, we discuss how patterns of reasoning such as argumentation
                       schemes can play an important role in our approach, taking into account
                       the expertise of the source of information.




1    Introduction
From the dictionary, trust means “belief that someone or something is reliable, good, honest, effective,
etc.” [TRU]. This definition fits well with the context of multi-agent systems, where much work has been
devoted to the topic [PAH+ 12, PSM12, PTS+ 11, PSM13, TCM+ 11]. The principal focus of trust in multi-
agent systems is to describe the relations between agents, i.e., how credible agents appear to be to each other
in such a system.
   However, as can be observed in the dictionary definition, trust is a broadly applicable term: it can be applied
to agents, information, objects (such as vehicles, electronics, etc.), among others. In this work, we focus on the
different sources of information available to agents in a multi-agent system and the trust on each of those sources.
Furthermore, in dynamic environments, i.e., environments that are constantly changing, like the ones in which
multi-agent systems are commonly situated, it is very important to consider the time at which information was
stored or received; because of the constant changes in the environment, information becomes outdated very quickly.
   Therefore, in addition to the different sources of information available to agents in multi-agent systems, we
consider how outdated the information is. To this end, we introduce some agent profiles, which differ in the
weights attributed to each of the criteria discussed in this work, i.e., the meta-information about beliefs available
in multi-agent systems.

Copyright © by the paper’s authors. Copying permitted only for private and academic purposes.
In: J. Zhang, R. Cohen, and M. Sensoy (eds.): Proceedings of the 18th International Workshop on Trust in Agent Societies,
Singapore, 09-MAY-2016, published at http://ceur-ws.org

   The main contributions of this paper are: (i) we discuss various kinds of meta-information available in multi-agent
systems, which are very useful when the agent has conflicting beliefs, allowing it to decide which one to
believe; (ii) the meta-information considered in this work is inspired by practical platforms for developing multi-
agent systems, which makes our work attractive in practical terms; and (iii) we introduce some interesting agent
profiles, which are based on various criteria applicable to the meta-information considered. These profiles are
interesting for different application domains, where different meta-information may have different weights, as
discussed in this work.
   The remainder of this paper is structured as follows. We first describe the background of our work, including
some features of agent-oriented programming languages that are relevant to our approach, and trust in multi-agent
systems. Next, in Sections 3, 4 and 5, we discuss the application of trust values to beliefs, the possibilities of
using time as meta-information, and the combination of trust values with meta-information such as time, respectively.
In Section 6, we show how patterns of reasoning (namely argumentation schemes) can play an interesting role in
our approach, considering the expertise of the source of information. After that, we discuss some related work
and, finally, we conclude the paper with some final remarks.

2     Background
2.1    Agent-Oriented Programming Languages
There are many agent-oriented programming languages, such as Jason, Jadex, Jack, AgentFactory, 2APL, GOAL,
Golog, and MetateM, as discussed in [BDDS09], each one with different characteristics. In this work, we choose
Jason [BHW07]. Jason extends AgentSpeak(L), an abstract logic-based agent-oriented programming language
introduced by Rao [Rao96], which is one of the best-known languages inspired by the BDI (Beliefs-Desires-
Intentions) architecture, one of the most studied architectures for cognitive agents.
   Jason has some characteristics that are interesting for our approach and, in this section, we describe some of
those features.

    • Strong negation: Strong negation helps the modelling of systems where uncertainty cannot be avoided,
      allowing the representation of things that the agent believes to be true, believes to be false, and things that
      the agent is ignorant about. Therefore, an agent is able to believe, for example, that a particular block is
      blue, represented by the predicate blue(block), or that the block is not blue, represented by the predicate
      ¬blue(block)1 . Furthermore, Jason allows agents to have both pieces of information in their belief base, with
      different annotations indicating different sources, time-steps, etc., as described below;

    • Belief annotations: One interesting characteristic of Jason is that it automatically annotates every belief
      in the belief base with the source from which the belief was obtained (sensing the environment, communication
      with another agent, or a mental note created by the agent itself). An annotation has the following format:
      blue(block)[source(john)], stating that the source of the belief that the block is blue is agent john. In
      addition to the automatic annotation of the source, the programmer can customise the handling of the events
      of receiving/perceiving any information, adding annotations of time and any other meta-information they
      want to store.

    • Speech-act based communication: Jason uses performatives based on speech acts in its communication
      language, and formal semantics has been given for the changes in mental attitudes caused by the perfor-
      matives available in the Jason extension of AgentSpeak. The performatives available in Jason can be easily
      extended, and their effects on the agent’s mental state can also be customised. Among such customisations,
      it is possible to add the annotations mentioned above.

  There are other interesting characteristics in Jason, such as a series of customisable interpreter functions;
more details can be found in [BHW07].
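   To make the annotation mechanism concrete, the sketch below gives a rough Python analogue of an annotated
belief. This is an illustration for this paper only, not Jason’s actual API, and the field names are our own.

from dataclasses import dataclass, field

@dataclass
class AnnotatedBelief:
    """A rough analogue of a Jason belief such as blue(block)[source([ag1]), time([3])]."""
    predicate: str                                  # e.g. "blue(block)"
    negated: bool = False                           # strong negation: ~blue(block)
    sources: list = field(default_factory=list)     # e.g. ["ag1", "percept1", "self"]
    times: list = field(default_factory=list)       # time-step at which each source informed it

# The block is believed to be blue, informed by ag1 at time 1 and ag2 at time 3.
belief = AnnotatedBelief("blue(block)", sources=["ag1", "ag2"], times=[1, 3])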

2.2    Trust in Multi-agent Systems
Trust is a useful mechanism for decentralised systems, where autonomous entities deal with uncertain information
and have to decide what to believe [PAH+ 12, PTS+ 11, TCM+ 11]. In trust-based approaches, agents can use
the level of trust associated with the sources of contradictory information in order to decide about which one
to believe. There are many different approaches to trust in the literature [PAH+ 12, PSM12, PTS+ 11, PSM13,
TCM+ 11, CFP03], but here we will build our definitions mostly based on the concepts presented in [PTS+ 11,
    1 We use the ¬ symbol to represent strong negation.




TCM+ 11]. First, in this section, we describe trust simply as a relation between agents, while in Section 3 we
expand it, associating trust values with beliefs, which represent how much an agent trusts that belief based on
the sources it has for it.
   Considering trust as a relation between agents and following the definition presented in [TCM+ 11], a trust
relation can be formalised as:

                                                         τ ⊆ Ags × Ags
   where the existence of the relation indicates that an agent assigns some level of trust to another agent. For
example, τ (Agi , Agj ) means that agent Agi has at least some trust on agent Agj . It is important to realise that
this is not a symmetric relation, so if τ (Agi , Agj ) holds, this does not imply that τ (Agj , Agi ) holds too.
   A trust network is a directed graph representing a trust relation. It can be defined as:

                                                          Γ = ⟨Ags, τ ⟩
   where Ags is the set of nodes in the graph, representing the agents of the trust network, and τ is the set of
edges, where each edge is a pairwise trust relation between agents of Ags. An example of a trust network can
be seen in Figure 1.
   [Figure 1: Trust Network Example — a trust network over agents Ag1–Ag5, shown as a directed graph whose
labelled edges are direct trust relations, with indirect trust relations also indicated.]
   In order to measure trust, we follow the definition given in [PTS+ 11, TCM+ 11], where a function tr with the
following signature:

                                                     tr : Ags × Ags → R

   is used. It returns a value between 0 and 1, representing how much an agent trusts another. However,
differently from [PTS+ 11, TCM+ 11], we define the relation between tr and τ as:

                                       tr (Agi , Agj ) ≥ 0    ⇔   (Agi , Agj ) ∈ τ
                                       tr (Agi , Agj ) = null ⇔   (Agi , Agj ) ∉ τ
    so, in our definition, a trust level can in fact be zero, represented by tr (Agi , Agj ) = 0, which means that Agi
does not trust Agj . This is different from cases where Agi has no trust value assigned to Agj , represented by
tr (Agi , Agj ) = null. We use (Agi , Agj ) ∉ τ to denote that Agi has no acquaintance with Agj , i.e., it is not able
to assess how trustworthy Agj is. Both cases can be seen in Figure 1, where we have tr (Ag4, Ag5) = 0 and
(Ag1, Ag4) ∉ τ .
    Trust is a transitive relation, so an agent Agi can trust Agj directly or indirectly. Direct trust occurs when
agent Agi directly assigns a trust value to Agj . Indirect trust occurs when, continuing the previous example,
Agj trusts another agent Agk : in this case we could say that Agi indirectly trusts Agk .
    We say there is a path between agents Agi and Agj if it is possible to create a sequence of nodes of length n,
n ≥ 1:

                                              ⟨Ag0 , Ag1 , Ag2 , . . . , Agn−1 , Agn ⟩
  so that




                                    τ (Ag0 , Ag1 ), τ (Ag1 , Ag2 ), . . . , τ (Agn−1 , Agn )
   with Ag0 = Agi and Agn = Agj . In order to measure the trust from one particular path from Agi to Agj
we need to use an operator to consider all the direct trust values in that path. Following the idea proposed
in [PTS+ 11], a general operator ⊗tr can be defined as follows:

                               tr (Agi , Agj ) = tr (Ag0 , Ag1 ) ⊗tr ... ⊗tr tr (Agn−1 , Agn )
   which will define the trust value that Agi has on Agj according to the path Ag0 , . . . , Agn from Agi to Agj ,
constructed as defined above. If it happens that there are m different paths between Agi and Agj , a first possible
path having a trust value of tr (Agi , Agj )1 and the mth having tr (Agi , Agj )m , following [PTS+ 11] we can define
a generic operator ⊕tr so that:

                               tr (Agi , Agj ) = tr (Agi , Agj )1 ⊕tr . . . ⊕tr tr (Agi , Agj )m
    For simplicity, in this paper we use those generic operators instantiated as:

    • The trust of a path operator ⊗tr is the minimum trust value along the path. That is, it is defined as:

                                 tr (Agi , Agj ) = min{tr (Ag0 , Ag1 ), . . . , tr (Agn−1 , Agn )}

      given a path Ag0 , . . . , Agn from Agi to Agj as defined above.
    • The ⊕tr over trust paths is defined as:

                                 tr (Agi , Agj ) = max{tr (Agi , Agj )1 , . . . , tr (Agi , Agj )m }

      where m is the number of different possible paths between Agi and Agj .

   In practical terms, the trust framework makes the agent explicitly aware of how trustworthy the other agents in
the multi-agent system are, and this information is available to the agent by means of predicates such as
trust(ag1,0.8), meaning that the trust value that this agent places on ag1 is 0.8.
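   As an illustration of how this computation can be realised in practice, the following minimal sketch (ours, not
part of any existing framework) computes tr over a small hypothetical trust network, using the instantiation
chosen above: ⊗tr as the minimum along a path and ⊕tr as the maximum over all paths.

def all_paths(edges, src, dst, seen=None):
    # enumerate all simple paths from src to dst over the direct trust relation
    seen = seen or {src}
    if src == dst:
        yield [src]
        return
    for (a, b) in edges:
        if a == src and b not in seen:
            for rest in all_paths(edges, b, dst, seen | {b}):
                yield [src] + rest

def tr(edges, agi, agj):
    # min along each path (the paper's choice for ⊗tr), max over all paths (⊕tr); None if no path
    values = []
    for path in all_paths(edges, agi, agj):
        direct = [edges[(a, b)] for a, b in zip(path, path[1:])]
        if direct:
            values.append(min(direct))
    return max(values) if values else None

# hypothetical direct trust relation (not the exact edges of Figure 1)
edges = {("ag1", "ag2"): 0.9, ("ag2", "ag3"): 0.7,
         ("ag1", "ag4"): 0.8, ("ag4", "ag3"): 0.6}
print(tr(edges, "ag1", "ag3"))   # max(min(0.9, 0.7), min(0.8, 0.6)) = 0.7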

3     Trust on Beliefs
In this section, we introduce a way to calculate trust values for beliefs, based on the trust values assigned
to the sources of those beliefs. We consider not only other agents as sources of information, but also
perception of the environment, artifacts, and “mental notes” (beliefs created by an agent itself). For trust values
of information received from other agents, we assume that these values are explicitly asserted in the belief
base of agents (but calculated dynamically) based on the approach presented in the previous section. For trust
values of information perceived from the environment, these values depend on the application domain, where,
for example, multiple sensors could have varying degrees of trustworthiness.
   For the purpose of a running example, we use the following trust values:
                                                Source        Trust Value
                                                  ag1             0.3
                                                  ag2             0.4
                                                  ag3             0.5
                                                  ag4             0.8
                                                  self            1.0
                                                percept1          0.9
                                                percept2          0.6

                           Table 1: Values of Trust on Individual Sources of Information
   Therefore, we expand trust to be a relation between an agent and the possible sources of information. So
function tr (Agi , Agj ) that returns the trust level of Agi on Agj is generalised to:

                                                         tr (Agi , sj )




   where sj represents one of the sources of information for agent Agi . This way, an agent Agi has a trust level
on other kinds of sources, percepts or mental notes. This is interesting for cases when, using a similar example
to the one presented in [AB14], an agent Agi has a sensor st which is known to have an accuracy of 80%. This
way, the trust Agi has on st is defined as tr (Agi , st ) = 0.8, associating the known percentage of accuracy with
the trust value on st .
   Further, the trust value of a particular sensor could be learned from experience, which seems more appropriate
to the concept of trust used in this work.
   It is important to emphasise that function tr returns the value of trust an agent has on some source. Now we
can define a trust value associated to beliefs using function tr.
   As a belief ϕ of an agent Agi can come from multiple sources, in order to know how much Agi trusts ϕ, we
must consider the tr value associated with each source of ϕ for Agi . For this, we introduce the function trb i
below:

                                                       trb i : ϕ → R
   where trb i (ϕ) returns the trust value that Agi has on belief ϕ based on the trust level Agi has on the
sources that asserted information ϕ. The operation that calculates trb i (ϕ) varies according to agent profiles,
corresponding to different attitudes towards one’s sources of information.
   We introduce two agent profiles for calculating trust values over beliefs. They both may be interesting in
different domains, depending on whether we are interested in credulous or sceptical agents.

Definition 1 (Credulous Agent) A credulous agent considers only the most trustworthy source of informa-
tion, and does not look for an overall social value.

  The formula used by a credulous agent to consider the most trusted source is:

                                     trb i (ϕ) = max{tr (Agi , s1 ), ..., tr (Agi , sn )}

  where {s1 , ..., sn } is the set of sources that informed ϕ to Agi .

Definition 2 (Sceptical Agent) A sceptical agent considers the number of sources from which it has received
the information, and the trust value of each such source, in order to have some form of social trust value.

   A sceptical agent considers the quantity of sources that the information ϕ comes from. Therefore, we use a
formula that sums the trust values of the sources from which Agi has received the information ϕ, determining a
social trust value as follows:

                                 trb i (ϕ) = ( Σs∈Sϕ+ tr (Agi , s) ) / ( |Sϕ+ | + |Sϕ− | )

   where Sϕ+ = {s1 , ..., sn } is the set of n different sources of ϕ and Sϕ− is the set of sources of ¬ϕ.
   For example, considering an agent Agi with the trust values presented in Table 1, if Agi receives an information
ϕ from a set of sources Sϕ+ = {Ag1, Ag2, Ag3} and receives ¬ϕ from Sϕ− = {Ag4}, then:

  • A credulous agent will consider only the maximum trust values in Sϕ+ and Sϕ− , so it will get trb i (ϕ) = 0.5
    and trb i (¬ϕ) = 0.8.

  • A sceptical agent will consider all the various sources. In particular, it will get trb i (ϕ) = (0.3+0.4+0.5)/4 = 0.3
    and trb i (¬ϕ) = 0.8/4 = 0.2.

   As another example, when Agi receives an information ϕ whose sources are Sϕ+ = {percept1 } and
receives ¬ϕ with sources Sϕ− = {Ag2, Ag3, Ag4}, then:

  • A credulous agent will have trb i (ϕ) = 0.9 and trb i (¬ϕ) = 0.8, having greater trust in ϕ than in its negation.

  • A sceptical agent, however, will have trb i (ϕ) = 0.9/4 ≈ 0.2 and trb i (¬ϕ) = (0.4+0.5+0.8)/4 ≈ 0.4, preferring to believe
    ¬ϕ instead.
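   The computation performed by the two profiles can be summarised by the short sketch below, which reproduces
the numbers of the first example above using the trust values of Table 1; the function names are ours, used only
for illustration.

trust = {"ag1": 0.3, "ag2": 0.4, "ag3": 0.5, "ag4": 0.8,
         "self": 1.0, "percept1": 0.9, "percept2": 0.6}

def trb_credulous(sources):
    # only the most trusted source counts
    return max(trust[s] for s in sources)

def trb_sceptical(sources_for, sources_against):
    # sum of the trust of the supporting sources, normalised by the total
    # number of sources on both sides (a "social" trust value)
    total = len(sources_for) + len(sources_against)
    return sum(trust[s] for s in sources_for) / total

s_plus, s_minus = ["ag1", "ag2", "ag3"], ["ag4"]      # sources of ϕ and of ¬ϕ
print(trb_credulous(s_plus), trb_credulous(s_minus))  # 0.5  0.8
print(round(trb_sceptical(s_plus, s_minus), 2),
      round(trb_sceptical(s_minus, s_plus), 2))       # 0.3  0.2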




   There are cases when, for an information ϕ received by an agent Agi , trb i (ϕ) is equal to trb i (¬ϕ). For a
credulous agent, it is easy to see that this occurs when the maximum trust value tr (Agi , sv ), for a source
sv ∈ Sϕ+ , equals the maximum trust value tr (Agi , sw ) for a source sw ∈ Sϕ− .
   Differently, for sceptical agents, this occurs when Σs∈Sϕ+ tr (Agi , s) is equal to Σs∈Sϕ− tr (Agi , s).
   For these cases, we can consider other meta-information, such as the time the beliefs were acquired (e.g., giving
preference to more recent information), in order to decide what to believe. In the next sections, we describe some
possibilities for such extra criteria.

3.1   Expanding Trust Assessment
In Section 2.2, we defined the ⊕tr operator, which calculates the trust level that an agent Agi has on another
agent Agj when there are n paths between them, as the maximum operator. However, considering the agent
profiles, such as the credulous and sceptical profiles presented, a sceptical agent could consider the number n of
paths between the two agents Agi and Agj to calculate tr (Agi , Agj ).
    For example, consider two agents, Agi and Agj , where, using the max operator, we have tr (Agi , Agj ) = 0.6,
while np(Agi , Agj ) = 1 where np is a function returning the number of all different paths between Agi and Agj
in the trust network. This way we have tr (Agi , Agj ) = tr (Agi , Agj )1 . Now consider another agent Agk , where
tr (Agi , Agk ) = 0.6 is defined using the max operator too, but np(Agi , Agk ) = 4. Then we have tr (Agi , Agk ) =
tr (Agi , Agk )1 ⊕tr ... ⊕tr tr (Agi , Agk )4 . This way, it is possible for Agi to consider Agk as more reliable than Agj
taking into consideration the various different paths between them.
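   One possible way to realise such a path-aware ⊕tr — our own illustration, not an operator proposed in the
paper — is a probabilistic sum over the path values, so that several concordant paths yield a higher combined
trust than a single path of the same value:

def combine_paths(path_values):
    # probabilistic sum (noisy-OR): each extra path increases the combined value
    result = 0.0
    for v in path_values:
        result = result + v - result * v
    return result

print(combine_paths([0.6]))                                # 0.6   (one path, as for Agj)
print(round(combine_paths([0.6, 0.6, 0.6, 0.6]), 3))       # 0.974 (four paths, as for Agk)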
    In organisational-based multi-agent systems/societies such as [HSB07], agents could assign trust values to
groups of agents (or entire organisations) in the multi-agent system, in order to avoid some undesired bias.
For example, consider an agent Ag receiving information from a set of agents Ags = {Ag1 , Ag2 , . . . , Agn }, all
belonging to the same organisation B. Then, it might be misleading if Ag considers those agents from Ags as
different sources, thereby giving a social weight to that information, as those agents could spread some untrue
information of interest to organisation B, or be biased in any way simply for belonging to the same organisation.
For those cases, there may be a way for agent Ag to keep the information that the agents in Ags represent the
same organisation, and this way Ag may not consider them as different sources. We will investigate that in
future work.
    More importantly, in future work we aim to combine our platform with existing trust systems (such as [KSS13])
so that we use realistic forms of update of levels of trust in sources of information while agents interact with
each other and the environment, building on the extensive work done in the area of trust and reputation in
multi-agent systems [PSM13].

4     Using Time
As presented in the last section, even using trust an agent can have contradictory information with the
same trust values, and for those cases other meta-information is needed. An example of such meta-information
is time. There can be scenarios where the time a piece of information was acquired is even more important
than the trust in its sources, and this is explored below with the definition of two different profiles, which
may be interesting in different domains.
    The first thing to consider is the way an agent can maintain information about time. As presented before,
in Jason [BHW07] a belief can be annotated with each source that informed it. Besides, annotations can also
be easily used for recording the time at which the belief was informed by each source.
    Considering that an environment is constantly changing, the search for the most recent information is often
related to the search for the most accurate information. Sometimes it is even possible that an agent Agj
informs ϕ to Agi and, some time later, Agj informs ¬ϕ. Considering that the only source of ϕ and ¬ϕ is Agj , using
time, Agi can easily decide for ¬ϕ, as it is the most recently informed belief and the trust level of the source Agj is
the same. This way, it is possible to see that there exists a timeline of acquired beliefs. For example, consider
the discrete time structure of Table 2, representing the instants of time at which beliefs are acquired by an agent
Agi .
    This way, Agi acquired two beliefs, ¬ϕ being the latest acquired belief. Considering that the trust levels of Ag1
and Ag2 are the same, at times 1 and 2 Agi will believe ϕ, while at times 3 and 4 Agi will believe ¬ϕ.
    The function that returns the most recent time at which a belief ϕ was received from a source sj is time(ϕ, sj ). This
way, considering the discrete representation of time in Table 2, for the belief ϕ acquired by Agi from agent Ag1 ,
we have time(ϕ, Ag1 ) = 1.




                                   Time        time 1      time 2              time 3       time 4
                                   Belief         ϕ                              ¬ϕ
                                   Source        Ag1                             Ag2

                                       Table 2: Discrete time representation
   Note that an information ϕ can be received from multiple sources at multiple times. For example, for an
agent Agi , Ag1 may inform ϕ at time 1 and Ag2 may inform ϕ at time 3. In this case, each source will be
annotated with its own time; for example, in Jason, the belief that a block is blue could be:

                                   blue(block)[source([ag1, ag2]), time([t1, t3])].
   Considering the discrete time structure represented before, we can define the outdated time as the difference
between the current time and the time at which a piece of information was informed by a source. For an
information received by an agent Agi , we call this difference Ti .

Definition 3 Considering an agent Agi , Ti (ϕ, sj ) is the difference between the current time and the time at which ϕ
was acquired from source sj .

  The formula of Ti , considering a belief ϕ received by an agent Agi from a source sj and a variable
now representing the current time, could be:

                                           Ti (ϕ, sj ) = now − time(ϕ, sj )
   For a more complex example, it is interesting to note that a belief can come from different sources. Consider
S(ϕ) = {s1 , ..., sn } and S(¬ϕ) = {s1 , ..., sm } as the sets of sources for the information ϕ and ¬ϕ, respectively. In
this case, an agent could compare a source sv1 ∈ S(ϕ) with another source sw1 ∈ S(¬ϕ), where sv1 and sw1 are
the sources with the minimal Ti value in their respective sets. If Ti is the same for sv1 and sw1 , then other
sources sv2 ∈ S(ϕ) and sw2 ∈ S(¬ϕ) can be selected, where the time annotated for sv2 is the most recent in the set
S(ϕ) \ {sv1 } and the time for sw2 is the most recent in S(¬ϕ) \ {sw1 }. This can be applied recursively until some
svi ∈ S(ϕ) is found whose Ti is greater or lower than that of the corresponding swi ∈ S(¬ϕ); if there are no more
sources for ϕ or ¬ϕ to compare, the agent remains uncertain about ϕ.
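   The recursive comparison just described can be sketched as follows, where the time-stamps are hypothetical
and Ti = now − time, as defined above:

def decide_by_time(times_phi, times_not_phi, now):
    # sources of ϕ and of ¬ϕ are compared pairwise by how outdated they are,
    # most recent first; the first strict difference decides, otherwise undecided
    t_phi = sorted(now - t for t in times_phi)
    t_not = sorted(now - t for t in times_not_phi)
    for a, b in zip(t_phi, t_not):
        if a < b:
            return "phi"        # ϕ has the more recent source at this rank
        if b < a:
            return "not phi"    # ¬ϕ has the more recent source at this rank
    return "undecided"          # ran out of sources to compare

# tied at the first rank (Ti = 1 on both sides), decided at the second (Ti 3 < 4)
print(decide_by_time([1, 4], [2, 4], now=5))   # "not phi"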

5   Trust and Time
To combine trust and time, it is important to realise how one affects the other. Here trust is the focus, with
time being used to adjust the trust level appropriately, prioritising the most recent information. This way, the older
a piece of information is, the less trusted it should be, for the reasons presented in the previous section. So we have:

Definition 4 (Outdated Information) Considering a belief ϕ acquired by an agent Agi from a source sj , the
greater the Ti (ϕ, sj ) value is, the lower the trust level for this belief ϕ should be.

   Note that, naturally, the trust level of ϕ will decrease as time passes, unless some sources keep informing ϕ
often enough to compensate for the trust loss.
   Considering an agent Agi , we can define a function trsi (ϕ, sj ) that returns the trust of ϕ considering just the
information received from the source sj at time time(ϕ, sj ). Considering S(ϕ) = {s1 , s2 , ..., sn } the set of sources
that informed ϕ, Agi may have a different trs value for ϕ associated with each source. Here we use a generic
operator ⊗trs to relate the trust of the source with the time since the belief was informed by it:

                                      trsi (ϕ, sj ) = tr (Agi , sj ) ⊗trs Ti (ϕ, sj )

   Now, another function can be defined. Considering the same set of sources S(ϕ), the function trt has just ϕ
as parameter. As in Jason each belief has annotated all the sources that informed it, there is no need to
pass the set of sources of ϕ as a parameter. We use a generic operator ⊕trt , relating all the trs values
assigned to each source. So we have:

                                    trti (ϕ) = trsi (ϕ, s1 ) ⊕trt ... ⊕trt trsi (ϕ, sn )




   Now there is a generic operator relating the trust of the sources with the time at which they informed a belief.
   We defined two agent profiles in Section 3, credulous and sceptical, where each one calculates trbi (ϕ) according
to its own attitude. Now we define two further profiles for calculating trti (ϕ); again, they may be interesting in
different domains, depending on whether we are interested in meticulous or conservative agents. It is interesting
to note that an agent may be credulous meticulous, sceptical meticulous, credulous conservative or sceptical
conservative.

Definition 5 (Conservative Agent) A conservative agent uses time only when the trust values of the conflicting
beliefs are the same.

   For contradictory beliefs, a conservative agent will calculate trbi (ϕ) and trbi (¬ϕ). The way trbi is calculated
depends on whether the agent is also credulous or sceptical. If the trust values of the beliefs are the
same, it will calculate trti (ϕ) and trti (¬ϕ) to determine the trust of each one considering the time they were
informed.

Definition 6 (Meticulous Agent) A meticulous agent uses trust only when the times of the conflicting beliefs are
the same.

    Differently from conservative agents, a meticulous agent Agi will first calculate trti (ϕ) and trti (¬ϕ) and,
if they are the same, Agi will consider just the trust, ignoring the time at which the beliefs were acquired, calculating trbi (ϕ)
and trbi (¬ϕ).
    As an example, consider an agent Agi and the beliefs acquired according to the timeline of Table 3:
                                     Time       time 1    time 2    time 3    time 4
                                     Belief        ϕ         ϕ                  ¬ϕ
                                     Source       Ag1       Ag2                 Ag3

                                        Table 3: Discrete time representation
   And consider the trust that Agi has in each source according to Table 4:
                                                Source     Trust Value
                                                 Ag1           0.6
                                                 Ag2           0.4
                                                 Ag3           0.5

                           Table 4: Values of Trust on Individual Sources of Information
   Considering, for simplicity, that ⊗trs is a fraction operator, we have trsi (ϕ, sj ) = tr (Agi , sj ) / Ti (ϕ, sj ). This
definition keeps the idea that the bigger Ti is, the less trusted a belief should be. Now, consider the ⊕trt operator as
a max operator, so that trti (ϕ) = max{trsi (ϕ, s1 ), ..., trsi (ϕ, sn )}, for a set of sources {s1 , ..., sn }. Then,
consider that Agi is a credulous agent and that the current time in the timeline presented is time 5, so:

  • A credulous conservative agent will consider trbi (ϕ) = 0.6 and trbi (¬ϕ) = 0.5, opting to believe ϕ.

  • A credulous meticulous agent will consider trsi (ϕ, Ag1 ) = 0.6/(5−1) = 0.15 and trsi (ϕ, Ag2 ) = 0.4/(5−2) ≈ 0.13.
    Then, we have trti (ϕ) = max{0.15, 0.13} = 0.15. For ¬ϕ, we have trsi (¬ϕ, Ag3 ) = 0.5/(5−4) = 0.5, so
    trti (¬ϕ) = max{0.5} = 0.5. Thus, a meticulous agent will believe ¬ϕ, as trti (ϕ) = 0.15 and trti (¬ϕ) = 0.5.
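   The sketch below reproduces this worked example (Tables 3 and 4, with the current time being 5), using the
instantiations above: trs as tr divided by Ti and trt as the maximum; the helper names are ours.

trust = {"ag1": 0.6, "ag2": 0.4, "ag3": 0.5}
phi     = [("ag1", 1), ("ag2", 2)]    # sources and times for ϕ
not_phi = [("ag3", 4)]                # sources and times for ¬ϕ
now = 5

def trb(sources):                     # credulous trust on a belief
    return max(trust[s] for s, _ in sources)

def trt(sources):                     # time-weighted trust: max over tr / Ti
    return max(trust[s] / (now - t) for s, t in sources)

# Credulous conservative: compare trb first, fall back to trt only on a tie.
print(trb(phi), trb(not_phi))             # 0.6 0.5  -> believes ϕ
# Credulous meticulous: compare trt first, fall back to trb only on a tie.
print(round(trt(phi), 2), trt(not_phi))   # 0.15 0.5 -> believes ¬ϕ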

   As presented above, the trust of a belief depends on the trust of its sources. A natural approach is, when a
belief ϕ is shown to be true or false, to change the trust of the sources of ϕ, increasing it when ϕ proves true and
decreasing it otherwise. Considering time, we can refine this idea: the older a piece of information is, the more
time its subject had to change in the environment. Thus, there can be cases when an information ϕ was true when
acquired but, after some time, ϕ became false; some of ϕ’s sources might not have informed something false, it
simply became false as the environment changed. The idea is, then, considering an agent Agi , for a belief ϕ
received from a source sj , the longer Ti (ϕ, sj ) is, the less trust sj loses in case ϕ shows itself to be false to
Agi .
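   One possible instantiation of this idea — an assumption of ours, since no specific formula is fixed here — is to
shrink the penalty applied to a source in proportion to how outdated its report already was:

def penalise(tr_value, ti, base_penalty=0.2):
    # the older the information (larger Ti), the smaller the penalty applied to the source
    return max(0.0, tr_value - base_penalty / (1 + ti))

print(round(penalise(0.8, ti=0), 2))   # 0.6  - a fresh report that proved false is punished fully
print(round(penalise(0.8, ti=9), 2))   # 0.78 - an old report loses almost no trust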




6     Considering the Expertise of a Source
Another interesting criterion to combine with, or even to generate, trust values for beliefs is the expertise
of the source with regard to specific kinds of information. For example, when a friend tells you that it is going to
rain today, and you watch on television that it is going to be a sunny day, although you have more trust in your
friend, it is reasonable to consider that the weatherperson is an expert in that subject (i.e., weather) and therefore
more reasonable to assume that it will be a sunny day.
    A way to take the expertise of the source into account is to use patterns of reasoning, for example the so-called
argumentation schemes [WRM08]. In particular, regarding the expertise of the source, Walton [Wal96] introduces
the argumentation scheme called Argument from position to know, described below2 :
      Major Premise: Source a is in a position to know about things in a certain subject domain S
      containing proposition A.
      Minor Premise: a asserts that A (in domain S) is true (or false).
      Conclusion: A is true (or false).
    The associated critical questions (CQs) for the Argument from position to know are:
    • CQ1: Is a in a position to know whether A is true (or false)?
    • CQ2: Is a an honest (trustworthy, reliable) source?
    • CQ3: Did a assert that A is true (or false)?
   Therefore, the pattern of reasoning can be analysed in a dialectical way, where its conclusion is evaluated
through the critical questions, due to its defeasible nature; if the pattern of reasoning is valid, the trust
value of that information can be incremented, considering that it comes from an expert source (i.e., someone in
a position to know) and there is no reason to doubt it.
   Of course, as we are dealing with a value-based framework, it is necessary to attribute some kind of value to
expert sources, including the extent to which the critical questions are correctly answered. Again, these values could depend
on the application domain: safety-critical applications such as the ones related to health could give greater
consideration to the expertise of the source. For example, it is more reasonable to consider the opinion of a
doctor who is expert/specialised in the particular health problem than the opinion of a general practitioner.
   On the other hand, in some domains the expertise of the source may not be so important, for example, in our
previous example about the weather: the consequences of taking or not taking the umbrella are not as serious as those
of a wrong diagnosis of a serious illness.
   We can observe that the argumentation scheme from position to know itself considers how trustworthy the source
is. Further, it considers whether the source is in a position to know about the subject, and whether it was that
same source that provided the information directly.
   In order to exemplify these ideas, imagine the following scenario related to the stock market: an agent named
ag2 has informed agent ag1 that an expert, named ex1 , said that a particular kind of stock, named st1 , has
great investment potential. Further, consider the following trust values related to the argumentation scheme
for agent ag1 :
                                             Source/Belief        Trust Value
                                                   ag2                0.6
                                                   ex1                0.8
                                             expert(ex1 , st1 )       0.9

                                      Table 5: Values of Trust and Beliefs.
   Considering the profiles introduced in Section 3, the credulous and sceptical agents consider the trust value
for the information that st1 has great investment potential, great invest potential (st1), to be 0.6, because there is
only one source for great invest potential (st1), namely ag2 , with a trust value of 0.6.
   However, as we argued, there are some application domains in which it is reasonable to consider the expertise
of the source, giving extra weight to such information when the agent believes the source is an expert. With
this idea in mind, we introduce the following profiles based on the argumentation scheme (reasoning pattern)
described above.
   2 For simplicity, we use the more general argumentation scheme from position to know instead of the argumentation scheme for

expert opinion, which is a subtype of the argument from position to know [WRM08, p. 14].




Definition 7 (Suspicious Agent) A suspicious agent considers only the trust value of the source that provided
the information to it, and ignores the trust on the original source that provided the information to the agent
that informed the suspicious agent.

   In our scenario, the trust value of great invest potential (st1) for a suspicious agent is 0.6. However, the agent
that received that information is able to ask the original source of that information, in this case the expert
ex1 , about the investment directly and, when receiving the information great invest potential (st1) from ex1 , the trust
value becomes 0.8.
   As observed, for a suspicious agent, even when receiving the information directly from the original source, what
counts is the trust it has in the source, not the expertise of the source. Considering the expertise rather than
the trust in the source can be very useful in some application domains, mainly because trust values
can be learned from experience, while the expertise of that particular source could be acquired from a reliable
newspaper, web page, etc.

Definition 8 (Expertise-recogniser Agent) An expertise-recogniser agent considers the trust value of the
information based on how much the source is an expert in that subject.

    Considering our scenario, the trust value for great invest potential (st1) becomes 0.9.
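    The difference between the two profiles on the stock-market scenario can be sketched as follows, using the
values of Table 5; the helper functions are illustrative only, not part of the formal framework.

trust = {"ag2": 0.6, "ex1": 0.8}
expertise = {("ex1", "st1"): 0.9}     # trust on the belief expert(ex1, st1)

def trust_suspicious(informing_source):
    # only the agent that actually delivered the information counts
    return trust[informing_source]

def trust_expertise_recogniser(original_source, subject, informing_source):
    # prefer the degree of expertise of the original source on the subject, if known
    return expertise.get((original_source, subject), trust[informing_source])

print(trust_suspicious("ag2"))                           # 0.6 (relayed by ag2)
print(trust_suspicious("ex1"))                           # 0.8 (after asking ex1 directly)
print(trust_expertise_recogniser("ex1", "st1", "ag2"))   # 0.9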

7    Related Work
Tang et al., in [TCM+ 11], combine argumentation and trust, taking into account trust on information used in
argument inferences. That work is based on work presented by Parsons et al. [PTS+ 11], which proposes a formal
model for combining trust and argumentation, aiming to find relationships between these areas.
   Our work differs from [PTS+ 11, TCM+ 11] in some respects. We introduce an approach for computing trust
values for beliefs that differs from [PTS+ 11, TCM+ 11], where trust on a piece of information is assumed to be
more directly available to argumentation. Different from those approaches, we allow for different sources for the
same information (which is often the case for Jason agents) and propose ways to combine them into a single trust
value for that information. We also define some agent profiles to facilitate the development of agents that require
different social perspectives on the trust values of multiple sources; this is not considered in [PTS+ 11, TCM+ 11]
either.
   Another difference from [PTS+ 11, TCM+ 11] is that we consider other meta-information available in the multi-
agent system (e.g., time annotation), which is inspired by agent-oriented programming languages that have such
meta-information easily available.
   Alechina et al., in [ABH+ 06], introduce a well-motivated and efficient belief revision algorithm for AgentSpeak.
The authors do not use trust or reputation in that work, though. Therefore, the main point where our
work differs from [ABH+ 06] is in the use of trust. We also argue that our approach could be used to
improve the belief revision process presented in [ABH+ 06], as the trust values and the reasoning pattern introduced
here could play an important role in the belief revision process too.
   Amgoud and Ben-Naim, in [ABN15], propose a new family of argumentation-based logics (built on top of
Tarskian logic) for handling inconsistency. An interesting aspect of [ABN15] is that it defines an ap-
proach in which the arguments are evaluated using a “ranking semantics”, which orders the arguments from
the most acceptable to the least acceptable ones. The authors argue that, with a total order of arguments,
the conclusions that are drawn are ranked with regard to plausibility. Although [ABN15] does not use trust,
the proposed approach provides ordered arguments, thus avoiding undecided conflicts. Our approach follows
the same principles, but considers different criteria for ranking the information that agents have available
in their belief bases, using the different meta-information available in agent-oriented languages. Using such
meta-information in argumentation-based approaches is part of our ongoing work [MPB16].
   Parsons et al., in [PAH+ 12], identify ten different patterns of argumentation, called schemes, through which
an individual/agent can acquire trust in another. Using a set of critical questions, the authors show a way to
capture the defeasibility inherent in argumentation schemes and are able to assess whether an argument is good
or fallacious. Our approach differs from [PAH+ 12] in that we are not interested in agents arguing about the
trust they have in each other. We are interested in using such trust values and combining them with other
meta-information available in the multi-agent system in order to use trust to resolve conflicts between beliefs,
which might be interesting in domains where it is important for the agents not to remain undecided about
some information.




   Similarly to the way we used a reasoning pattern based on an argumentation scheme, we argue that the argumentation
schemes from [PAH+ 12] could be used to evolve the trust values of different sources. In future work
we intend to investigate such an integration.
   Biga and Casali, in [AB14], present an extension of Jason, called G-Jason, which allows the creation of more flexible
agents that reason about uncertainty, representing belief degrees and grades using the annotation feature provided
by Jason. The authors define degOfCert(X), where X is a value between 0 and 1, as a value associated with the
certainty of a belief, and planRelevance(LabelDegree) as a value associated with plans, where the LabelDegree
value is based on the plan’s context and its triggering event’s degOfCert level. Our approach differs from [AB14] in that
we use the notion of trust on agents and sensors in order to infer a level of certainty of beliefs. Further, we
consider other meta-information, and we define profiles that combine different uses of such information, which
is not considered in [AB14].

8   Final Remarks
In this work, we described how the different sources of information available to an agent in a multi-agent system
and the trust in each of those sources can be combined to generate trust values for beliefs. Further, considering
that shared multi-agent environments are constantly changing, we also combine the time at which information was
stored/received by the agent, allowing agents to take into consideration how outdated a piece of information is.
    In addition, we discussed how a dialectical pattern of reasoning (i.e., an argumentation scheme) can play an
interesting role in our approach. We showed that argumentation schemes could guide agents to consider
the expertise of the source instead of only the trust the agent has in that source. This idea seems interesting,
considering that trust values are normally learned from experience, while the expertise of a particular
source can also be acquired from reliable sources of information such as the specification of the multi-agent system.
    Finally, considering the different weights given to the meta-information discussed in this work, we introduced
some agent profiles that can be useful for different application domains, as discussed in our brief examples.
    In future work, we intend to evaluate the impact of each profile introduced in this work on different kinds
of applications, keeping an open mind for new (or even middle-ground) profiles. This will allow us to identify the
best profiles for each application domain, depending on the overall behaviour desired for the multi-agent system.
    Furthermore, we also intend to combine the different profiles introduced in this work, as well as new pro-
files we will investigate, with practical argumentation-based reasoning mechanisms such as [PMVB14], where
trust relations among agents may play an important role in decisions over conflicting arguments. Some initial
investigation in this direction can be found in [MPB16].

References
[AB14]      Adrián Biga and Ana Casali. G-JASON: an extension of Jason to engineer agents capable to reason
            under uncertainty. 2014.

[ABH+ 06] Natasha Alechina, Rafael H Bordini, Jomi F Hübner, Mark Jago, and Brian Logan. Automating
          belief revision for AgentSpeak. In Declarative Agent Languages and Technologies IV, pages 61–77.
          Springer, 2006.

[ABN15]     Leila Amgoud and Jonathan Ben-Naim. Argumentation-based ranking logics. In Proceedings of the
            2015 International Conference on Autonomous Agents and Multiagent Systems, pages 1511–1519.
            International Foundation for Autonomous Agents and Multiagent Systems, 2015.

[BDDS09] Rafael H. Bordini, Mehdi Dastani, Jürgen Dix, and Amal El Fallah Seghrouchni. Multi-Agent Pro-
         gramming: Languages, Tools and Applications. Springer Publishing Company, Incorporated, 1st
         edition, 2009.

[BHW07]     Rafael H. Bordini, Jomi Fred Hübner, and Michael Wooldridge. Programming Multi-Agent Systems
            in AgentSpeak using Jason (Wiley Series in Agent Technology). John Wiley & Sons, 2007.

[CFP03]     Cristiano Castelfranchi, Rino Falcone, and Giovanni Pezzulo. Trust in information sources as a
            source for trust: a fuzzy approach. In Proceedings of the second international joint conference on
            Autonomous agents and multiagent systems, pages 89–96. ACM, 2003.




[HSB07]   Jomi F Hubner, Jaime S Sichman, and Olivier Boissier. Developing organised multiagent systems
          using the moise+ model: programming issues at the system and agent levels. International Journal
          of Agent-Oriented Software Engineering, 1(3-4):370–395, 2007.

[KSS13]   Andrew Koster, W. Marco Schorlemmer, and Jordi Sabater-Mir. Opening the black box of trust:
          reasoning about trust models in a BDI agent. J. Log. Comput., 23(1):25–58, 2013.
[MPB16]   Victor S. Melo, Alison R. Panisson, and Rafael H. Bordini. Argumentation-based reasoning using
          preferences over sources of information. In fifteenth International Conference on Autonomous Agents
          and Multiagent Systems (AAMAS), 2016.
[PAH+ 12] Simon Parsons, Katie Atkinson, Karen Haigh, Karl Levitt, Peter McBurney, Jeff Rowe, Munindar P
          Singh, and Elizabeth Sklar. Argument schemes for reasoning about trust. Computational Models of
          Argument: Proceedings of COMMA 2012, 245:430, 2012.
[PMVB14] Alison R. Panisson, Felipe Meneguzzi, Renata Vieira, and Rafael H. Bordini. An Approach for
         Argumentation-based Reasoning Using Defeasible Logic in Multi-Agent Programming Languages. In
         11th International Workshop on Argumentation in Multiagent Systems, 2014.
[PSM12]   Simon Parsons, Elizabeth Sklar, and Peter McBurney. Using argumentation to reason with and about
          trust. In Argumentation in multi-agent systems, pages 194–212. Springer, 2012.

[PSM13]   Isaac Pinyol and Jordi Sabater-Mir. Computational trust and reputation models for open multi-agent
          systems: a review. Artificial Intelligence Review, 40(1):1–25, 2013.
[PTS+ 11] Simon Parsons, Yuqing Tang, Elizabeth Sklar, Peter McBurney, and Kai Cai. Argumentation-based
          reasoning in agents with varying degrees of trust. In The 10th International Conference on Au-
          tonomous Agents and Multiagent Systems-Volume 2, pages 879–886. International Foundation for
          Autonomous Agents and Multiagent Systems, 2011.
[Rao96]   Anand S. Rao. AgentSpeak(L): BDI agents speak out in a logical computable language. In Pro-
          ceedings of the 7th European Workshop on Modelling Autonomous Agents in a Multi-Agent World:
          Agents Breaking Away, MAAMAW ’96, pages 42–55, Secaucus, NJ, USA, 1996. Springer-Verlag
          New York, Inc.

[TCM+ 11] Yuqing Tang, Kai Cai, Peter McBurney, Elizabeth Sklar, and Simon Parsons. Using argumentation
          to reason about trust and belief. Journal of Logic and Computation, page 38, 2011.
[TRU]     Trust definition. http://www.merriam-webster.com/dictionary/trust. Accessed: 2016-02-11.
[Wal96]   Douglas Walton. Argumentation schemes for presumptive reasoning. Routledge, 1996.

[WRM08] D. Walton, C. Reed, and F. Macagno. Argumentation Schemes. Cambridge University Press, 2008.



