


    How can Subjective Impulsivity play a role among
      Information Sources in Weather Scenarios?

                                               Rino Falcone and Alessandro Sapienza
                                           Institute of Cognitive Sciences and Technologies,
                                                        ISTC – CNR, Rome, Italy
                                            {rino.falcone, alessandro.sapienza}@istc.cnr.it


Abstract— The topic of critical hydrogeological phenomena, due to flooding, has a particular relevance given the risk that it implies. In this paper we simulated complex weather scenarios in which forecasts coming from different sources become relevant. Our basic idea is that agents can build their own evaluations of the future weather events by integrating these different information sources, also considering how trustworthy each single source is with respect to each individual agent. These agents learn the sources' trustworthiness in a training phase. Moreover, agents are differentiated on the basis of their own ability to make direct weather forecasts, on their possibility to receive bad or good forecasts from the authority, and on the possibility of being influenced by their neighbors' behaviors. Quite often in real scenarios some irrational behaviors arise, whereby individuals tend to impulsively follow the crowd, regardless of its reliability. To model that, we introduced an impulsivity factor that measures how much agents are influenced by their neighbors' behavior, a sort of "crowd effect". The results of these simulations show that, thanks to a proper trust evaluation of their sources made in the training phase, the different kinds of agents are able to better identify the future events.

Keywords— trust; social simulation; cognitive agents.

I. INTRODUCTION

The role of impulsivity in human behaviors has relevant effects on the final evaluations and decisions of both individuals and groups. Although we are working in the huge domain of social influence [4][7][8], we consider here impulsivity as the attitude of taking a decision based on just a partial set of evidence, although further evidence is easily reachable and acquirable. Sometimes this kind of behavior can produce unpredictable consequences that were not taken into consideration while deciding [16]. Impulsivity is a multifactorial concept [5]; however, we are interested in identifying the role that it can play in a specific set of scenarios.

In particular, in this paper we simulated complex weather scenarios in which there are relevant forecasts coming from different sources. Our basic idea is that agents can build their own evaluations of the future weather events by integrating these different information sources, also considering how trustworthy each single source is with respect to each individual agent. These agents learn the sources' trustworthiness in a training phase. They are differentiated i) on the basis of their ability to make direct weather forecasts, ii) on their possibility to receive bad or good forecasts from an authority, and iii) on the possibility of being influenced by their neighbors' behaviors.

Given this picture, our simulations investigated several interactions among different kinds of agents, testing different weather scenarios with different levels of impulsivity. We also considered the role that both expertise and information play on the impulsivity factor.

The results of these simulations show that, thanks to a proper trust evaluation of their sources made through the training phase, the different kinds of agents are able to better identify the future events. A particular and interesting result concerns the fact that impulsivity can be considered, in specific situations, as a rational and optimizing factor, in some way contradicting the nature of the concept itself. In fact, as in some human cases, it is possible that we have learned specific behaviors based on just one information source that is enough for the most efficient behavior, although we could access other different and trustworthy sources. In that case we consider as impulsive a behavior that is in fact fully effective.

II. THE TRUST MODEL

According to the literature [1][2][10][11][17], trust is a promising way to deal with information sources. In particular, in this work we are going to use the computational model of [13], which is in turn based on the cognitive model of trust of Castelfranchi and Falcone [3]. It exploits Bayesian theory, one of the most used approaches in trust evaluation [9][12][18], representing all the information as a probability distribution function (PDF).

In this model each information source S is represented by a trust degree called TrustOnSource [6], with 0 ≤ TrustOnSource ≤ 1, plus a Bayesian probability distribution (PDF) that represents the information reported by S. The TrustOnSource parameter is used to smooth the information reported by S: the more I trust the source, the more I consider the PDF; the less I trust it, the more the PDF is flattened. Once an agent gets the contribution from all its sources, it aggregates the information to produce the global evidence (GPDF), estimating the probability that each event is going to happen.
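The paper does not give pseudocode for this step, but the smoothing and aggregation can be sketched in a few lines. In the sketch below (Python), the flattening rule (a convex mix between the reported PDF and the uniform distribution) and the normalized product used to build the GPDF are our own assumptions, not necessarily the exact operations of the model in [13].

```python
# A minimal sketch, not the exact model of [13]: each source's PDF over the five
# events is flattened toward the uniform distribution as trust decreases, and the
# smoothed PDFs of all the sources are combined into a global PDF (GPDF).

N_EVENTS = 5
UNIFORM = [1.0 / N_EVENTS] * N_EVENTS

def smooth(pdf, trust_on_source):
    """Mix the reported PDF with the uniform PDF according to trust in [0, 1]."""
    return [trust_on_source * p + (1.0 - trust_on_source) * u
            for p, u in zip(pdf, UNIFORM)]

def aggregate(sources):
    """Combine (pdf, trust) pairs into a normalized global PDF (GPDF)."""
    gpdf = [1.0] * N_EVENTS
    for pdf, trust in sources:
        gpdf = [g * s for g, s in zip(gpdf, smooth(pdf, trust))]
    total = sum(gpdf)
    return [g / total for g in gpdf] if total > 0 else UNIFORM[:]

# Example: a fully trusted source predicting event 5 and a half-trusted source
# predicting event 3; the GPDF concentrates on event 5.
gpdf = aggregate([([0, 0, 0, 0, 1.0], 1.0), ([0, 0, 1.0, 0, 0], 0.5)])
```

In this sketch a fully distrusted source is reduced to the uniform distribution, so it leaves the GPDF unchanged, which matches the intuition that it carries no evidence.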
A. Feedback On Trust

We want to let agents adapt to the context in which they move. This means that, starting from a neutral trust level (one that implies neither trust nor distrust), agents will try to understand how much to rely on each single information source (TrustOnSource), using direct experience for trust evaluations [14][15]. To do that, they need a way to perform feedback on trust. We propose to use a weighted mean. Given the two parameters α and β¹, the new trust value is computed as:

newTrustOnSource = α ∗ TrustOnSource + β ∗ performanceEvaluation,   with α + β = 1    (1)

¹ Of course, changing the values of α and β will have an impact on the trust evaluations. With high values of α/β, agents will need more time to get a precise evaluation, but a low value (below 1) will lead to an unstable evaluation, as it would depend too much on the last performance. We do not investigate these two parameters in this work, using respectively the values 0.9 and 0.1. In order to have good evaluations, we let agents make a lot of experience with their information sources.

TrustOnSource is the previous trust degree and performanceEvaluation is the objective evaluation of the source's performance. This last value is obtained by comparing what the source said with what actually happened. Considering the PDF reported by the source (which is split into five parts, as there are 5 possible events), the estimated probability of the event that actually occurred is completely taken into account, and the estimated probability of the events immediately adjacent to it is taken into account for just 1/3. We in fact suppose that even if the evaluation is not right, it is not, however, entirely wrong. The rest of the PDF is not considered. Let us suppose that the most critical event, event 5, occurred. A first source reported a 100% probability of event 5, a second one a 50% probability of event 5 and a 50% probability of event 4, and a third one asserted a 100% probability of event 3. Their performance evaluations will be: Source1 = 100%; Source2 = 66.67% (50% + (50/3)%); Source3 = 0%. Figure 1 shows the corresponding PDFs.

Fig. 1. (a) A source reporting a 100% probability of event 5. (b) A source reporting a 50% probability of event 5 and 50% probability of event 4. (c) A source reporting a 100% probability of event 3.
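As a concrete check of the scoring rule and of formula (1), the sketch below (Python; the helper names are ours) recomputes the three performance evaluations of the example and applies the weighted-mean update with the α and β values used in this work.

```python
ALPHA, BETA = 0.9, 0.1  # values adopted in this work (see footnote 1)

def performance_evaluation(pdf, occurred):
    """Score a reported PDF against the event that occurred (events 1..5):
    full credit for the occurred event, 1/3 credit for the adjacent events,
    nothing for the rest of the PDF."""
    score = pdf[occurred - 1]
    for neighbor in (occurred - 1, occurred + 1):
        if 1 <= neighbor <= 5:
            score += pdf[neighbor - 1] / 3.0
    return score

def update_trust(trust_on_source, performance):
    """Weighted-mean feedback of formula (1)."""
    return ALPHA * trust_on_source + BETA * performance

# The three sources of the example, with event 5 occurring:
source1 = [0, 0, 0, 0, 1.0]    # 100% on event 5      -> 1.0    (100%)
source2 = [0, 0, 0, 0.5, 0.5]  # 50% on 5, 50% on 4   -> 0.6667 (66.67%)
source3 = [0, 0, 1.0, 0, 0]    # 100% on event 3      -> 0.0    (0%)
scores = [performance_evaluation(p, 5) for p in (source1, source2, source3)]
new_trust = [update_trust(0.5, s) for s in scores]  # starting from the neutral 0.5
```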
III. THE PLATFORM

Exploiting NetLogo [19], we created a very flexible platform, where a lot of parameters are taken into account to model a variety of situations. Given a population distributed over a wide area, some weather phenomena happen in the world with a variable level of criticality. The world is made of 32x32 patches, wrapping both horizontally and vertically, and is populated by a number of cognitive agents (citizens), distributed in a random way, that have to evaluate which will be the future weather event on the basis of the information sources they have and of the trustworthiness they attribute to these different sources.

We provided the framework with five possible events, going from 1 to 5, with increasing level of criticality: level 1 stands for no event, with no risk at all for the citizens; level 5 means that there will be a tremendous event due to a very high level of rain, with possible risks for the agents' safety. The other values represent intermediate events with increasing criticality.

In addition to the citizens, there is another agent called authority. Its aim is to promptly inform the citizens about the weather phenomena. The problem is that, by their nature, weather forecasts improve their precision as the event approaches. Consequently, as time passes the authority is able to produce a better forecast, but it will not be able to inform all the citizens, as there will be less time to spread the information.

A. Information Sources

To make a decision, each citizen can consult a set of information sources, each reporting some evidence about the incoming meteorological phenomenon.

We considered the presence of three kinds of information sources (whether active or passive) for citizens:

1. Their personal judgment, based on the direct observation of the phenomena. Although this is a direct and always true (at least in that moment) source, in general a common citizen is not always able to understand the situation, maybe because it is not able to, it does not possess any instrument, or it is just not in the condition to properly evaluate a weather event. So we have introduced two kinds of agents: the expert ones and the inexpert ones.

2. Notification from the authority: the authority distributes weather forecasts into the world, trying to prepare citizens for what is going to happen. As time passes, it is able to produce a better forecast, but it will not be able to inform everyone. In this sense we have two kinds of agents: the well-informed ones and the ill-informed ones.

3. Others' behavior: agents are in some way influenced by community logics, tending to partially or totally emulate their neighbors' behavior (other agents within a radius of 3 NetLogo patches). The probability of each event is directly proportional to the number of neighbors making each kind of decision (see the sketch after this list). This source can have a positive influence if the neighbors behave correctly; otherwise it represents a drawback.

None of these sources is perfect. In any situation there is always the possibility that a source reports wrong information.
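A minimal sketch of how the social source's PDF could be built from the neighbors' decisions, as described in item 3 above (neighbors that have not decided yet are ignored, as specified in the Workflow section); the function name and the list-based representation are our own assumptions.

```python
N_EVENTS = 5

def social_pdf(neighbor_decisions):
    """Build the social source's PDF: the probability of each event is
    proportional to the number of neighbors that decided for it.
    `neighbor_decisions` contains events in 1..5; undecided neighbors are
    simply not included in the list."""
    counts = [0] * N_EVENTS
    for event in neighbor_decisions:
        counts[event - 1] += 1
    total = sum(counts)
    if total == 0:                      # nobody in the radius has decided yet
        return [1.0 / N_EVENTS] * N_EVENTS
    return [c / total for c in counts]

# Example: six neighbors chose event 4 and two chose event 5.
pdf = social_pdf([4, 4, 4, 4, 4, 4, 5, 5])   # [0.0, 0.0, 0.0, 0.75, 0.25]
```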
B. Agents' Description

At the beginning of the simulation, the world is populated by a number of citizens, all having the same neutral trust value of 0.5 for all their information sources. This value represents a situation in which citizens are not sure whether or not to trust a given source (a value of 1 represents complete trust and 0 complete distrust).
There are two main differences between citizens. The first one relies on how able they are to see and read the phenomena. In fact, in the real world not all the agents have the same abilities. To represent these different abilities, we associated different values of standard deviation, related to the meteorological events, with the citizens' evaluations.

In order to shape this, we divided agents into two sets:

1. Class 1: good evaluators; they have good capabilities to read and understand what is going to happen. They will almost always be able to detect the event correctly (90% of the time; standard deviation of 0.3), and we then expect them to highly trust their own opinion.

2. Class 2: bad evaluators; they are not so able to understand what is going on (20% of the time, which is the same performance as a random output; standard deviation of 100). In order to understand which weather event is going to happen in the near future, they have to consult other information sources.

The second difference is due to how easily they are reached by the authority. The idea is that the authority reaches everyone, but as time passes it produces new, updated information. There will be agents able to get the updated information, but not all of them will be able to do so. To model this fact, we defined two agent classes:

1. Class A: they possess the newest information produced by the authority; the information they receive has a 90% probability of being correct;

2. Class B: they are only able to get the first forecast of the authority; the information they receive has a 30% probability of being correct.

C. The authority

The authority's aim is to inform citizens about what is going to happen. The best case would be the one in which it is able to produce a correct forecast and has the time to spread this information through the whole population. However, reaching everyone with correct information is as desirable as it is unrealistic. The truth is that the precision of weather forecasts increases as the event approaches.

In the real world the authority does not stop making predictions and spreading them. As already said, in the simulations we modeled this by dividing the population into two classes. Agents belonging to class B will just receive the old information. This is produced with a standard deviation of 1.5, which means that this forecast will be correct 30% of the time. Then the authority will spread updated information. Being closer to the incoming event, this forecast has a higher probability of being correct. It is produced with a standard deviation of 0.3, so that it will be correct 90% of the time.

As a design choice, we made it more convenient in the simulation to use the authority as a source rather than personal evaluations, except for experts, who are as good as a reliable authority.
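The paper characterizes both the personal evaluations and the authority's forecasts only through a standard deviation around the true event. One plausible realization, sketched below under our own assumptions (Gaussian noise around the true event, rounded and clipped to the range 1..5), gives roughly the reported 90% accuracy for a standard deviation of 0.3; how exactly the 30% and 20% figures arise for the larger deviations depends on details the paper does not specify, so the sketch is only indicative.

```python
import random

N_EVENTS = 5

def noisy_forecast(true_event, sigma):
    """Sample a forecast of the true event (1..5) with Gaussian noise of standard
    deviation `sigma`, rounded and clipped to the admissible range. This
    realization is our assumption; the paper only states the deviations."""
    return min(N_EVENTS, max(1, round(random.gauss(true_event, sigma))))

# With sigma = 0.3 roughly 90% of the samples land on the true event; larger
# deviations (1.5 for the early authority forecast, 100 for inexpert citizens)
# make the forecast progressively closer to a random guess.
hit_rate = sum(noisy_forecast(3, 0.3) == 3 for _ in range(10_000)) / 10_000
```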
D. Citizens' Impulsivity

Sometimes impulsivity overcomes logic and rationality. This is more evident in critical situations, but it is still plausible in the other cases. Maybe the authority reports a light event, but the neighbors are escaping: in this case it is easy to be influenced by the crowd's decision, to make a decision solely based on the social effect, letting "irrationality" emerge. Let us explain this concept of "irrationality" better: we consider that an agent follows an "irrational" behavior when it takes a decision considering just one of its own information sources although it also has other available sources to consult. In this work we consider just the social source as subject to the impulsivity conditioning.

Impulsivity is surely a subjective factor, so our citizens are endowed with an impulsivity threshold, which measures how prone they are to irrational choices due to the crowd effect. This threshold is affected by the other two sources, the authority and the direct experience, as they add rationality to the decisional process.

The threshold goes from 0 to 1 and, given a value of this threshold, being well informed or an expert adds 0.2 to it (if an agent is both informed and expert, it adds 0.4). Therefore it is important for individuals to be informed, so that they are less sensitive to irrationality and are able to produce decisions based on more evidence. In our experiments we consider a common impulsivity threshold (IthCom), which is the same for all the agents, and two additional factors (AddInf and AddExp), due to the information and the expertise each agent has, that determine the individual impulsivity threshold (IthAgent). In practice, given an agent A, we can say that:

IthA = IthCom + AddInf + AddExp    (2)

The threshold is compared with the PDF reported by the social source. If there is one event that has a probability of happening (according to this source) greater than the impulsivity threshold, then the agent acts impulsively.
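Formula (2) and the triggering rule can be summarized in a few lines. In the sketch below (Python, with our own names) we also cap IthAgent at 1, which is what the saturation mentioned later in the Simulations section suggests, since the threshold is compared against a probability.

```python
ADD_INF = 0.2   # bonus for well-informed agents
ADD_EXP = 0.2   # bonus for expert agents

def individual_threshold(ith_com, informed, expert):
    """Formula (2): IthAgent = IthCom + AddInf + AddExp, capped at 1."""
    return min(1.0, ith_com
               + (ADD_INF if informed else 0.0)
               + (ADD_EXP if expert else 0.0))

def acts_impulsively(social_pdf, ith_agent):
    """An agent acts impulsively if the social source assigns some event a
    probability greater than its individual impulsivity threshold."""
    return max(social_pdf) > ith_agent

# Example: common threshold 0.7, a well-informed but inexpert agent (category 2A),
# and neighbors strongly converging on event 4.
threshold = individual_threshold(0.7, informed=True, expert=False)    # 0.9
impulsive = acts_impulsively([0.0, 0.0, 0.05, 0.95, 0.0], threshold)  # True
```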
E. Platform Input

The first thing that can be customized is the number of citizens in the world and how they are distributed between the performance categories and the reachability categories. Then, one can set the value of the two parameters α and β, used for updating the sources' trust evaluation. It is possible to change the authority's reliability for each of the reachability categories. Concerning the training phase, it is possible to change its duration. Finally, it is possible to set the impulsivity threshold and how much it will be modified by each rational source.

F. Workflow

The simulation is divided into two steps. The first one is called "training phase" and has the aim of letting agents gain experience with their information sources, so that they can determine how reliable each source is.

At the beginning of this phase, we generate a world containing an authority and a given number of citizens, with different abilities in understanding weather phenomena and different possibilities of being informed by the authority. Then citizens start collecting information, in order to understand which event is going to happen. The authority gives a forecast reporting its estimated level of criticality. As already explained, it produces two different forecasts. All the citizens will receive the first one, but it is less precise as it is not produced close enough to the event. The second one is much more precise but, being close to the event, it is not possible for the authority to inform all the citizens.

In any case, being just forecasts, it is not sure that they are really going to happen. They will have a probability linked to the precision of the authority (depending on its standard deviation).

Then citizens evaluate the situation on their own and also exploit the others' evaluations (through the effect of their decisions). Remember that the social source is the result of the process aggregating the agents' decisions in the neighborhood: if a neighbor has not yet decided, it is not considered. If, according to the others' evaluations, there is one event that has a probability of happening greater than the impulsivity threshold, then they act impulsively. This means that they are not going to consider the three sources they have, but just the social one. If this does not happen, then they consider all the information they can access and aggregate each single contribution according to the corresponding trust value. Finally they estimate the possibility that each event happens and select the choice that minimizes the risk.

While citizens collect information they are considered as "thinking", meaning that they have not decided yet. When they reach the decisional phase, the citizens have to make a decision, which cannot be changed anymore. This information is then available for the others (the neighborhood), which can in turn exploit it for their decisions. At the end of the event, citizens evaluate the performance of the sources they used and adjust the corresponding trust values. This phase is repeated 100 times (so there will be 100 events), so that agents can gain enough experience to judge their sources.

After that, there is the "testing phase". Here we want to understand how agents perform once they know how reliable their sources are. In this phase, we compute the accuracy of their decisions (1 if correct, 0 if wrong).
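Putting the pieces together, the per-event decision step described above can be sketched as follows. This is a sketch under our own assumptions: it reuses the aggregate helper from the trust-model sketch, and it implements the "choice that minimizes the risk" simply as picking the most probable event, a simplification since the paper does not detail the risk function.

```python
from dataclasses import dataclass

@dataclass
class Citizen:
    trust_self: float       # trust in the personal evaluation
    trust_authority: float  # trust in the authority's forecast
    trust_social: float     # trust in the neighbors' behavior
    ith_agent: float        # individual impulsivity threshold (formula (2))

def decide(citizen, own_pdf, authority_pdf, social_pdf):
    """One decision step of the training/testing loop (our reconstruction)."""
    # Impulsive branch: follow the crowd alone if some event's social
    # probability exceeds the individual threshold.
    if max(social_pdf) > citizen.ith_agent:
        gpdf = social_pdf
    else:
        # Rational branch: smooth every source by its trust and aggregate
        # (aggregate() is the helper defined in the trust-model sketch above).
        gpdf = aggregate([(own_pdf, citizen.trust_self),
                          (authority_pdf, citizen.trust_authority),
                          (social_pdf, citizen.trust_social)])
    # Pick an event; the most probable one stands in for the risk-minimizing choice.
    return max(range(1, 6), key=lambda event: gpdf[event - 1])
```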
IV. SIMULATIONS

In the simulations we tested the effect of impulsivity on a population with different abilities to interpret the events and different possibilities of being informed by the authority. It is worth noting that impulsivity affects everyone: even the more expert or informed agents can be misled by their neighbors' decisions.

A. Simulations' Outputs

In this section we describe the metrics we used in order to understand and analyze each simulation.

The first one is agents' performance. Concerning a single event, the performance of an agent is considered correct (and assumes value 1) if it correctly identified the event, or wrong (and assumes value 0) if it made a mistake about the event.

The second dimension we analyze is the decisional distance. Suppose that event 5 will happen. An agent X foresees event 4, while another agent Y supposes there will be event 1. Both these decisions are wrong, but the decision of agent Y is much more wrong than that of X. Practically speaking, in case of a critical event (represented in fact by event 5) agent X could take some important measures to prevent damage to itself and its properties, while agent Y just does nothing. Maybe both agents suffer damage, but X probably manages to reduce the damage, or at least the probability of being damaged, while Y does not.

For a single agent, its decisional distance is defined as the difference between the event that is going to happen and the agent's forecast. For instance, agent X's decisional distance is 1, while Y's is 4. We want this dimension to be as low as possible; ideally, in a perfect world it should be 0, meaning that the agent makes the right prediction.

A third dimension is represented by the percentage of impulsive decisions.

The last dimension that we investigate is the trust in the information sources. The section "Feedback on Trust" explains how agents produce their trust evaluations, based on the source performance. They possess a trust value for each of their three sources.

We introduced these four metrics for individual agents. In the results they will be presented by aggregating the values of a category of agents and averaging them over the number of times that the experiment is repeated (500 times). In particular, in order to provide a better analysis of the results, we are not going to simply consider the categories of agents previously described, but their combinations: 1A = well informed and expert agents; 2A = well informed and not expert agents; 1B = less informed and expert agents; 2B = less informed and not expert agents.
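The first two metrics are straightforward to compute; a small sketch of the per-agent correctness and decisional distance, and of their aggregation over a category and over the repetitions of the experiment, is given below (the function names are ours).

```python
def correctness(predicted, actual):
    """1 if the agent identified the event, 0 otherwise."""
    return 1 if predicted == actual else 0

def decisional_distance(predicted, actual):
    """Distance between the occurred event and the agent's forecast:
    predicting 4 when 5 occurs gives 1, predicting 1 gives 4."""
    return abs(predicted - actual)

def mean(values):
    return sum(values) / len(values)

def category_results(runs):
    """Average both metrics within a category of agents and over the repeated
    runs (500 repetitions in the reported results). `runs` is a list of runs,
    each run being a list of (predicted, actual) pairs for that category."""
    return (mean([mean([correctness(p, a) for p, a in run]) for run in runs]),
            mean([mean([decisional_distance(p, a) for p, a in run]) for run in runs]))
```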
B. Simulations' Scenario

In the scenarios we investigated, the percentage of well-informed citizens and the percentage of expert citizens are the same, as we are mainly interested in increasing/decreasing the quantity of good information and expertise that the population possesses. Of course, as the assignment of citizens to categories is random, an overlap between these categories is possible: a well-informed citizen can also be an expert.

Simulation settings (also collected in the configuration sketch after this list):
1. number of agents: 200;
2. α and β: respectively 0.9 and 0.1;
3. authority reliability: we used a standard deviation of 1.5 to produce the first forecast reported by the authority (it is correct about 30% of the time) and 0.3 for the second one (its forecasts are correct about 90% of the time);
4. percentage of well-informed citizens and percentage of expert citizens: {10-10, 20-20, 30-30, 45-45, 60-60, 75-75};
5. training phase duration: 100 events;
6. impulsivity threshold: we experimented with the four cases {0.3, 0.5, 0.7, 0.9}.
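For reference, the settings above can be collected in a single configuration object; this is just a sketch, and the field names are ours, not the platform's actual NetLogo variables.

```python
from dataclasses import dataclass

@dataclass
class SimulationSettings:
    n_agents: int = 200
    alpha: float = 0.9                  # weight of the previous trust value
    beta: float = 0.1                   # weight of the last performance evaluation
    sigma_first_forecast: float = 1.5   # early authority forecast (about 30% correct)
    sigma_second_forecast: float = 0.3  # updated authority forecast (about 90% correct)
    informed_expert_percentages: tuple = (10, 20, 30, 45, 60, 75)
    training_events: int = 100
    impulsivity_thresholds: tuple = (0.3, 0.5, 0.7, 0.9)
```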
                                                                                     there is a higher probability that they will be wrong.
                                                                                     Concerning agents’ decision, it is interesting not just to see the
                                                                                     percentage of success, but also how they differ from the
                                                                                     correct decision. The decisional distance reports this
                                                                                     information.
                                                                                     From the graphs in Figure 2d we can clearly see that
                                                                                     increasing the quantity of information in the world (experts
                                                                                     and informed agents) the decisional distance decreases. It also
                                                                                     seems to decrease increasing the impulsivity threshold: in
                                                                                     practice, the forecasts are more correct when the agents are
                                                                                     more informed or expert and less impulsive.
                                                                                     C. Trust Analysis
                                                                                     Talking about trust, analyzing the four categories 1A, 1B, 2A
                                                                                     and 2B the components of self-trust and authority trust do not
                                                                                     change. They in fact assume a fixed value in all the cases, not
Fig. 2. (a) Agents’ correctness in the case 10-10. (b) Agents’ correctness in
                                                                                     being influenced by the impulsivity threshold or by the
the case 30-30. (c) Agents’ correctness in the case 75-75. (d) Agents’
decisional distance                                                                  quantity of information in the world (just by its quality).
                                                                                     Figure 3a, 3b, 3c and 3d show these values respectively to the
It is worth noting that when the impulsivity threshold (IthCom)                      categories 1A, 1B, 2A and 2B.
is 0.9 then well informed or expert agents are not impulsive
for sure (given that for those agents IthAgent saturates the max
value 1). When the impulsivity threshold (IthCom) is 0.7, it is
necessary to be both informed and expert to not be impulsive
in any case. In the other cases agents could act impulsively,
according to the modality explained above. This is clearly
visible with an impulsivity threshold of 0.7, especially in
Figure 2a but also in Figure 2b: there is a big difference
between 1A agents’ performance and the others. In practice, in
the given composition of agents showed in Figure 2a and 2b,
impulsive agents are penalized. Let us explain in detail.
Figure 2a shows the case 10-10 (10% of well informed
citizens and 10% of expert citizens). Here the majority of the
citizens, approximately the 81%, belongs to the category 2B
(not well informed and not expert) represented in violet. They
                                                                                     Fig. 3. (a) Trust degrees of the agents belonging to the 1A category in the
are so many that their evaluation of the events when socially                        case 30-30. (b) Trust degrees of the agents belonging to the 1B category in the
transmitted to their neighbors will have a negative influence                        case 30-30. (c) Trust degrees of the agents belonging to the 2A category in the
on them, especially when there is a low value of common                              case 30-30. (d) Trust degrees of the agents belonging to the 2B category in the
impulsivity threshold. Increasing the percentage of                                  case 30-30
informed/expert citizens this effect tends to disappear, as
showed by Figure 2b and 2c.                                                          What changes is of course the social trust. In fact, event if it is
                                                                                     completely independent from the agent’s nature, it strictly
From Figure 2a, 2b and 2c it clearly results that the                                depends on its neighborhood: the more performative they are,
performance of 1A, 1B and 2A agents increases when we                                the higher the social trust will be. This is clearly visible in
increases the value of the impulsivity threshold (agents are                         Figure 4. We can see how the social trust increases increasing
less impulsive). In fact increasing this component, these                            the percentage of expert/informed citizens.
agents will not be influenced by the crowd effect and they will
be able to decide on the basis of all their sources.
Fig. 4. Social trust of all the agents in the six cases.

V. CONCLUSIONS

In this work we analyzed the effect of subjective impulsivity in critical weather scenarios. We proposed some simulations in which a population of citizens (modeled through cognitive agents) has to face weather scenarios and needs to exploit its information sources to understand what is going to happen. In these situations agents can act "rationally" (basing their choice on the global evidence they possess) or impulsively, just emulating their neighbors due to a sort of "crowd effect".

First of all, we showed that while impulsivity has a strongly negative impact on informed or expert agents, it is on the contrary useful for the remaining 2B agents. Further, we showed that it is not good to have a high percentage of 2B agents, as they have a negative impact also on the agents belonging to the other categories. This is a quite predictable effect, even if it is interesting to appreciate the various levels of impulsivity that determine the different impacts.

Then we analyzed the role played by social trust. Given a value for the impulsivity threshold and a percentage of informed and expert citizens, we showed that it assumes a fixed value for all the citizens, as it is independent of the agent's category. However, it has a positive impact on agents with less information (the 2B agents), while it tends to have a negative effect as the correctness of the information that agents own increases.

A last point regards the decisional distance, which provides a much more precise analysis of the decisions' correctness. We saw that it tends to decrease when increasing the impulsivity threshold. This means that less impulsive agents can produce a better evaluation: even if they are wrong, their decisions are nearer to the correct decision.

ACKNOWLEDGMENTS

This work is partially supported by the project CLARA—CLoud plAtform and smart underground imaging for natural Risk Assessment, funded by the Italian Ministry of Education, University and Research (MIUR-PON).

REFERENCES

[1] Amgoud, L., & Demolombe, R. (2014). An Argumentation-based Approach for Reasoning about Trust in Information Sources. Journal of Argumentation and Computation, 5(2).
[2] Barber, K. S., & Kim, J. (2001). Belief revision process based on trust: Agents evaluating reputation of information sources. In Trust in Cyber-societies (pp. 73-82). Springer Berlin Heidelberg.
[3] Castelfranchi, C., & Falcone, R. (2010). Trust Theory: A Socio-Cognitive and Computational Model. John Wiley and Sons.
[4] Cialdini, R. B. (2001). Influence: Science and practice (4th ed.). Boston: Allyn & Bacon. ISBN 0-321-01147-3.
[5] Evenden, J. L. (1999). Varieties of impulsivity. Psychopharmacology, 146(4), 348-361. doi:10.1007/PL00005481.
[6] Falcone, R., Sapienza, A., & Castelfranchi, C. (2015). The relevance of categories for trusting information sources. ACM Transactions on Internet Technology (TOIT), 15(4), 13.
[7] Genter, K., & Stone, P. (2016, May). Adding Influencing Agents to a Flock. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (pp. 615-623). International Foundation for Autonomous Agents and Multiagent Systems.
[8] Latané, B. (1981). The psychology of social impact. American Psychologist, 36, 343-356.
[9] Melaye, D., & Demazeau, Y. (2005). Bayesian dynamic trust model. In Multi-agent systems and applications IV (pp. 480-489). Springer Berlin Heidelberg.
[10] Melo, V. S., Panisson, A. R., & Bordini, R. H. (2016). Trust on Beliefs: Source, Time and Expertise. In Proceedings of the 18th International Workshop on Trust in Agent Societies, co-located with AAMAS 2016, Singapore, May 10, 2016. CEUR Workshop Proceedings, vol. 1578, paper 6.
[11] Parsons, S., Sklar, E., Singh, M. P., Levitt, K. N., & Rowe, J. (2013, March). An Argumentation-Based Approach to Handling Trust in Distributed Decision Making. In AAAI Spring Symposium: Trust and Autonomous Systems.
[12] Quercia, D., Hailes, S., & Capra, L. (2006). B-trust: Bayesian trust framework for pervasive computing. In Trust Management (pp. 298-312). Springer Berlin Heidelberg.
[13] Sapienza, A., & Falcone, R. (2016). A Bayesian Computational Model for Trust on Information Sources. In Proceedings of the conference WOA 2016, Catania. CEUR Workshop Proceedings, vol. 1664, pp. 50-55.
[14] Schmidt, S., Steele, R., Dillon, T. S., & Chang, E. (2007). Fuzzy trust evaluation and credibility development in multi-agent systems. Applied Soft Computing, 7(2), 492-505.
[15] Theodorakopoulos, G., & Baras, J. S. (2006). On trust models and trust evaluation metrics for ad hoc networks. IEEE Journal on Selected Areas in Communications, 24(2), 318-328.
[16] VandenBos, G. R. (2007). APA dictionary of psychology. Washington, DC: APA.
[17] Villata, S., Boella, G., Gabbay, D. M., & Van Der Torre, L. (2011, June). Arguing about the trustworthiness of the information sources. In European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty (pp. 74-85). Springer Berlin Heidelberg.
[18] Wang, Y., & Vassileva, J. (2003, October). Bayesian network-based trust model. In Proceedings of the IEEE/WIC International Conference on Web Intelligence (WI 2003) (pp. 372-378). IEEE.
[19] Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.