<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Valorizing Prejudice in MAS: A Computational Model</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rino Falcone</string-name>
          <email>rino.falcone@istc.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Sapienza</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cristiano Castelfranchi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ISTC-CNR</institution>
          ,
          <addr-line>Rome</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Experimentation, Human Factors</institution>
          ,
          <addr-line>Reliability, Theory</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>In MAS studies on trust building and dynamics, the role of direct/personal experience and of recommendations and reputation is proportionally overrated, while the importance of inferential processes in deriving the evaluation of trustees' trustworthiness is underestimated and under-exploited. In this paper we focus on the importance of generalized knowledge: agents' categories. The cognitive advantage of generalized knowledge can be synthesized in this claim: "It allows us to know a lot about something/somebody we do not directly know". At a social level this means that I can know many things about people I have never met; this is social "prejudice", with its good side and its fundamental contribution to social exchange. In this study we experimentally investigate the role played by categories' reputation with respect to the reputation of, and opinions about, single agents: when it is better to rely on the former and when the latter are more reliable. Our claim is that the larger the population and the greater the ignorance about the trustworthiness of each individual (as happens in an open world), the more precious the role of trust in categories. This powerful inferential device should be strongly present in Web societies supported by MAS.</p>
      </abstract>
      <kwd-group>
        <kwd>Trust and reputation</kwd>
        <kwd>Cognitive models</kwd>
        <kwd>Social simulation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Categories and Subject Descriptors</title>
      <p>I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - multiagent systems.</p>
    </sec>
    <sec id="sec-2">
      <title>1. INTRODUCTION</title>
      <p>In MultiAgent Systems (MAS) and Online Social Networks (OSN), studies on trust
building and dynamics proportionally overrate the role of direct/personal
experience and of recommendations and reputation (although important), while the
importance of inferential processes in deriving the evaluation of
trustees' trustworthiness is underestimated and not sufficiently
exploited (apart from the so-called "transitivity", which is also,
very often, wrongly founded).</p>
      <p>
        In particular, generalization and instantiation from classes and
categories [
        <xref ref-type="bibr" rid="ref1">8</xref>
        ], and analogical reasoning (from task to task and
from agent to agent) really should receive much more attention. In
this paper we focus on the importance of generalized knowledge:
agents' categories. The cognitive advantage of generalized
knowledge (building classes, prototypes, categories, etc.), can be
synthesized in this obvious claim: "It allows us to know a lot
about something/somebody we do not directly know" (for
example, I never saw Mary's dog, but - since it is a dog - I know
hundreds of things about it).
      </p>
      <p>At a social level this means that I can know a lot of things about
people I have never met; this is social "prejudice", with its good side
and its fundamental contribution to social exchange. How can I trust
(for a drug prescription) a medical doctor I have never met before
and whom none of my friends knows? Because he is a doctor!
Of course we are underlining the positive aspects of generalized
knowledge, its essential role in having information on people
never met before and about whom no one gave testimony. The
richer and more accurate this knowledge is, the more useful it is. It
offers huge opportunities both for realizing productive cooperation
and for avoiding risky interactions. The problem arises when the
uncertainty about the features of the categories is too large or the
variability of the performers within them is too wide. In our
culture we attribute a negative sense to the concept of prejudice,
and this is because we want to underline how generalized knowledge
can produce unjust judgments against individuals (or groups)
when superficially applied (or worse, on the basis of precise
discriminatory intents). Here we rather want to point out the positive
aspects of the prejudice concept.</p>
      <p>In this study we intend to explain and experimentally show the
advantage of trust evaluation based on classes' reputation with
respect to the reputation of, and opinions about, single potential trustees
(partners). In an open world, or in a broad population, how can we
have sufficient direct or reported experience about everybody? The
number of potential trustees in that population or network that might
be excellent partners but that nobody knows well enough can be high.
Our claim is that the larger the population and the greater the ignorance
about the trustworthiness of each individual, the more precious the
role of trust in categories. If I know (through signals, marks,
declarations, ...) the class of a given guy/agent, I can derive a reliable
opinion of its trustworthiness from its class membership.
It is clear that the advantages of such cognitive power provided by
categories and prejudices do not depend only on
recommendation and reputation about categories. We can
personally build, by generalization, our evaluation of a given
category from our direct experience with its members (this in fact
happens in our experiments for the agents that later have to
propagate their recommendations). However, in this
simulation the trustor (which has to decide whom to rely
on) has only a prejudice based on recommendations about that
category, and not its own personal experience.</p>
      <p>After a certain degree of direct experience and circulation of
recommendations, the evaluation based on classes will perform better;
and in certain cases there will be no
alternative at all: we do not have any evaluation of a given
individual apart from its category; either we work on the inferential
instantiation of trustworthiness or we lose a lot of potential
partners. This powerful inferential device has to be strongly
present in Web societies supported by MAS. We simplify here
the problem of the generalization process, of how to form
judgments about groups, classes, etc., by putting aside, for example,
inference from other classes (higher or sub-classes); we build opinions
(and then their transmission) about classes on the basis of
experience with a number of subjects of a given class.
First of all, we want to clarify that here we are not interested in
stereotypes, but in categories. We define stereotypes as the set of
features that, in a given culture/opinion, characterize and
distinguish a specific group of people.</p>
      <p>Knowing the stereotype of an agent could be expensive and time
consuming. Here we are just interested in the fact that an agent
belongs to a category: it need not be a costly process, and the
recognition must be well discriminative and non-cheating. There
should be visible and reliable "signals" of that membership. In
fact, the usefulness of categories, groups, roles, etc. makes
the role of signs for recognizing or inferring the category of a
given agent fundamental. That is why coats, uniforms, titles, badges,
diplomas, etc. are so important in social life, and why their exhibition
and the assurance of their authenticity (and, on the other side, the
ability to falsify and deceive) are crucial. In this
preliminary model and simulation let us put aside this crucial
issue of indirect competence and reliability signaling; let us
assume that the membership to a given class or category is true
and transparent: the category of a given agent is public, common
knowledge.</p>
      <p>
        Differently from [2][
        <xref ref-type="bibr" rid="ref4">11</xref>
        ][
        <xref ref-type="bibr" rid="ref11">18</xref>
        ], in this work we do not address the
problem of learning categorical knowledge and we assume that the
categorization process is objective.
      </p>
      <p>Similarly to [3], we give agents the possibility of recommending
categories, and this is the key point of this paper.</p>
      <p>
        In the majority of the cases available in the literature, the concept
of recommendation is used in connection with recommender systems [1].
These can be realized using either past experience
(content-based RS) [
        <xref ref-type="bibr" rid="ref7">14</xref>
        ] or collaborative filtering, in which the contribution
of single agents/users is used to provide group recommendations
to other agents/users.
      </p>
      <p>
        Focusing on collaborative filtering, the concepts of similarity and
trust are often exploited (together or separately) to determine
which contributions are more important in the aggregation phase
[
        <xref ref-type="bibr" rid="ref8">15</xref>
        ][
        <xref ref-type="bibr" rid="ref12">19</xref>
        ]. For instance, in [7] the authors provide a system able to
recommend to users groups that they could join in Online Social
Networks. Here the concept of compactness of a social group is
introduced, defined as the weighted mean of the two dimensions
of similarity and trust.
      </p>
      <p>
        Also in [
        <xref ref-type="bibr" rid="ref5">12</xref>
        ] the authors present a clustering-based recommender
system that exploits both similarity and trust, generating two
different cluster views and combining them to obtain better
results.
      </p>
      <p>Another example is [6], where the authors use information about
social friendships in order to provide users with more accurate
suggestions and rankings on items of their interest.</p>
      <p>
        A classical decentralized approach is that of referral systems [
        <xref ref-type="bibr" rid="ref14">21</xref>
        ], where
agents adaptively give referrals to one another.
      </p>
      <p>
        Information sources come into play in FIRE [
        <xref ref-type="bibr" rid="ref6">13</xref>
        ], a trust and
reputation model that uses them to produce a comprehensive
assessment of an agent's likely performance. Here the authors consider
open MAS, where agents continuously enter and
leave the system. Specifically, FIRE exploits interaction trust,
role-based trust, witness reputation, and certified reputation to
provide trust metrics.
      </p>
      <p>The described solutions are quite similar to our work, although we
contextualize the problem to information sources. However, we
do not investigate recommendations just with the aim of
suggesting a particular trustee, but also to inquire into categories'
recommendations.
2. RECOMMENDATION AND REPUTATION: DEFINITIONS
Let us consider a set of agents Ag1, ..., Agn in a given world (for
example a social network). We consider that each agent in this
world could have trust relationships with anyone else. On the
basis of these interactions the agents can evaluate the trust degree
of their partners, thus building their judgments about the
trustworthiness of the agents with whom they interacted in the
past.</p>
      <p>
        The possibility of accessing these judgments, through
recommendations, is one of the main sources for trusting agents
outside the circle of closer friends. Exactly for this reason
recommendation and reputation are the most studied and widespread
tools in the trust domain [
        <xref ref-type="bibr" rid="ref9">16</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>We define</title>
        <p>Rec_x,y,z(τ) (1)
where x, y, z ∈ {Ag1, Ag2, ..., Agn}; we call D the specific domain, D ≡ {Ag1, Ag2, ..., Agn}; and 0 ≤ Rec_x,y,z(τ) ≤ 1.
τ, as established in the trust model of [4], is the task on which the
recommender expresses the evaluation about y.</p>
        <p>In words: Rec_x,y,z(τ) is the value of x's recommendation
about y performing the task τ, where z is the agent receiving this
recommendation. In this paper, for the sake of simplicity, we do not
introduce any correlation/influence between the value of the
recommendation and the kind of agent receiving it: the value
of the recommendation does not depend on the agent to whom
it is communicated.</p>
        <p>So (1) represents the basic expression for recommendation.
We can also define a more complex expression of recommendation,
a sort of average recommendation:
∑_{x=Ag1..Agn} Rec_x,y,z(τ) / n (2)
in which all the agents in the domain express their individual
recommendation on the agent y with respect to the task τ and the
total value is divided by the number of agents.</p>
        <p>We consider the expression (2) as the reputation of the agent y
with respect to the task τ in the domain D.</p>
        <p>
          Of course the reputation concept is more complex than the
simplified version introduced here [5][
          <xref ref-type="bibr" rid="ref10">17</xref>
          ].
        </p>
        <p>It is in fact the value that would emerge in case we
received from each agent in the world its recommendation about y
(considering each agent as equally reliable).</p>
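A minimal sketch (ours, not the authors' implementation) of the basic recommendation value (1) and the simplified reputation (2); the data layout, a dict mapping each recommender to its value in [0, 1], is an assumption:

```python
# Sketch: recommendation values Rec_x,y,z(tau) and the simplified
# reputation of expression (2), treating every recommender as
# equally reliable, as the text assumes.

def reputation(recs):
    """Average of all recommendations about one agent for one task.

    `recs` maps each recommender x to Rec_x,y,z(tau) in [0, 1].
    """
    if not recs:
        raise ValueError("no recommendations available")
    for v in recs.values():
        assert 0.0 <= v <= 1.0, "Rec values must lie in [0, 1]"
    return sum(recs.values()) / len(recs)

# Hypothetical recommendations about agent y for one task tau.
recs_about_y = {"Ag1": 0.9, "Ag2": 0.6, "Ag3": 0.75}
rep_y = reputation(recs_about_y)  # (0.9 + 0.6 + 0.75) / 3 = 0.75
```

With a single recommender the reputation collapses to that recommender's value, which matches the degenerate case of (2) with n = 1.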
        <p>In the case in which an agent has to be recommended not only on
one task but on a set of tasks (τ1, ..., τk), we could define, instead of
(1) and (2), the following expressions:
∑_{i=1..k} Rec_x,y,z(τi) / k (3)
that represents x's recommendation about y performing the set
of tasks (τ1, ..., τk), where z is the agent receiving this
recommendation.</p>
        <p>Imagine having to assign a meta-task (composed of a set of tasks)
to one of several agents. In this case the information given by
formula (3) could be useful for selecting the best-performing agent
on average (with respect to the tasks).</p>
        <p>∑_{x=Ag1..Agn} ∑_{i=1..k} Rec_x,y,z(τi) / nk (4)
represents a sort of average recommendation from the set of
agents in D about y performing the set of tasks (τ1, ..., τk). We
consider the expression (4) as the reputation of the agent y with
respect to the set of tasks (τ1, ..., τk) in the domain D.</p>
        <p>Having to assign the meta-task proposed above, the information
given by formula (4) could be useful for selecting the best-performing
agent on average (with respect to both the tasks and the agents).</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2.1 Using Categories</title>
      <p>As described above, an interesting approach for evaluating agents
is to classify them into specific categories that are already
pre-judged/rated and, as a consequence, to let the agents inherit
the properties of their own categories.</p>
      <p>So we can also introduce recommendations about categories,
not just about agents (we discuss elsewhere how these
recommendations are formed). In this sense we define:
Rec_x,Cy,z(τ) (5)
where x ∈ {Ag1, Ag2, ..., Agn} and</p>
      <p>Cy ⊆ {Ag1, Ag2, ..., Agn}, with
0 ≤ Rec_x,Cy,z(τ) ≤ 1.
In words: Rec_x,Cy,z(τ) is the value of x's recommendation
about the agents included in category Cy when they perform the
task τ (as usual, z is the agent receiving this recommendation).
We again define a more complex expression of recommendation,
a sort of average recommendation:</p>
      <p>∑_{x=Ag1..Agn} Rec_x,Cy,z(τ) / n (6)
in which all the agents in the domain express their individual
recommendation on the category Cy with respect to the task τ and the
total value is divided by the number of agents.</p>
      <p>We consider the expression (6) as the reputation of the category
Cy with respect to the task τ in the domain D.</p>
      <p>Now we extend to the categories, in particular to Cy, the
recommendations on a set of tasks (τ1, ..., τk):
∑_{i=1..k} Rec_x,Cy,z(τi) / k (7)
that represents the value of x's recommendation about the agents
included in category Cy when they perform the set of tasks
(τ1, ..., τk).</p>
      <sec id="sec-3-1">
        <title>Finally, we define:</title>
        <p>∑_{x=Ag1..Agn} ∑_{i=1..k} Rec_x,Cy,z(τi) / nk (8)
that represents the value of the reputation of the category Cy (of
all the agents y included in Cy) with respect to the set of tasks
(τ1, ..., τk), in the domain D.</p>
        <p>2.2 Definitions of Interest for this Work
In this paper we are in particular interested in the case in which z
(a new agent introduced into the world) asks x (x ∈ D) for a
recommendation about an agent belonging to its domain D (the set of
all the agents in the world) for performing the task τ. x will select
the best evaluated y, with y ∈ Dx, on the basis of the formula:
max_{y∈Dx} (Rec_x,y,z(τ)) (9)
where Dx ≡ {Ag1, Ag2, ..., Agm}; Dx includes all the
agents evaluated by x. They are a subset of D: Dx ⊆ D.
In general D and Dx are different because x does not necessarily
know (has interacted with) all the agents in D.</p>
        <p>z asks for recommendations not just to one agent, but to a set of
different agents, x ∈ Dz, and selects the best one on the basis of
the value given by the formula:
max_{x∈Dz} (max_{y∈Dx} (Rec_x,y,z(τ))) (10)
where Dz ⊆ D; z could ask all the agents in the world or a
defined subset of them (see later).</p>
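An illustrative reading (not the paper's code) of the selection rules (9) and (10), with hypothetical agent names and recommendation values; the category-based rules later in this section have the same shape, with categories in place of individual agents:

```python
# Sketch: formula (9) picks the best-evaluated agent known to one
# recommender x; formula (10) takes the best over all queried
# recommenders. Reports are dicts {evaluated agent: Rec value}.

def best_for_recommender(reports):
    """Formula (9): max over one recommender's evaluations Dx."""
    value, target = max((v, t) for t, v in reports.items())
    return value, target

def best_over_recommenders(queried):
    """Formula (10): max over all queried recommenders' best picks."""
    return max(best_for_recommender(r) for r in queried.values())

# z queries two recommenders; each reports the agents it has evaluated.
queried = {
    "Ag1": {"y1": 0.7, "y2": 0.9},
    "Ag2": {"y3": 0.8},
}
val, best_y = best_over_recommenders(queried)   # (0.9, "y2")
```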
        <p>We are also interested in the case in which z asks x for
recommendations about a specific agents' category for
performing the task τ. x has to select the best evaluated Cy among
the different categories x has interacted with (we assume that each
agent in the world D belongs to a category Cy in the set
{Cy1, Cy2, ..., Cyn}).</p>
        <p>In this case we have the following formulas:
max_{Cy∈Dx} (Rec_x,Cy,z(τ)) (11)
that returns the best evaluated category from the point of view of
an agent (x), and
max_{x∈Dz} (max_{Cy∈Dx} (Rec_x,Cy,z(τ))) (12)
that returns the best evaluated category from the point of view of
all the agents included in</p>
        <p>
          Dz.
3. COMPUTATIONAL MODEL
3.1 NetLogo
In order to realize our simulations, we exploited the software
NetLogo [
          <xref ref-type="bibr" rid="ref13">20</xref>
          ], an open-source agent-based programming
environment written in Java, particularly suited for modeling
natural and social phenomena.
        </p>
        <p>In NetLogo everything is an agent (even the patches that compose
the world in which the other agents move) and it is possible to
create and model many kinds of them, specifying how they relate
to each other and giving them individual instructions. It is also possible
to modify the world at run time, to further answer those "what if"
questions that pop up while investigating the models.</p>
        <p>It separates the programming part, in which the programmer sets
up the environment of the simulation and specifies the behavior of
the turtles, from the visual part, in which the user can start the
simulation, control it by changing its parameters, and see the results at
run time through the view representing the world, plots, and
output monitors.</p>
        <p>Although NetLogo is an excellent instrument for simulation
tasks, it lacks adequate computational libraries to
implement the computational model of trust in information
sources. It therefore proved necessary to extend it with a Java
plug-in of our own that fills these gaps. In practice, this trust
plug-in implements the whole model of trust in information sources.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3.2 General Setup</title>
      <p>In every scenario there are four general categories, called A, B, C
and D, each one characterized by:
1. an average value of trustworthiness, in range [0,100];
2. an uncertainty value, in range [0,100].</p>
      <p>These two values are exploited to generate the objective
trustworthiness of each trustee, defined as the probability that,
concerning a specific kind of required information, the trustee will
communicate the right information.</p>
      <p>Of course, the trustworthiness of categories and trustees is
strongly related to the kind of requested information/task. In these
simulations we use just one kind of information in which the
categories A, B, C and D have 80, 60, 40 and 20% of average
value of trustworthiness respectively. The uncertainty value is
fixed to 20% for all of them.</p>
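A sketch of how a trustee's objective trustworthiness could be generated from its category's two values. The uniform sampling rule around the category average is our assumption; the paper only states that the average and the uncertainty value together generate the trustee's probability of communicating the right information:

```python
# Sketch of the general setup: four categories (A, B, C, D) with the
# average trustworthiness and uncertainty values from the text, used
# to draw each trustee's objective trustworthiness on a 0-100 scale.
import random

CATEGORIES = {  # (average trustworthiness, uncertainty), both in [0, 100]
    "A": (80, 20), "B": (60, 20), "C": (40, 20), "D": (20, 20),
}

def objective_trustworthiness(category, rng):
    avg, unc = CATEGORIES[category]
    # Assumed rule: uniform in [avg - unc/2, avg + unc/2], clipped to [0, 100].
    value = rng.uniform(avg - unc / 2, avg + unc / 2)
    return min(100.0, max(0.0, value))

rng = random.Random(42)
samples = [objective_trustworthiness("A", rng) for _ in range(1000)]
# Under this assumption all category-A trustees fall in [70, 90],
# clustering around the 80% average.
```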
      <p>The simulations were carried out using two different numbers of
trustees: 20 trustees per category and 100 trustees per
category. In both cases we used just one trustor.</p>
    </sec>
    <sec id="sec-5">
      <title>3.3 How the Simulations Work</title>
      <p>Simulations are composed of two main steps that repeat
continuously. In the first step, called the exploration phase, agents
move around the world asking their neighbors (other agents within a
distance of 3 NetLogo patches) for the information P.
Then they memorize the performance of each neighbor, both as an
individual element and as a member of its own category.
The performance of an agent can assume just the two values 1 or 0,
with 1 meaning that the agent supports the information P and
0 meaning that it opposes P. For the sake of simplicity, we
assume that P is always true.</p>
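The bookkeeping of the exploration phase can be sketched as follows; the data structures and names are ours, not the NetLogo implementation:

```python
# Sketch: each 0/1 answer about the information P is recorded twice,
# once for the individual neighbor and once for its category, so that
# ratings can later be computed as observed success frequencies.
from collections import defaultdict

class Explorer:
    def __init__(self):
        self.agent_obs = defaultdict(list)     # neighbor id -> [0/1, ...]
        self.category_obs = defaultdict(list)  # category    -> [0/1, ...]

    def observe(self, neighbor_id, category, performance):
        assert performance in (0, 1)
        self.agent_obs[neighbor_id].append(performance)
        self.category_obs[category].append(performance)

    def rating(self, observations):
        return sum(observations) / len(observations)

e = Explorer()
for perf in (1, 1, 0):        # three interactions with one neighbor of A
    e.observe("y1", "A", perf)
e.observe("y2", "A", 1)       # one interaction with another A-member
agent_rating = e.rating(e.agent_obs["y1"])        # 2/3
category_rating = e.rating(e.category_obs["A"])   # 3/4
```

Note how the category rating pools observations from several members, which is exactly why it degrades more slowly than individual ratings when interactions are few.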
      <p>We also chose to let agents move with a probability of 10%
(each agent moves, with a probability of 10%, one patch in a
random direction). On the one hand, agents thus change their
neighbors after each tick; on the other hand, this
change is quite slow and, given the number of ticks realized, they
cannot get to know all the other agents in the world, but only
a subset of them properly.</p>
      <p>We call the set of neighbors with whom agents interact in each
tick: their neighborhood.</p>
      <p>The exploration phase has a variable duration, going from 100
ticks to 1 tick. Depending on this value, agents will have a better
or worse knowledge of their neighborhoods.</p>
      <p>Then, in a second step (the querying phase), we introduce into the
world a trustor: a new agent with no knowledge about the
trustworthiness of the other agents and categories, which needs
to trust someone reliable for a given task. It will select
a given subset of the population and query them. In
particular, the trustor will ask them for the best category and the
best trustee they have experienced.</p>
      <p>In this way, the trustor is able to collect information about the best
recommended category and agent.</p>
      <p>It is important to underline that the trustor collects
information from the agents considering them all as equally
trustworthy with respect to the task of "providing
recommendations"; otherwise it would have to weigh these
recommendations differently.</p>
      <p>Then it will select the nearest agent belonging to the best
recommended category and it will compare it, in terms of
objective trustworthiness, with the best recommended individual
agent (trustee).</p>
      <p>The possible responses are:
• trustee wins: the trustee selected with individual
recommendation is better than the one selected by
means of category; then this method gets one point;
• category wins: the trustee selected by means of
category is better than the one selected with individual
recommendation; then this method gets one point.
These two phases are repeated 500 times.</p>
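The scoring step can be sketched as follows, using the 3% threshold that the outputs section introduces for indistinguishable results; function and variable names are ours:

```python
# Sketch: compare the objective trustworthiness of the individually
# recommended trustee with that of the nearest member of the best
# recommended category; differences below 3% count as equal.

THRESHOLD = 0.03  # the 3% threshold used in the paper

def score(trustee_tw, category_member_tw, tally):
    """Update the tally for one of the 500 repetitions."""
    if abs(trustee_tw - category_member_tw) < THRESHOLD:
        tally["equal"] += 1
    elif trustee_tw > category_member_tw:
        tally["trustee_wins"] += 1
    else:
        tally["category_wins"] += 1

tally = {"trustee_wins": 0, "category_wins": 0, "equal": 0}
score(0.82, 0.70, tally)   # trustee clearly better
score(0.61, 0.60, tally)   # within 3%: indistinguishable
score(0.55, 0.79, tally)   # category's member clearly better
```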
    </sec>
    <sec id="sec-6">
      <title>3.4 Outputs</title>
      <p>In every simulation we use several indexes to analyze its results:
• trustee wins: number of times in which the trustee
selected with individual recommendation is better than
the one selected by means of categorial recommendation;
• category wins: number of times in which the trustee
selected by means of categorial recommendation
(the nearest agent belonging to it) is better than the one
selected with individual recommendation;
• equal result: number of times in which the difference
between the two trustworthiness values is below a
threshold, so that we consider the result indistinguishable;
in particular, we considered a threshold of 3%;
• trustee mean: average value of the trustees' trustworthiness
chosen with individual recommendation in the 500 runs;
• category mean: average value of the trustees'
trustworthiness chosen with categorial recommendation in
the 500 runs.
4. SIMULATION RESULTS
In these simulations we present a series of scenarios with
different settings to show when it is more convenient to exploit
recommendations about categories rather than recommendations
about individuals, and vice versa.</p>
      <p>We also present the "all-in-one" scenario, whose peculiarity is
that the exploration lasts just 1 tick and in that tick every trustee
experiences all the others. Although this is a limit case, very
unlikely in the real world, it is really interesting: each trustee
does not have a good knowledge of the other trustees as individual
elements (it has experienced each of them just once), but it is able to
get a really good knowledge of their categories, as it has
experienced each category as many times as the number of trustees
per category. So this is an explicit case in which the trustees'
recommendations about categories are surely more
informative than those about individuals.</p>
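A back-of-the-envelope illustration (ours, not from the paper) of why the all-in-one case favors categories: each individual is observed once (a single 0/1 sample), while a category with n members is observed n times, so the category estimate is far less noisy:

```python
# Standard error of the estimates after one tick of all-in-one
# exploration: one Bernoulli(p) sample per individual versus the
# mean of n such samples per category.

p = 0.8   # true trustworthiness of a category-A trustee (80%)
n = 20    # trustees per category, as in the first simulation

# A single 0/1 observation of one individual: std dev sqrt(p(1-p)).
se_individual = (p * (1 - p)) ** 0.5        # = 0.4

# The category estimate averages n observations: sqrt(n) times smaller.
se_category = (p * (1 - p) / n) ** 0.5      # ~ 0.089

# Every individual estimate is exactly 0 or 1, while the category
# estimate is already concentrated near the true 0.8.
```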
      <p>Simulations’ results are presented in a tabular and graphical way.
In particular, we have chosen to highlight in tables, with a yellow
color, cases in which category’s performance overtakes or
equalizes individual’s one.</p>
    </sec>
    <sec id="sec-7">
      <title>4.1 First Simulation</title>
      <p>In this first set of simulations we use 20 trustees per category and
analyze what happens when both the duration of the exploration
phase and the percentage of queried trustees change.</p>
      <p>Legend: highlighted cells mark cases in which the category's performance
overtakes or equalizes the individual's one.</p>
      <sec id="sec-7-1">
        <title>First scenario: trustees queried by the trustor: 100%</title>
        <p>[Table: for exploration phases of 100, 50, 25, 10, 5, 3 and 1 ticks, plus the all-in-one case, the columns report T win, C win, Equal, C Av and T Av; the tabular values are not recoverable from this version of the text.]</p>
      </sec>
      <sec id="sec-7-2">
        <title>Second scenario: trustees queried by the trustor: 50%</title>
        <p>[Table with the same structure; values not recoverable from this version of the text.]</p>
      </sec>
      <sec id="sec-7-3">
        <title>Third scenario: trustees queried by the trustor: 25%</title>
        <p>[Table with the same structure; values not recoverable from this version of the text.]</p>
      </sec>
      <sec id="sec-7-4">
        <title>Fourth scenario: trustees queried by the trustor: 10%</title>
        <p>[Table with the same structure; values not recoverable from this version of the text.]</p>
      </sec>
      <sec id="sec-7-8">
        <title>Trustees queried by the trustor: 5%</title>
        <p>Below we synthesize these results in two graphs (one for the "T
win" dimension and the other for the "C win" dimension).</p>
        <p>In the first graph it is easy to see how the value of "trustee wins"
decreases as the number of ticks in the exploratory
phase decreases, that is, as the number of interactions among
the agents before being queried is reduced; on the contrary, the value of
"category wins" increases proportionally with this reduction (first
effect).</p>
        <p>At the same time, there is a direct proportionality between the
value of "trustee wins" and the number of trustees queried in the
querying phase, while the value of "category wins" increases
proportionally with the reduction of the number of trustees
queried (second effect).</p>
        <p>In practice, both these effects seem to suggest that the role of
categories becomes relevant either when the knowledge within the
analyzed system (before the interaction with the trustor) decreases
and degrades, or when the knowledge transferred (to the trustor)
is reduced.</p>
        <p>Let us explain this better. The first effect can be described by the
fact that each agent, reducing the number of interactions with the
other agents in the explorative phase, will have considerably less
information about the individual agents. At the same
time its knowledge about categories does not undergo a
significant decline, given that categories' performances derive
from several different agents.</p>
        <p>The second effect can be explained by the fact that, reducing the
number of queried trustees, the trustor will receive with
decreasing probability information about the most trustworthy
individual agents in the domain, while information on categories
maintains a good level of stability even when the number of
queried agents is reduced, thanks to the greater robustness of these
structures. Summing up, the above pictures clearly show that, when the
quantity of information about the agents' trustworthiness
exchanged in the system decreases, it is better to rely on
categorial recommendations rather than individual
recommendations.</p>
        <p>This result reaches its point of highest criticality in the
"all-in-one" case in which, as expected, "trustee wins" returns the
minimal value and "category wins" returns the maximal value.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>4.2 Second Simulation</title>
      <p>In the second set of simulations we increase the number of
trustees to 100 per category. This means that each trustee has many
more neighbors than before.
Legend: highlighted cells mark cases in which the category's performance
overtakes or equalizes the individual's one.
Again, we summarize the results in two graphs.</p>
      <p>In this second set of simulations, the two effects detected in the
first simulations are confirmed. However, it is possible to observe a
greater difficulty for recommendations about categories to prevail
over recommendations about individuals: only by strongly reducing
the number of trustees queried by the trustor does a role for
categories' recommendations become visible.</p>
      <p>This result can be explained by the fact that, increasing the
number of agents in the neighborhood of each agent, the
possibility of having highly trustworthy agents in it increases
and, as a consequence, more agents report information about
them.</p>
    </sec>
    <sec id="sec-9">
      <title>5. CONCLUSIONS</title>
      <p>
        In other works [
        <xref ref-type="bibr" rid="ref2">9</xref>
        ][
        <xref ref-type="bibr" rid="ref3">10</xref>
        ][2] the advantages of using reasoning about categorization
for selecting trustworthy agents were shown; in particular, how it is
possible to attribute to an unknown agent a trustworthiness value
with respect to a specific task, on the basis of its classification in,
and membership of, one or more categories. In practice, the role of
generalized knowledge and prejudice (in the sense of a pre-established
judgment on the agents belonging to a given category) has proven to
make it possible to anticipate the value of unknown agents.
      </p>
      <p>In this paper we have investigated the different roles that
recommendations about individual agents and recommendations about
categories of agents can play.</p>
      <p>
        In this case the newly introduced agent (the trustor) faces a whole
world of agents completely unknown to it, and asks a (variable)
subset of agents for recommendations in order to select an agent to
whom to delegate a task. The information received regards both
individual agents and agents' categories. The informative power of
these two kinds of recommendations depends on the previous
interactions among the agents and also on the number of agents
queried by the trustor. However, there are cases in which
information about categories is more useful than information about
individual agents. In some sense this result complements the results
achieved in [
        <xref ref-type="bibr" rid="ref2">9</xref>
        ][
        <xref ref-type="bibr" rid="ref3">10</xref>
        ][2], because here we have a stricter match between
information on individual agents and information about categories
of agents: we measure the quantity of information about individual
agents and about categories in order to evaluate when it is better to
use direct information rather than generalized information or, vice
versa, when it is better to exploit the positive power of prejudice.
Our results show how, in certain cases, the use of categorial
knowledge becomes essential for selecting qualified partners.
      </p>
      <p>In this work we have in fact considered a closed world, with a
fixed set of agents. This choice was based on the fact that we were
interested in evaluating the relationships between knowledge about
individuals and knowledge about categories, in order to calibrate
their roles and reciprocal influences. In future work we will have to
consider how, starting from the analysis of this study, the role of
knowledge about categories could change in an open-world setting.
In particular, we could experiment with the dynamics of this role
with respect to the stability of the performances of the different
agents belonging to a category.</p>
    </sec>
    <sec id="sec-10">
      <title>6. REFERENCES</title>
      <p>[1] Adomavicius, G., Tuzhilin, A. Toward the next generation of
recommender systems: A survey of the state-of-the-art and
possible extensions. IEEE Transactions on Knowledge and
Data Engineering (TKDE) 17, 734-749, 2005.
[2] Burnett, C., Norman, T., and Sycara, K. 2010. Bootstrapping
trust evaluations through stereotypes. In Proceedings of the
9th International Conference on Autonomous Agents and
Multiagent Systems (AAMAS'10), 241-248.
[3] Burnett, C., Norman, T. J., and Sycara, K. Stereotypical trust
and bias in dynamic multiagent systems. ACM Transactions
on Intelligent Systems and Technology (TIST), 4(2):26,
2013.
[4] Castelfranchi, C., Falcone, R. Trust Theory: A
Socio-Cognitive and Computational Model. John Wiley and Sons,
April 2010.
[5] Conte, R., and Paolucci, M. 2002. Reputation in artificial
societies. Social beliefs for social order. Boston: Kluwer
Academic Publishers.
[6] De Meo, P., Ferrara, E., Fiumara, G., and Provetti, A.
Improving Recommendation Quality by Merging
Collaborative Filtering and Social Relationships. In Proc. of
the International Conference on Intelligent Systems Design
and Applications (ISDA 2011), Córdoba, Spain, IEEE
Computer Society Press, 2011.
[7] De Meo, P., Ferrara, E., Rosaci, D., and Sarné, G. Trust and
Compactness of Social Network Groups. IEEE Transactions
on Cybernetics, PP:99, 2014.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Falcone</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castelfranchi</surname>
            <given-names>C.</given-names>
          </string-name>
          .
          <article-title>Generalizing Trust: Inferencing Trustworthiness from Categories</article-title>
          . In: TRUST 2008 - Trust in Agent Societies, 11th International Workshop (Estoril, Portugal, 12-13 May
          <year>2008</year>
          ), Revised Selected and Invited Papers, pp.
          <fpage>65</fpage>
          -
          <lpage>80</lpage>
          . R. Falcone, S. K. Barber, J. Sabater-Mir, M. P. Singh (eds.).
          <source>Lecture Notes in Artificial Intelligence</source>
          , vol.
          <volume>5396</volume>
          . Springer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Falcone</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piunti</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Venanzi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castelfranchi</surname>
            <given-names>C.</given-names>
          </string-name>
          , (
          <year>2013</year>
          ),
          <article-title>From Manifesta to Krypta: The Relevance of Categories for Trusting Others</article-title>
          , in R. Falcone and M.
          <source>Singh (Eds.) Trust in Multiagent Systems, ACM Transaction on Intelligent Systems and Technology</source>
          , Volume
          <volume>4</volume>
          , Issue 2, March
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Falcone</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sapienza</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castelfranchi</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <article-title>The relevance of Categories for trusting Information Sources</article-title>
          ,
          <source>Transactions on Internet Technology</source>
          , submitted
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sensoy</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Thalmann</surname>
          </string-name>
          .
          <article-title>A generalized stereotypical trust model</article-title>
          .
          <source>In Proceedings of the 11th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)</source>
          , pages
          <fpage>698</fpage>
          -
          <lpage>705</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Yorke-Smith</surname>
          </string-name>
          ,
          <article-title>Leveraging Multiviews of Trust and Similarity to Enhance Clustering-based Recommender Systems</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          , accepted,
          <year>2014</year>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Huynh</surname>
            ,
            <given-names>T.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jennings</surname>
            ,
            <given-names>N. R.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Shadbolt</surname>
            ,
            <given-names>N.R.</given-names>
          </string-name>
          <article-title>An integrated trust and reputation model for open multi-agent systems</article-title>
          .
          <source>Journal of Autonomous Agents and Multi-Agent Systems</source>
          ,
          <volume>13</volume>
          , (
          <issue>2</issue>
          ),
          <fpage>119</fpage>
          -
          <lpage>154</lpage>
          ,
          <year>2006</year>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lops</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gemmis</surname>
          </string-name>
          , and G. Semeraro.
          <article-title>Content-based recommender systems: State of the art and trends</article-title>
          . In
          <source>Recommender Systems Handbook</source>
          . Springer, pp.
          <fpage>73</fpage>
          -
          <lpage>105</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Massa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Avesani</surname>
          </string-name>
          ,
          <article-title>Trust-aware recommender systems</article-title>
          ,
          <source>RecSys '07: Proceedings of the 2007 ACM conference on Recommender systems</source>
          ,
          <year>2007</year>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ramchurn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jennings</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sierra</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Godo</surname>
          </string-name>
          .
          <article-title>Devising a trust model for multi-agent interactions using confidence and reputation</article-title>
          .
          <source>Applied Artificial Intelligence</source>
          ,
          <volume>18</volume>
          (
          <issue>9</issue>
          -10):
          <fpage>833</fpage>
          -
          <lpage>852</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Sabater-Mir</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>Trust and reputation for agent societies</article-title>
          .
          <source>Ph.D. thesis</source>
          , Universitat Autonoma de Barcelona.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sensoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yilmaz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T. J.</given-names>
            <surname>Norman</surname>
          </string-name>
          . STAGE:
          <article-title>Stereotypical trust assessment through graph extraction</article-title>
          .
          <source>Computational Intelligence</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>C.</given-names>
            <surname>Than</surname>
          </string-name>
          and S. Han,
          <article-title>Improving Recommender Systems by Incorporating Similarity, Trust and Reputation</article-title>
          ,
          <source>Journal of Internet Services and Information Security (JISIS)</source>
          , volume:
          <volume>4</volume>
          , number: 1, pp.
          <fpage>64</fpage>
          -
          <lpage>76</lpage>
          ,
          <year>2014</year>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Wilensky</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          (
          <year>1999</year>
          ). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Yolum</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>M. P.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>Emergent properties of referral systems</article-title>
          .
          <source>In Proceedings of the 2nd International Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS'03).</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>