<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>WOA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Accurate Colluding Agents Detection by Reputation Measures</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Attilio Marcianò</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Information Engineering, Infrastructure and Sustainable Energy (DIIES), Mediterranea University of Reggio Calabria</institution>
          ,
          <addr-line>via Graziella snc, loc. Feo di Vito - 98123 Reggio Calabria</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>23</volume>
      <fpage>1</fpage>
      <lpage>2</lpage>
      <abstract>
<p>Software agents can form multidimensional, relationship-based networks potentially able to realize non-trivial forms of interaction and cooperation among agents. In such contexts, honest agents could be exposed to malicious behaviors enacted by unqualified potential partners. Trust and Reputation Systems are effective tools able to mitigate such risks by providing the agent community with suitable information about the trustworthiness of the potential partners, in order to allow a good partner choice. In such a framework, we propose: (i) a method to preliminarily identify the most promising candidates as malicious, assigning them the role of pre-untrusted entities, and (ii) a novel reputation model capable of accurately identifying malicious agents without introducing collateral effects on the reputation scores of honest ones.</p>
      </abstract>
      <kwd-group>
<kwd>Agent System</kwd>
        <kwd>Colluding</kwd>
        <kwd>Malicious</kwd>
        <kwd>Pre-untrusted</kwd>
        <kwd>Reputation System</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>malicious behaviors, from simple ones (e.g. selfishness, misjudgment) to more sophisticated
ones (e.g. colluding). In addition, in the presence of large agent environments, nomadic agents, or
other events, the probability of interacting with partners whose trustworthiness is still unknown
might increase, so that agents will be exposed to greater risks of deception. Obviously,
minimizing such risks is a key requirement for enhancing sociality in agent communities.</p>
<p>Trust (i.e. reputation) is a powerful means to achieve this goal and to make social interactions
as satisfying as possible. Therefore, to minimize risks, each agent could be provided with
appropriate trust measures about the other agents belonging to the same community, in order
to improve the probability of interacting with reliable partners or, conversely, to decide not to
interact with anyone.</p>
<p>However, the concept of trust cannot be uniquely defined and measured because it is
influenced by measurable and non-measurable properties involving multiple dimensions (like
competence, honesty, security, reliability, etc.) and depends on the specific situational context
under which the interactions between two agents happen. Due to this multifaceted nature,
several meanings have been associated with the term “trust”. From our viewpoint, we are
interested in addressing subjectivity and situational risks, two aspects playing a fundamental role
in agent societies. To this purpose, in the following we will define trust as:
• “Trust is the subjective probability by which an individual, A, expects that another
individual, B, performs a given action on which its welfare depends” [10];
• “Trust is the extent to which one party is willing to depend on something or somebody in
a given situation with a feeling of relative security, even though negative consequences
are possible” [11].</p>
<p>Trust and Reputation Systems (TRSs) equip a great number of applications belonging
to a large variety of scenarios [12, 13, 14, 15]. From a practical viewpoint, TRSs are Decision
Support Tools because they support agents’ choices by providing them with information about the
trustworthiness of their potential partners on the basis of (i) direct information, derived from the
direct knowledge of the trustor about a trustee, and/or (ii) indirect information, considering
ratings and/or opinions provided by other members of their own community about that trustee.
Generally, TRSs represent an agent’s trustworthiness by means of a unique score which can be
used to distinguish dishonest actors from honest ones.</p>
<p>However, a main problem for TRSs in detecting malicious agents is that of adopting
computational processes that do not penalize the reputation scores of honest agents. Unfortunately, such
penalization is not unusual, as shown, for instance, by the well-known reputation system EigenTrust [16]. To this aim,
our contribution is focused on proposing a novel reputation model, designed for agent-based
social communities, which carefully preserves the reputation scores of honest agents while detecting
malicious ones. Moreover, we developed a technique to preliminarily select the most promising
agent candidates to be identified as colluding and to use them as pre-untrusted agents.</p>
<p>The rest of the paper is organized as follows. Section 2 introduces some related work.
Section 3 describes the reference agent scenario adopted for our reputation model, which is presented in
Section 4. In Section 5, a case study provides a practical example of our proposed reputation
model and, finally, some conclusions and future work are drawn in Section 6.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
In social agent communities, a relevant issue is recognizing malicious actors [17,
18, 19, 20, 21, 22]. Trust and Reputation Systems (TRSs) provide a defense against cheaters that
might perform various misleading behaviors [
        <xref ref-type="bibr" rid="ref7 ref8">23, 24, 25, 26</xref>
        ], and the greater the degree of TRS
robustness to malicious attacks, the more reliable their trustworthiness measures will be [
        <xref ref-type="bibr" rid="ref9">27</xref>
        ].
      </p>
      <p>
In more detail, Trust Systems (TSs) combine both direct information, arising from their own past
experiences (i.e. “reliability”), and indirect information, given by the opinions provided by other members of
the community (i.e. “reputation”), as in [
        <xref ref-type="bibr" rid="ref10">28</xref>
        ]. Differently, Reputation Systems (RSs) rely only on
indirect information [
        <xref ref-type="bibr" rid="ref11">29</xref>
        ]. TRSs can also exploit single or multiple information sources, adopt
centralized or distributed architectures, or consider global or local approaches [
        <xref ref-type="bibr" rid="ref12 ref13 ref14 ref15">30, 31, 32, 33</xref>
        ].
      </p>
      <p>
Even though it is difficult to compare systems developed for specific scenarios, a number of
studies have addressed this issue with respect to a more or less wide range of malicious behaviors.
To this aim, in [
        <xref ref-type="bibr" rid="ref16 ref17 ref7">25, 34, 35</xref>
        ] the defense mechanisms implemented by those TRSs against some
common malicious attacks were compared. However, these studies lack well-defined
quantitative approaches to assess TRS robustness.
      </p>
      <p>
To test TRSs, malicious attacks can be simulated in different scenarios, also in the form of
competitions among TRSs [
        <xref ref-type="bibr" rid="ref18 ref19">36, 37</xref>
        ]. In particular, some testbeds have been proposed, among
which ART (Agent Reputation and Trust testbed) [
        <xref ref-type="bibr" rid="ref20">38</xref>
        ] is well known; other examples of testbeds
can be found in [
        <xref ref-type="bibr" rid="ref21 ref22 ref23">39, 40, 41</xref>
        ]. Alternative mechanisms to testbeds exploit mathematical/analytical
approaches [42, 43] that, on the one hand, allow a more comprehensive verification of a TRS but,
on the other hand, require developing specific test modalities for each TRS.
      </p>
<p>eBay [44] is a popular and simple, but not robust, RS [45, 44, 46], particularly with respect to
collusive activities. Its reputation model consists of summing the single feedback values provided by
counterparts to increase or decrease a reputation score, leaving each user to make their own evaluation on
the basis of their risk attitude. All newcomers receive a null reputation score, i.e. the minimum
rating in eBay.</p>
<p>Both PeerTrust [47] and Hypertrust [48] are robust, distributed RSs adopting a peer-to-peer
overlay network. To identify the most suitable peers to interact with, the former exploits several
pieces of information referring to the specific context: direct feedback, the credibility of the indirect feedback
sources, and the number and nature of the transactions performed by each peer. The latter was
conceived for large, competitive federations of utility computing infrastructures. In Hypertrust
the nodes, linked via the overlay, form clusters, and a distributed algorithm discovers and allocates
resources associated with trusted nodes to limit deceptive activities. The search for potentially
interesting resources is restricted to an eligible region based on reputation information.</p>
<p>The well-known EigenTrust [16] computes the global reputation of each peer, assuming
reputation transitivity, on the basis of the local trust matrix, which stores the (normalized) trust scores
that each peer holds about the trustworthiness of the other peers in its community, weighted by
the trustworthiness of each trustor peer. Based on their reputation scores, the peers are
differentiated into colluding and non-colluding, but its computational process flattens the reputation
scores of all the agents, including honest ones.</p>
<p>A number of TRSs in distributed, semi-distributed, centralized and/or blockchain-based
IoT environments have been presented in [12, 13, 49, 50]. In particular, open IoT environments
are particularly risky given the increased possibilities of realizing malicious behaviors. In [19], a
distributed RS is proposed that implements some countermeasures to detect malicious or cheating
actors; a simulation dealing with vehicular mobility was used to test the effectiveness of the
reputation model in quickly detecting malicious devices. RESIOT [20] is a framework to form
groups of reliable IoT devices based on the reputation scores of their associated agents. In this
proposal, a novel reputation model is presented and the results of a set of experiments, simulating
malicious attacks enacting different, concomitant cheating strategies, are compared with
those of other RSs. Finally, in [51] a TS for a SIoT scenario is suggested; it adopts a machine
learning technique to realize a system resilient to a significant number of attacks and
capable of detecting most cheaters as the number of transactions increases.</p>
<p>The TRSs presented in this section implement different and effective approaches to identify
malicious actors. However, to the best of our knowledge, none of them was explicitly designed
to preserve the trust scores of honest actors while searching for malicious ones, unlike the TRS presented
in Section 4.</p>
    </sec>
    <sec id="sec-3">
      <title>3. The Agent-based Reference Scenario</title>
<p>In this section, we describe the agent-based reference scenario which we will refer to in the
following. This scenario involves a potentially large number of agents, which can mutually
interact on behalf of their associated devices. Within this agent community, we
assume that the interactions carried out by agents satisfy the following desirable properties [52]:
• agents are long-living entities, so that past behaviors provide information about
expected, future behaviors;
• new agent interactions are driven only by the counterpart’s past behaviors;
• agents’ reputation scores are spread across the community.</p>
<p>In particular, we consider rational agents, i.e. artificial intelligence software developed to
make autonomous and rational choices on the basis of a system of rules and of the knowledge and data
available. The actions to be taken are chosen on the basis of the information collected, together with
the previous and background knowledge available to the agent.</p>
      <p>The rational agent is composed of interacting elements and is equipped with special devices,
such as sensors or actuators, able to (a) capture information from the surrounding environment
and (b) intervene to modify it.</p>
      <p>In our reference scenario, the information coming from the external environment perceived
by the agents can be the reputations of the individual agents belonging to the whole community.
As regards the methods of intervention, we refer to the possibility of attributing a high or low
trust value based on one’s experience with the individual agent.</p>
      <p>With the development of Multi-Agent Systems, several organizational paradigms have been
developed. These organizations establish a framework for relationships and interactions
between agents. The community we consider is a collection of various agents who interact and
communicate. They have different goals, they do not have the same level of rationality, nor the
same skills, but they are all subject to common laws.</p>
<p>Among the agents of the community there are also the malicious (colluding) agents: they
band together and collaborate because their individual interests are shared. Their goal is to maximize the
interests of the whole coalition at the expense of the other agents.</p>
    </sec>
    <sec id="sec-4">
      <title>4. The Reputation Model</title>
<p>This section describes in detail the reputation model we designed to detect colluding agents
while preserving the reputation scores of honest agents, whereas other well-known approaches
decrease them as a collateral effect of their computational processes.</p>
<p>To this end, let 𝒜 be an agent community of n agents, where each pair of agents is uniquely
identified as a_i and a_j, with i ≠ j ∈ [1, n]; let t_ij be a real number ranging in [0, 1] that
represents the trust perceived by a_i about a_j; and let t_ii = 0 be the trust of an agent about itself,
for all the agents in 𝒜.</p>
<p>In 𝒜 the reputation r_j of the generic agent a_j (i.e. the trustee) is computed as the ratio between
the sum of the trust values t_ij perceived by the other agents of 𝒜 (i.e. the trustors) about a_j,
with i = 1, . . . , n and i ≠ j, weighted by their own reputation scores, and the sum of the
reputation scores of all the trustor agents. More formally, r_j is computed as:
r_j = ( Σ_{i=1}^{n} t_ij · r_i ) / ( Σ_{i=1}^{n} r_i ), with j = 1, . . . , n.
(1)
To represent the agents’ reputations in 𝒜, we define the trust matrix T = [t_ij].
By assuming T as the transpose of the weighted adjacency matrix A = [a_ij] which, in turn,
corresponds to the directed graph 𝒢, where each node is associated with an agent and each link
(i, j) is associated with a non-negative value representing the trust perceived by a_i about
a_j, then we can reformulate equation (1) as:</p>
<p>T R = R,
‖R‖₁ = 1,
(2)
where the j-th element of the reputation vector R = (r_1, . . . , r_n) is the reputation of the
agent a_j. Note that ‖R‖₁ is the sum of the absolute values of the elements of R, and constraining
it to 1 guarantees the uniqueness of the solution of (2). Besides, by requiring that the sum of the
trust values t_ij, given by each agent a_i to the other agents of 𝒜, be 1, the matrix T will
be column-stochastic. The solution of the eigensystem problem (2) can then be reformulated as the
computation of the stationary distribution of the Markov chain represented by the matrix T.</p>
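<p>As an illustration, the stationary distribution of (2) can be approximated with a simple power iteration; the 3-agent trust matrix below is a hypothetical example (each column, i.e. each agent's normalized outgoing trust, sums to 1), not data from the paper.</p>

```python
import numpy as np

def stationary_reputation(T, tol=1e-10, max_iter=1000):
    """Power iteration for T R = R, with T column-stochastic and ||R||_1 = 1."""
    n = T.shape[0]
    R = np.full(n, 1.0 / n)          # uniform start, already unit 1-norm
    for _ in range(max_iter):
        R_next = T @ R
        R_next /= R_next.sum()       # keep ||R||_1 = 1
        if np.abs(R_next - R).sum() < tol:
            return R_next
        R = R_next
    return R

# Hypothetical 3-agent community: t_ii = 0 and every column sums to 1.
T = np.array([[0.0, 0.5, 0.6],
              [0.7, 0.0, 0.4],
              [0.3, 0.5, 0.0]])
R = stationary_reputation(T)
```

<p>The returned vector satisfies T R ≈ R with unit 1-norm, matching the constraints of (2).</p>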
<p>By the Perron–Frobenius theorem, λ = 1 = ρ(T) is an eigenvalue of T (the other
eigenvalues are &lt; 1 in modulus) and, if T &gt; 0, there exists a unique vector R with ‖R‖₁ = 1
such that T R = ρ(T) R = R, where ρ(T) is the spectral radius; R is therefore the unique positive
reputation vector.</p>
<p>A modified version of the eigensystem (2) is the PageRank model, which can be formulated as:
(α T + (1 − α) V Uᵀ) R = R,
(3)
where the parameter α ranges in 0 ≤ α ≤ 1, U is the unitary (all-ones) vector and V (generally named
the teleportation vector) is a non-negative vector with unitary 1-norm, i.e. Uᵀ V = 1. If α ≠ 0, 1,
the solution of (3) exists and is unique.</p>
<p>In the PageRank algorithm all the elements of V are set to v_i = 1/n. In [53], to decrease the
reputation of malicious agents, it is proposed to introduce some agents whose opinions are
considered highly reliable. Let ℳ be the set of such mentor agents, so that v_i = 1/|ℳ| if the
agent a_i belongs to ℳ and v_i = 0 otherwise.</p>
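<p>A minimal sketch of the PageRank-style iteration (3) with a mentor-based teleportation vector V; the value of α, the trust matrix and the mentor set below are illustrative assumptions, not values from the paper.</p>

```python
import numpy as np

def mentor_pagerank(T, mentors, alpha=0.85, tol=1e-10, max_iter=1000):
    """Iterate (alpha T + (1 - alpha) V U^T) R = R, where V concentrates the
    teleportation mass on the mentor agents (v_i = 1/|M|, 0 otherwise)."""
    n = T.shape[0]
    V = np.zeros(n)
    V[list(mentors)] = 1.0 / len(mentors)
    R = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        # Since U is the all-ones vector, (V U^T) R = V * ||R||_1.
        R_next = alpha * (T @ R) + (1 - alpha) * V * R.sum()
        R_next /= R_next.sum()
        if np.abs(R_next - R).sum() < tol:
            return R_next
        R = R_next
    return R

# Hypothetical column-stochastic trust matrix; agent 0 acts as a mentor.
T = np.array([[0.0, 0.5, 0.6],
              [0.7, 0.0, 0.4],
              [0.3, 0.5, 0.0]])
R = mentor_pagerank(T, mentors={0})
```

<p>With α strictly between 0 and 1, the iteration is a contraction, so the fixed point exists and is unique, as stated for (3).</p>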
      <p>To detect colluding agents, the vector R is computed from the matrix T.</p>
<p>To this aim, the agents a_i and a_j are considered as malicious when the following three conditions
on the trust scores are verified at the same time:
(i) t_ij and t_ji are high;
(ii) t_ij and t_ji are similar;
(iii) the sum of the remaining trust scores from rows i and j, respectively associated with a_i
and a_j, is low.</p>
<p>In other words, we consider all those cases where the mutual trust values of a_i and a_j are high, while
the majority of the other agents of 𝒜 consider them untrustworthy.</p>
      <p>
Therefore, let τ_k ∈ [0, 1], with k = 1, 2, 3, be three thresholds suitably set to detect
colluding agents, and let K be the vector representing the output degrees of the nodes of the graph
𝒢 or, analogously, the vector of the row sums of T, i.e. K = T U. Besides, let Z be
an auxiliary matrix built from T by identifying the quasi-symmetric, high-valued elements as
follows:
z_ij = t_ij if |t_ij − t_ji| ≤ τ_1 and t_ij ≥ τ_2, and z_ij = 0 otherwise.
The matrix Z corresponds to the weighted adjacency matrix of the (undirected) sub-graph of 𝒢
whose arcs connect potential colluding agents.
      </p>
<p>The real colluding agents are present among these potential colluding agents and, to identify
them correctly, we use the third threshold τ_3. To this end, we first build the vector
K′ = K − Z U. Finally, based on the vector K′, we classify as malicious an agent a_i belonging
to 𝒜 if k′_i ≤ τ_3, where k′_i is the sum of the remaining trust scores from the row i associated with a_i;
in this way, the reputation scores of honest agents are not affected.</p>
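<p>The whole detection step (the matrix Z, the vector K′ = K − Z U and the threshold τ_3) can be sketched as follows; the trust matrix and the three threshold values are hypothetical, chosen only to exercise the conditions above.</p>

```python
import numpy as np

def detect_colluders(T, tau1, tau2, tau3):
    """Flag agents whose quasi-symmetric, high mutual trust accounts for
    almost all of their outgoing trust."""
    # Z keeps t_ij when |t_ij - t_ji| <= tau1 and t_ij >= tau2, else 0.
    Z = np.where((np.abs(T - T.T) <= tau1) & (T >= tau2), T, 0.0)
    K = T.sum(axis=1)                 # K = T U: row sums (output degrees)
    K_prime = K - Z.sum(axis=1)       # K' = K - Z U
    return [i for i in range(T.shape[0]) if K_prime[i] <= tau3]

# Hypothetical 4-agent community: agents 2 and 3 exchange high, nearly
# symmetric trust while granting almost nothing to the others.
T = np.array([[0.00, 0.50, 0.30, 0.20],
              [0.50, 0.00, 0.30, 0.20],
              [0.05, 0.05, 0.00, 0.90],
              [0.04, 0.06, 0.90, 0.00]])
colluders = detect_colluders(T, tau1=0.1, tau2=0.7, tau3=0.2)   # → [2, 3]
```

<p>Note that agents 0 and 1 also trust each other symmetrically, but below τ_2, so they are never flagged: this is precisely how the thresholds spare honest agents.</p>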
<p>If we did not consider this third threshold, we could classify as malicious colluding agents some
honest agents, in particular the good honest ones.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Colluding Agent Detection</title>
<p>The knowledge of the supposed malicious agents can be exploited to provide appropriate reputation
vectors.</p>
<p>For instance, let ℬ be the index set of the malicious agents previously identified and let W
be the weighted adjacency matrix of the subgraph of 𝒢 consisting of the colluding
agents and the edges connecting them.</p>
<p>We can proceed with two different approaches.</p>
<p>For approach A, consider (3) and a vector L whose elements l_i are set to 0 if i ∈ ℬ and to
1/(n − |ℬ|) otherwise. By adopting this setting, all the honest agents will receive the same trust
score regardless of their starting trust score. This is exactly the same algorithm as EigenTrust;
however, we exploit here the additional information about the pre-trusted mentors.</p>
<p>The last approach, B, consists of building a new matrix T as follows:
• set a threshold ε &gt; 0;
• for i, j ∈ ℬ, set t_ij = ε, otherwise leave t_ij unchanged;
• make the columns of T stochastic by normalizing each of them to 1.</p>
      <p>In the following, we present a simple example by considering an agent community formed
by twelve agents, represented in Fig. 1. Moreover, we assume that the suspected malicious
agents correspond to 30 percent of the total agents and are identified as 4, 5, 6 and 7.</p>
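<p>Approach B can be sketched on a small hypothetical trust matrix; the value of ε and the suspected set ℬ below are illustrative assumptions rather than the paper's twelve-agent data.</p>

```python
import numpy as np

def approach_b(T, B, eps=0.01):
    """Approach B: replace the trust entries exchanged among the suspected
    colluders (index set B) with a small eps, keep null self-trust, then
    re-normalize each column to 1 (column-stochastic matrix)."""
    Tb = T.copy()
    idx = np.array(sorted(B))
    Tb[np.ix_(idx, idx)] = eps       # t_ij = eps for i, j in B
    np.fill_diagonal(Tb, 0.0)        # self-trust stays null
    Tb /= Tb.sum(axis=0)             # normalize every column to 1
    return Tb

# Hypothetical column-stochastic 4-agent matrix; agents 2 and 3 are the
# suspected colluders.
T = np.array([[0.00, 0.50, 0.05, 0.04],
              [0.50, 0.00, 0.05, 0.06],
              [0.30, 0.30, 0.00, 0.90],
              [0.20, 0.20, 0.90, 0.00]])
Tb = approach_b(T, B={2, 3}, eps=0.01)
```

<p>Only the columns associated with ℬ actually change: the honest columns already sum to 1, so the final normalization leaves them untouched.</p>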
<p>For simplicity, we set to zero the trust that an agent assigns to itself, as it is irrelevant.
Once the thresholds τ_1 and τ_2 have been determined, we can build the matrix Z from the
corresponding 12 × 12 trust matrix T.</p>
      <p>[Only the first column of T is recoverable here: (0, 0.382, 0.46, 0.001, 0.002, 0.003, 0.002, 0.01, 0.02, 0.03, 0.04, 0.05).]</p>
<p>Remember that the matrix Z corresponds to the weighted adjacency matrix of the (undirected)
subgraph of 𝒢 whose arcs connect potential colluding agents.</p>
<p>With an appropriate choice of τ_3, ℬ = {4, 5, 6, 7} is obtained.</p>
<p>Then, applying approach A, we set the parameter α to 0.2 and consider L =
(1/8, 1/8, 1/8, 0, 0, 0, 0, 1/8, 1/8, 1/8, 1/8, 1/8); the updated trust matrix is computed accordingly.</p>
      <p>Finally, the threshold strategy (approach B) yields the corresponding trust matrix.</p>
<p>Note that with approach B only the columns of the matrix T corresponding to the colluding
agents are modified, leaving the columns of the honest agents unchanged, while with approach
A all the columns of the matrix are modified; this implies that the final reputations obtained
by the agents will differ based on the approach used. In Fig. 2 the reputation vectors
corresponding to cases A and B are plotted.</p>
<p>Indeed, as concerns the honest agents, approach A provides very similar reputations, while
our approach B maintains some difference between the reputations of good honest agents and
less honest agents, i.e. it does not penalize the reputation scores of the good honest agents.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
<p>By interacting with the world around them, agents can form multidimensional, relationship-based
networks potentially rich in social interactions. In such an agent scenario, the presence
of malicious actors should be considered in order to minimize the risks of their deception.
Such risks can increase, for instance, in the presence of large communities and/or nomadic
agents, and their minimization is of primary importance to promote satisfactory social agent
interactions. To this purpose, an effective solution is to provide each player with appropriate
trust measures about its potential partners and, in this respect, we presented: (i) a method to
preliminarily identify the best candidates as malicious (colluding) in order to use such agents
as pre-untrusted agents; (ii) a novel reputation model to detect colluding malicious agents that
does not penalize the reputation scores of honest agents while detecting malicious ones.</p>
<p>Our forthcoming research will focus on realizing a more complete campaign of
experiments on real and simulated data, also including different malicious behaviors. In particular,
we are interested in using a new metric that identifies the colluding agents of the network by
analyzing the starting trust matrix T. An interesting feature that real networks present is
the property of clustering, or community structure, according to which the graph can be partitioned
into communities or clusters. The peculiarity is that the nodes of the same community (honest agents
or colluding agents) are very similar while, on the contrary, nodes belonging to different communities
have a low similarity. Therefore, we will start by analyzing the fundamental concepts and the
methodological bases on which graph clustering algorithms rely. Later, we will try
to identify potential colluding groups through new metrics and strategies. Finally, we will
evaluate the properties of a good cluster in a directed graph.
[7] M. Uhl-Bien, R. Marion, B. McKelvey, Complexity leadership theory: Shifting leadership
from the industrial age to the knowledge era, The leadership quarterly 18 (2007) 298–318.
[8] F. Amin, A. Ahmad, G. Sang Choi, Towards trust and friendliness approaches in the social
internet of things, Applied Sciences 9 (2019) 166.
[9] G. Fortino, F. Messina, D. Rosaci, G. M. L. Sarné, Using blockchain in a reputation-based
model for grouping agents in the internet of things, IEEE Transactions on Engineering
Management 67 (2019) 1231–1243.
[10] D. Gambetta, et al., Can we trust trust, Trust: Making and breaking cooperative relations
13 (2000) 213–237.
[11] D. H. McKnight, N. L. Chervany, The meanings of trust (1996).
[12] A. Altaf, H. Abbas, F. Iqbal, A. Derhab, Trust models of internet of smart things: A survey,
open issues, and future directions, Journal of Network and Computer Applications 137
(2019) 93–111.
[13] G. Fortino, L. Fotia, F. Messina, D. Rosaci, G. M. L. Sarné, Trust and reputation in the internet
of things: state-of-the-art and research challenges, IEEE Access 8 (2020) 60117–60125.
[14] Z. Yan, P. Zhang, A. V. Vasilakos, A survey on trust management for internet of things,</p>
      <p>Journal of network and computer applications 42 (2014) 120–134.
[15] M. N. Postorino, G. M. L. Sarné, An agent-based sensor grid to monitor urban traffic, in:
Proceedings of the 15th Workshop dagli Oggetti agli Agenti, WOA 2014, volume 1260 of
CEUR Workshop Proceedings, CEUR-WS.org, 2014.
[16] S. D. Kamvar, M. T. Schlosser, H. Garcia-Molina, The eigentrust algorithm for reputation
management in p2p networks, in: Proc. of the 12th international conference on World
Wide Web, ACM, 2003, pp. 640–651.
[17] F. Messina, G. Pappalardo, D. Rosaci, C. Santoro, G. M. L. Sarné, A trust model for
competitive cloud federations, in: 2014 Eighth International Conference on Complex,
Intelligent and Software Intensive Systems, IEEE, 2014, pp. 469–474.
[18] A. Ahmed, K. A. Bakar, M. I. Channa, K. Haseeb, A. W. Khan, A survey on trust based
detection and isolation of malicious nodes in ad-hoc and sensor networks, Frontiers of
Computer Science 9 (2015) 280–296.
[19] P. De Meo, F. Messina, M. N. Postorino, D. Rosaci, G. M. L. Sarné, A reputation framework
to share resources into iot-based environments, in: IEEE 14th Int. Conf. on Networking,
Sensing and Control, IEEE, 2017, pp. 513–518.
[20] G. Fortino, F. Messina, D. Rosaci, G. M. L. Sarné, Resiot: An iot social framework resilient
to malicious activities, IEEE/CAA Journal of Automatica Sinica 7 (2020) 1263–1278.
[21] H. Jnanamurthy, S. Singh, Detection and filtering of collaborative malicious users in
reputation system using quality repository approach, in: 2013 International Conference
on Advances in Computing, Communications and Informatics (ICACCI), IEEE, 2013, pp.
466–471.
[22] S. M. Sajjad, S. H. Bouk, M. Yousaf, Neighbor node trust based intrusion detection system
for wsn, Procedia Computer Science 63 (2015) 183–188.
[23] A. J. Bidgoly, B. T. Ladani, Benchmarking reputation systems: A quantitative verification
approach, Computers in Human Behavior 57 (2016) 274–291.
[24] W. Fang, W. Zhang, W. Chen, T. Pan, Y. Ni, Y. Yang, Trust-based attack and defense in
wireless sensor networks: a survey, Wireless Communications and Mobile Computing
pp. 1–5.
[42] A. J. Bidgoly, B. T. Ladani, Modelling and quantitative verification of reputation systems
against malicious attackers, The Computer Journal 58 (2015) 2567–2582.
[43] S. A. Ghasempouri, B. T. Ladani, Modeling trust and reputation systems in hostile
environments, Future Generation Computer Systems 99 (2019) 571–592.
[44] S. C. Hayne, H. Wang, L. Wang, Modeling reputation as a time-series: Evaluating the risk
of purchase decisions on ebay, Decision Sciences 46 (2015) 1077–1107.
[45] L. Cabral, A. Hortacsu, The dynamics of seller reputation: Evidence from ebay, The Journal
of Industrial Economics 58 (2010) 54–78.
[46] P. Resnick, R. Zeckhauser, Trust among strangers in internet transactions: Empirical
analysis of ebay’s reputation system, in: The Economics of the Internet and E-commerce,
Emerald Group Publishing Limited, 2002.
[47] L. Xiong, L. Liu, Peertrust: Supporting reputation-based trust for peer-to-peer electronic
communities, IEEE transactions on Knowledge and Data Engineering 16 (2004) 843–857.
[48] F. Messina, G. Pappalardo, D. Rosaci, C. Santoro, G. M. L. Sarné, A trust-aware,
self-organizing system for large-scale federations of utility computing infrastructures, Future
Generation Computer Systems (2015).
[49] W. Abdelghani, C. A. Zayani, I. Amous, F. Sèdes, Trust management in social internet of
things: a survey, in: Conference on e-Business, e-Services and e-Society, Springer, 2016,
pp. 430–441.
[50] I. U. Din, M. Guizani, B.-S. Kim, S. Hassan, M. K. Khan, Trust management techniques for
the internet of things: A survey, IEEE Access 7 (2018) 29763–29787.
[51] C. Marche, M. Nitti, Trust-related attacks and their detection: a trust management model
for the social iot, IEEE Transactions on Network and Service Management (2020).
[52] P. Resnick, R. Zeckhauser, F. E., K. Kuwabara, Reputation systems, Communication of</p>
      <p>ACM 43 (2000) 45–48.
[53] S. Kamvar, M. Schlosser, H. Garcia-Molina, The eigentrust algorithm for reputation
management in P2P networks, in: Proc. of World Wide Web, 12th International Conference
on, ACM, 2003, pp. 640–651.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Rosis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Falcone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pizzutilo</surname>
          </string-name>
          ,
          <article-title>Personality traits and social attitudes in multiagent cooperation</article-title>
          ,
          <source>Applied Artificial Intelligence</source>
          <volume>12</volume>
          (
          <year>1998</year>
          )
          <fpage>649</fpage>
          -
          <lpage>675</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Hortensius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hekele</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. S.</given-names>
            <surname>Cross</surname>
          </string-name>
          ,
          <article-title>The perception of emotion in artificial agents</article-title>
          ,
          <source>IEEE Transactions on Cognitive and Developmental Systems</source>
          <volume>10</volume>
          (
          <year>2018</year>
          )
          <fpage>852</fpage>
          -
          <lpage>864</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rheu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huh-Yoo</surname>
          </string-name>
          ,
          <article-title>Systematic review: Trust-building factors and implications for conversational agent design</article-title>
          ,
          <source>International Journal of Human-Computer Interaction</source>
          <volume>37</volume>
          (
          <year>2021</year>
          )
          <fpage>81</fpage>
          -
          <lpage>96</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Fotia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Messina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rosaci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M. L.</given-names>
            <surname>Sarné</surname>
          </string-name>
          ,
          <article-title>Using local trust for forming cohesive social structures in virtual communities</article-title>
          ,
          <source>The Computer Journal</source>
          <volume>60</volume>
          (
          <year>2017</year>
          )
          <fpage>1717</fpage>
          -
          <lpage>1727</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Misselhorn</surname>
          </string-name>
          ,
          <article-title>Collective agency and cooperation in natural and artificial systems</article-title>
          ,
          <source>in: Collective agency and cooperation in natural and artificial systems</source>
          , Springer,
          <year>2015</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>O.</given-names>
            <surname>Perrin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Godart</surname>
          </string-name>
          ,
          <article-title>A model to support collaborative work in virtual enterprises</article-title>
          ,
          <source>Data &amp; Knowledge Engineering</source>
          <volume>50</volume>
          (
          <year>2004</year>
          )
          <fpage>63</fpage>
          -
          <lpage>86</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>K.</given-names>
            <surname>Hofman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Nita-Rotaru</surname>
          </string-name>
          ,
          <article-title>A survey of attack and defense techniques for reputation systems</article-title>
          ,
          <source>ACM Computing Surveys (CSUR)</source>
          <volume>42</volume>
          (
          <year>2009</year>
          )
          <fpage>1</fpage>
          -
          <lpage>31</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>F. G.</given-names>
            <surname>Mármol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Pérez</surname>
          </string-name>
          ,
          <article-title>Security threats scenarios in trust and reputation models for distributed systems</article-title>
          ,
          <source>Computers &amp; Security</source>
          <volume>28</volume>
          (
          <year>2009</year>
          )
          <fpage>545</fpage>
          -
          <lpage>556</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jøsang</surname>
          </string-name>
          ,
          <article-title>Robustness of trust and reputation systems: Does it matter?</article-title>
          ,
          <source>in: IFIP International Conference on Trust Management</source>
          , Springer,
          <year>2012</year>
          , pp.
          <fpage>253</fpage>
          -
          <lpage>262</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>D.</given-names>
            <surname>Rosaci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M. L.</given-names>
            <surname>Sarnè</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Garruzzo</surname>
          </string-name>
          ,
          <article-title>Integrating trust measures in multiagent systems</article-title>
          ,
          <source>International Journal of Intelligent Systems</source>
          <volume>27</volume>
          (
          <year>2012</year>
          )
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>F.</given-names>
            <surname>Hendrikx</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bubendorfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chard</surname>
          </string-name>
          ,
          <article-title>Reputation systems: A survey and taxonomy</article-title>
          ,
          <source>Journal of Parallel and Distributed Computing</source>
          <volume>75</volume>
          (
          <year>2015</year>
          )
          <fpage>184</fpage>
          -
          <lpage>197</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>P.</given-names>
            <surname>De Meo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Messina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rosaci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M. L.</given-names>
            <surname>Sarné</surname>
          </string-name>
          ,
          <article-title>An agent-oriented, trust-aware approach to improve the QoS in dynamic grid federations</article-title>
          ,
          <source>Concurrency and Computation: Practice and Experience</source>
          <volume>27</volume>
          (
          <year>2015</year>
          )
          <fpage>5411</fpage>
          -
          <lpage>5435</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>P.</given-names>
            <surname>De Meo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fotia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Messina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rosaci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M. L.</given-names>
            <surname>Sarné</surname>
          </string-name>
          ,
          <article-title>Providing recommendations in social networks by integrating local and global reputation</article-title>
          ,
          <source>Information Systems</source>
          <volume>78</volume>
          (
          <year>2018</year>
          )
          <fpage>58</fpage>
          -
          <lpage>67</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jøsang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ismail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Boyd</surname>
          </string-name>
          ,
          <article-title>A survey of trust and reputation systems for online service provision</article-title>
          ,
          <source>Decision Support Systems</source>
          <volume>43</volume>
          (
          <year>2007</year>
          )
          <fpage>618</fpage>
          -
          <lpage>644</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. S.</given-names>
            <surname>Pilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Mazumdar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gera</surname>
          </string-name>
          ,
          <article-title>Towards trustworthy internet of things: A survey on trust management applications and schemes</article-title>
          ,
          <source>Computer Communications</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jøsang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Golbeck</surname>
          </string-name>
          ,
          <article-title>Challenges for robust trust and reputation systems</article-title>
          ,
          <source>in: Proceedings of the 5th International Workshop on Security and Trust Management (STM 2009)</source>
          , Saint Malo, France, volume
          <volume>5</volume>
          , Citeseer,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>S.</given-names>
            <surname>Vavilis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Petković</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zannone</surname>
          </string-name>
          ,
          <article-title>A reference model for reputation systems</article-title>
          ,
          <source>Decision Support Systems</source>
          <volume>61</volume>
          (
          <year>2014</year>
          )
          <fpage>147</fpage>
          -
          <lpage>154</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lax</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M. L.</given-names>
            <surname>Sarné</surname>
          </string-name>
          ,
          <article-title>CellTrust: a reputation model for C2C commerce</article-title>
          ,
          <source>Electronic Commerce Research</source>
          <volume>8</volume>
          (
          <year>2008</year>
          )
          <fpage>193</fpage>
          -
          <lpage>216</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>Y.-F.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sakurai</surname>
          </string-name>
          ,
          <article-title>Characterizing economic and social properties of trust and reputation systems in P2P environment</article-title>
          ,
          <source>Journal of Computer Science and Technology</source>
          <volume>23</volume>
          (
          <year>2008</year>
          )
          <fpage>129</fpage>
          -
          <lpage>140</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Fullam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. B.</given-names>
            <surname>Klos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Muller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sabater</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schlosser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Topol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. S.</given-names>
            <surname>Barber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Rosenschein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Vercouter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Voss</surname>
          </string-name>
          ,
          <article-title>A specification of the agent reputation and trust (art) testbed: experimentation and competition for trust in agent societies</article-title>
          ,
          <source>in: Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems</source>
          ,
          <year>2005</year>
          , pp.
          <fpage>512</fpage>
          -
          <lpage>518</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Adamopoulou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Symeonidis</surname>
          </string-name>
          ,
          <article-title>A simulation testbed for analyzing trust and reputation mechanisms in unreliable online markets</article-title>
          ,
          <source>Electronic Commerce Research and Applications</source>
          <volume>13</volume>
          (
          <year>2014</year>
          )
          <fpage>368</fpage>
          -
          <lpage>386</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>R.</given-names>
            <surname>Kerr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <article-title>TREET: the trust and reputation experimentation and evaluation testbed</article-title>
          ,
          <source>Electronic Commerce Research</source>
          <volume>10</volume>
          (
          <year>2010</year>
          )
          <fpage>271</fpage>
          -
          <lpage>290</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>F. G.</given-names>
            <surname>Mármol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Pérez</surname>
          </string-name>
          ,
          <article-title>TRMSim-WSN, trust and reputation models simulator for wireless sensor networks</article-title>
          ,
          <source>in: 2009 IEEE International Conference on Communications</source>
          , IEEE,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>