Modeling Dynamic Web Polarization and Proximity
Depolarization Processes by Compactness Measures
Domenico Rosacia , Simona Sacchib and Giuseppe M. L. Sarnéc
a Department DIIES, University Mediterranea of Reggio Calabria, via Graziella, loc. Feo di Vito, 98123 Reggio Calabria, Italy
b Department of Psychology, University of Milan Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milan, Italy
c Department of Psychology, University of Milan Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milan, Italy


Abstract
In this paper, we deal with the possibility of simulating dynamic polarization and proximity depolarization processes in a software multi-agent community, modeling the homophily and trust relationships usually present in human processes. Group polarization involves various disciplines, such as economics, social psychology, political science and sociology, and it can be considered a critical process underlying relevant behaviors such as, for instance, voting and conflictual intergroup relations in society. Moreover, being a human social phenomenon, polarization processes are subject to change over time or even to reverse their effects (i.e., depolarization). Our contribution consists of proposing a compactness-based model for equipping agents to simulate the complexity of such processes. We have simulated two case studies where polarization is ruled by compactness measures combining similarity and trust with different weights, and we evaluated the results provided by the compactness measures in order to verify the role of similarity and trust in agent polarization processes.

Keywords
Multiagent System, Compactness, Polarization, Similarity, Trust




1. Introduction
Human beings are intrinsically a social species, characterized by complex and sophisticated
social skills devoted to increasing cooperation and well-adapted group living. These skills originate
from perceptual, cognitive, motivational and emotional processes [1, 2] and are reflected in important
acts of our everyday lives such as personal choices, market behaviors, political preferences,
leadership, etc. [3]. However, such social processes are not stable over time.
   Two main (but not exclusive) factors deeply influencing the dynamics of formation and
evolution of social relationships are homophily and trust [4, 5]. They
play an important role in inducing us to interact with a potential counterpart, tailoring our
expectations of being engaged in reliable interactions, as well as in changing our disposition toward
a partner over time, either strengthening or undermining it [6].

WOA 2022: 23rd Workshop “From Objects to Agents”, September 1–2, 2022, Genova, Italy
domenico.rosaci@unirc.it (D. Rosaci); simona.sacchi@unimib.it (S. Sacchi); giuseppe.sarne@unimib.it (G. M. L. Sarné)
https://www.unirc.it/scheda_persona.php?id=696 (D. Rosaci); https://www.unimib.it/simona-sacchi (S. Sacchi); https://www.unimib.it/giuseppe-maria-luigi-sarne (G. M. L. Sarné)
ORCID: 0000-0002-9256-9995 (D. Rosaci); 0000-0003-0028-7462 (S. Sacchi); 0000-0003-3753-6020 (G. M. L. Sarné)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073)
   More in detail, homophily relies on supposing the existence of affinities, usually represented
by means of a similarity measure, while trust (commonly intended as an interpersonal relationship,
named reputation when considered at the community level) entails beliefs and attitudes about
the degree to which other people are likely to be reliable, cooperative, or helpful with respect
to specific situational contexts or in general terms [7]. However, providing a definition of trust
is an elusive and hard task, given that this construct is characterized by several measurable and
unmeasurable dimensions (such as expertise, honesty, safety, dependability, etc.) and relies on
the specific situational context in which the interactions take place. Therefore, because of its
multi-faceted nature and context dependency, the term "trust" (as well as the related term
"reputation") is associated with several different meanings [8, 9, 10, 11]. From our perspective, we
also consider trust as an individual predisposition, based on subjective attitudes and perceptual
abilities, regarding the behavior of another person, group, device or virtual entity, i.e. about the
possibility of a defection or, conversely, the ability to meet our expectations [12, 13]. To this
end, a large number of trust and reputation systems (and measures) have been proposed in the
literature [14, 15, 16]. In particular, trust measures aim to provide the actors with information
about the trustworthiness of their potential partners and the probability of having satisfactory
social interactions [17]. Trust measures can rely on (i) direct information, derived from the direct
knowledge of the trustor about a trustee, and/or (ii) indirect information, considering ratings
and/or opinions provided by others about a trustee [18]. Similarity and trust measures can be
combined in a single measure, often named compactness [19].
   Homophily and trust are implicated in a multitude of human processes, including dynamic
polarization processes. Polarization processes, which have increased in number and relevance with the
advent of the Internet, involve various disciplines, such as economics, political science, sociology
and many others [20, 21].
   More specifically, a polarization process may bring individuals, or groups, to advocate more
extreme attitudes and behaviors than their initial inclinations, for example, in terms of voting,
religious beliefs, political decision-making, racial prejudice, intergroup relations, etc. The
relevance of polarization processes is also evidenced by the fact that they can be considered
a critical process underlying conflicting intergroup relations in society. Moreover, being
a human social phenomenon, polarization processes are subject to change over time or even
to reverse their effects (i.e., depolarization). Although the nature and effects of these phenomena in real
interactions have been widely investigated by the social sciences, the impact of new technology on
such processes requires further investigation.
   The advent of the Internet first, and of social media later, has impacted our society technologically,
socially and economically. The Internet has given us the opportunity to break down physical barriers
such as where, how or when, and has provided new opportunities for social interactions, making
it easier to comment on opinions and to share multimedia content. Moreover, such interactions
are likely to involve humans as well as virtual (i.e., software) entities, without necessarily
foreshadowing a dystopian future.
   We should also consider the bidirectionality of such processes: even if they originate in the
virtual dimension of the Internet, they can easily transfer their effects to real society, and vice
versa [22]. We can also observe an increasing number of dynamic polarization phenomena,
typical of the Internet, characterized by sudden propagation/attenuation and, unfortunately,
often by aggressiveness [23]. Therefore, it is important to investigate these phenomena even
when they occur on the Internet, regardless of the real or virtual (i.e., software) nature of both
actors and interactions, i.e. human-to-human, human-to-machine or machine-to-machine.
   In such a scenario, the study of polarization processes can take advantage of artificial
intelligence techniques and of agent technology, where an agent is a software entity autonomously
and proactively operating on behalf of a human being, who delegates to the agent some particularly
onerous or tedious tasks. An appropriate approach to simulating these processes consists of
adopting agent and multi-agent software technologies which, thanks to the possibility of providing
agents with social capabilities, allow a wide range of interactions of interest to be reproduced with
high accuracy [24, 25]. To this end, software agents can be endowed with emotions, intentions,
and beliefs in order to model cognitive aspects and personality traits, replicating the salient features
of human behavior such as benevolence, selfishness, honesty, and meanness even in virtual
societies. Furthermore, it is possible to take into account the presence of other competing or
cooperating partners when planning and implementing possible strategies to achieve agents' goals [26].
For such reasons, agent-based systems can be employed to analyze complex forms of social
relationships [27, 28], such as cooperation [29, 30], self-coordination [31, 32] and group
formation [33, 34], as well as in a very wide range of application contexts such as marketing and
finance [35, 36], transportation systems [37, 38], manufacturing and process control [39, 40],
the IoT [41], etc.
   Within the framework outlined above, this research aims to investigate the different behavior
patterns forged by similarity and trust measures, as well as by their combination (named the
compactness measure), in modeling polarization processes. To this aim, we simulated two
case studies, the former based on a dynamic Web scenario and the latter on a "proximity"
scenario considering the ego-network of each simulated agent (see Section 2), as representative
of polarization and depolarization processes. In both case studies we assumed that the
relationships among agents are qualitatively similar to those occurring in human societies.
   In the two case studies we analyzed, the experimental results confirmed a different
modeling behavior of the three measures we tested and help us to better understand the
implications of their use, in terms of dynamic polarization and depolarization events, when
designing software agent environments. These results are potentially useful also in the domain
of social science, to better understand the dynamics of such processes in human communities.
   It is important to highlight that in our study we have not considered the possible presence of
malicious agents, i.e. agents that try to artificially direct polarization processes; therefore, we do
not deal with issues related to the identification of such agents.
   The rest of the paper is organized as follows. Section 2 introduces the reference scenario, while
in Section 3 the results of the simulations are presented and discussed. Finally, some conclusions are
drawn in Section 4.


2. The Reference Scenario
Our scenario deals with a set 𝑊 of 𝑁 agents simulating users active in a Web community.
For the first case study, we also consider two agents 𝐴 and 𝐵 playing the role of leaders and
performing, over time, two independent sequences of actions that may or may not meet the
approval of the agents belonging to 𝑊 , based on their inclinations, in order to simulate dynamic
polarization processes. In the second case study, we exploit the individual agents' ego-networks
to simulate depolarization processes.
   In Section 2.1 we describe the knowledge representation associated with the agents, while
Section 2.2 deals with the leaders' and agents' tasks and Section 2.3 gives our measures of trust,
similarity and compactness.

2.1. The Agents’ Knowledge Representation
To characterize the interests and preferences of each actor in our scenario, we associate a
profile 𝑝 with each agent of 𝑊 and a profile 𝑃 with each leader (i.e., 𝐴 and 𝐵).
   The agent profile stores the three properties on which our model is based, namely Inclination
(𝐼), Trust (𝑇 ) and Ego-network (𝐸). We define the profile 𝑝 of an agent as a tuple ⟨𝐼, 𝑇, 𝐸⟩,
where the property 𝐼 refers to the inclinations of the agent with respect to the categories
considered in 𝑊 . To this aim, we denote by 𝐶 the set of all the possible categories in 𝑊 . Each
element 𝑐 ∈ 𝐶 is the identifier of a given category, and 𝐼 is a mapping that, for each category
𝑐 ∈ 𝐶, returns a real value in [0, 1], where the values 0 and 1 identify the two extreme, opposite
inclinations about a category. Categories play the role of ontological elements for the whole
agent community (as on Facebook, where users can select categories of interest from a shared
list); this gives us the opportunity to assume a common, homogeneous semantic scenario. We
also suppose that, in a real scenario, agents can identify their inclinations automatically.
   The property 𝑇 is a mapping expressing how much an agent trusts each other agent in
𝑊 and the two leaders. The mapping 𝑇 returns a real value in [0, 1] representing the trust
perceived by the trustor (i.e., an agent) with respect to the trustee (i.e., an agent or a leader),
as detailed in Section 2.3. The property 𝐸 refers to the relationships between an agent and
the other agents belonging to 𝑊 . We can regard the relationships of an agent as its ego-network
and define the trust it perceives about the other agents belonging to its ego-network as the
strength of their oriented relationships. Recall that trust is asymmetric in nature (see Section 2.3).
   Finally, the profile 𝑃 of a leader consists only of its interests with respect to the common
ontology.
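   As a minimal illustration of this knowledge representation, the following Python sketch encodes the profiles ⟨𝐼, 𝑇, 𝐸⟩ and 𝑃 as plain data structures; the names and types are our own assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

AgentId = str
Category = str

@dataclass
class AgentProfile:
    """Profile p = <I, T, E> of an agent in W (illustrative, not the authors' code)."""
    inclination: Dict[Category, float] = field(default_factory=dict)  # I: c -> [0, 1]
    trust: Dict[AgentId, float] = field(default_factory=dict)         # T: trustee -> [0, 1]
    ego_network: Set[AgentId] = field(default_factory=set)            # E: trusted neighbours

@dataclass
class LeaderProfile:
    """Profile P of a leader: only its inclinations over the common ontology."""
    inclination: Dict[Category, float] = field(default_factory=dict)
```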

2.2. The Agents’ and Leaders’ Tasks
According to the profiles and properties defined above, the leaders and the agents automatically
perform the following basic tasks:
   Leaders. Over time, each leader performs a (different) sequence of actions (for example,
publishing a post or a comment). As a consequence, the leader's profile is updated to take the
new activities into account.
   Agents. Each agent updates its trust degree in a leader depending on the last leader's
action: the agent's trust in the leader increases if the agent agrees with the leader's action, and
decreases otherwise. Then, the new leader's profile and the corresponding agent trust are
exploited to compute a new compactness measure (see Section 2.3) that defines the agent as
"polarized" or "not polarized" with respect to the leader. Moreover, based on their ego-networks,
agents can change their polarization status (i.e., depolarize) when their ego-network contains
another agent perceived as trustworthy (i.e., close to it, or in its proximity) but polarized differently.

2.3. Trust, Similarity and Compactness
In trust theory, the level of satisfaction that an agent can directly express about a trustee
is generally called the reliability (i.e., direct trust) of the trustee 𝑏 as perceived by the trustor 𝑎. For
example, in Online Social Networks (OSNs) the satisfaction about someone is roughly obtained
by clicking on buttons such as "I Like It / I Do Not Like It" (e.g., on Facebook) or +1/ − 1 (e.g., on
YouTube). In our approach, we represent the reliability of the trustee 𝑏 as perceived by the
trustor 𝑎 with a real value denoted by 𝑟𝑒𝑙𝑎→𝑏 , ranging in the interval [0, 1], where 0 (resp., 1) is
the lowest (resp., highest) value of reliability. If the level of satisfaction of an agent is expressed
via buttons such as "I Like It" and "I Do Not Like It", then 𝑟𝑒𝑙𝑎→𝑏 can be computed as the ratio
between the positive evaluations and the total evaluations provided by that agent [42, 43]. Recall
that reliability is an asymmetric measure, which implies that 𝑟𝑒𝑙𝑎→𝑏 is usually different from
𝑟𝑒𝑙𝑏→𝑎 ; for this reason, our notation uses the symbol → to specify the direction of the trust
relationship. Moreover, the trust that the whole agent community perceives about the trustee 𝑏
is named the reputation of 𝑏 (i.e., 𝑟𝑒𝑝𝑏 ), or indirect trust. We can simply compute 𝑟𝑒𝑝𝑏 as the
average of all the reliability values 𝑟𝑒𝑙𝑎→𝑏 over each member 𝑎 of the community with 𝑎 ̸= 𝑏.
   Based on these two measures, each agent can compute a synthetic, global measure of trust
in each other agent of its community by integrating both the reputation of the trustee and
the reliability from the trustor's personal viewpoint. Reputation and reliability are combined in
the trust measure depending on the importance the trustor gives to reliability versus reputation.
More formally, we compute the trust of 𝑎 in 𝑏, denoted by 𝑡𝑎→𝑏 , as:


                                𝑡𝑎→𝑏 = 𝛼 · 𝑟𝑒𝑙𝑎→𝑏 + (1 − 𝛼) · 𝑟𝑒𝑝𝑏                                   (1)


where 𝛼 is a real coefficient, ranging in [0, 1], representing how relevant the reliability is for the
trustor with respect to the reputation. In other words, 𝛼 = 0 means that the trustor gives no
relevance to the reliability 𝑟𝑒𝑙𝑎→𝑏 in computing 𝑡𝑎→𝑏 , and vice versa when 𝛼 = 1. Note that
𝑡, 𝑟𝑒𝑙 and 𝑟𝑒𝑝 ∈ [0, 1] and that, when 𝛼 ̸= 0, 𝑡 also becomes an asymmetric measure due to the
presence of the reliability component. More generally, when the trustor has no direct knowledge
of the trustee, 𝛼 can be set to 0, since the reliability measure will be null; vice versa, when such
knowledge is sufficient for the trustor to directly estimate the trustee's trustworthiness, 𝛼 can be
set to 1. In other words, the value of the parameter 𝛼 can vary with the degree of direct knowledge
of the trustor about the trustee.
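   The computation of 𝑡𝑎→𝑏 in Eq. 1, together with the reputation as the community-wide average of reliabilities, can be sketched in Python as follows (the function names are illustrative assumptions):

```python
from statistics import mean

def reputation(reliabilities_towards_b: list[float]) -> float:
    """rep_b: average of rel_{a->b} over all community members a != b."""
    return mean(reliabilities_towards_b)

def trust(rel_a_to_b: float, rep_b: float, alpha: float) -> float:
    """Eq. 1: t_{a->b} = alpha * rel_{a->b} + (1 - alpha) * rep_b, all in [0, 1]."""
    return alpha * rel_a_to_b + (1.0 - alpha) * rep_b
```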
   The reliability an agent perceives about a leader is increased or decreased as a direct consequence
of the leader's actions, depending on the extent to which the agent agrees or disagrees with each
action on the basis of its inclination (which we kept unvaried throughout our experiments).
Similarly, the reliability that an agent perceives about another agent is increased or decreased
as a direct consequence of its satisfaction level for the last interaction carried out with that
agent. The reliability update is performed by applying the following simple, common rule:



                                𝑟𝑒𝑙𝑛𝑒𝑤 = 𝛽 · 𝑟𝑒𝑙𝑜𝑙𝑑 + (1 − 𝛽) · 𝜓                                (2)

where the parameter 𝛽, ranging in [0, 1], represents the relevance we desire to assign
to the current reliability value with respect to the new contribution 𝜓 (also belonging to
[0, 1]). This solution has been widely used in many trust-based approaches for multi-agent
systems, obtaining good results in terms of effectiveness (when the parameter 𝛽 is correctly
set) [18, 44, 45].
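   A one-line sketch of the update rule of Eq. 2, with a usage example showing how a value of 𝛽 close to 1 makes the reliability evolve slowly:

```python
def update_reliability(rel_old: float, psi: float, beta: float) -> float:
    """Eq. 2: rel_new = beta * rel_old + (1 - beta) * psi, all values in [0, 1]."""
    return beta * rel_old + (1.0 - beta) * psi

# With beta = 0.99, a fully positive feedback (psi = 1) moves a neutral
# reliability of 0.5 only slightly: update_reliability(0.5, 1.0, 0.99) == 0.505
```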
   The similarity measure 𝜎𝑎,𝑏 measures how similar the profiles of 𝑎 and 𝑏 are. The 𝜎𝑎,𝑏
measure is computed as the complement (with respect to 1) of the average difference between
the inclination values of 𝑎 and 𝑏 over all the categories 𝑐 ∈ 𝐶. More formally:


                                𝜎𝑎,𝑏 = 1 − ( ∑𝑐∈𝐶 |𝐼𝑎 (𝑐) − 𝐼𝑏 (𝑐)| ) / |𝐶|                       (3)


The leaders' inclination measures are increased or decreased as a direct consequence of their
actions, while the agents' inclinations are currently not updated. The update is performed by
applying the same approach adopted in Eq. 2, as:



                                  𝐼𝑛𝑒𝑤 = 𝛾 · 𝐼𝑜𝑙𝑑 + (1 − 𝛾) · 𝜗                                  (4)



where the parameter 𝛾, ranging in [0, 1], represents the relevance we want to assign to the
current inclination value in a category with respect to the new contribution 𝜗.
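   The similarity of Eq. 3 and the inclination update of Eq. 4 can be sketched as follows, assuming that inclinations are stored as dictionaries mapping category identifiers to values in [0, 1]:

```python
def similarity(I_a: dict, I_b: dict, categories: list) -> float:
    """Eq. 3: sigma_{a,b} = 1 - mean over C of |I_a(c) - I_b(c)|."""
    diff = sum(abs(I_a[c] - I_b[c]) for c in categories)
    return 1.0 - diff / len(categories)

def update_inclination(I_old: float, theta: float, gamma: float) -> float:
    """Eq. 4 (leaders only): I_new = gamma * I_old + (1 - gamma) * theta."""
    return gamma * I_old + (1.0 - gamma) * theta
```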
   Finally, the compactness measure referred to 𝑎 and 𝑏 combines their degree of similarity
with the trust between them. In particular, the compactness between 𝑎 and 𝑏, denoted by
𝛾𝑎→𝑏 (not to be confused with the parameter 𝛾 of Eq. 4), considers both the similarity 𝜎𝑎,𝑏 and
the trust 𝑡𝑎→𝑏 . Similarly to trust, the compactness 𝛾𝑎→𝑏 is usually an asymmetric measure,
i.e. 𝛾𝑎→𝑏 ̸= 𝛾𝑏→𝑎 . Moreover, the computation of the compactness 𝛾𝑎→𝑏 depends on how much
importance is given to the similarity with respect to the trust, modeled by means of a real
coefficient 𝜖 ranging in [0, 1]. Consequently, we define the compactness 𝛾𝑎→𝑏 as:



                                 𝛾𝑎→𝑏 = 𝜖 · 𝜎𝑎,𝑏 + (1 − 𝜖) · 𝑡𝑎→𝑏                                (5)
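   Eq. 5 translates directly into code; the sketch below makes explicit that 𝜖 = 1 reduces compactness to similarity only and 𝜖 = 0 to trust only:

```python
def compactness(sigma_ab: float, t_a_to_b: float, epsilon: float) -> float:
    """Eq. 5: gamma_{a->b} = epsilon * sigma_{a,b} + (1 - epsilon) * t_{a->b}.

    epsilon = 1 weighs similarity only; epsilon = 0 weighs trust only.
    """
    return epsilon * sigma_ab + (1.0 - epsilon) * t_a_to_b
```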
3. Evaluation
To examine the different behaviors of the similarity, trust and compactness measures in modeling
polarization processes, as introduced in Section 1, we simulated two case studies by exploiting
an in-house software platform and by assuming qualitatively human-like relationships between
agents. In particular, we considered a dynamic scenario and a proximity scenario, described and
analyzed in detail in the following.

3.1. Dynamic Web Polarization Case Study
To simulate dynamic Web polarization processes, we considered a scenario formed by a set 𝑊
of 1000 agents, playing the same role as "followers" or "friends" in an Online Social
Network (OSN), and two agents acting as leaders, named 𝐴 and 𝐵, that play a role analogous to
that of "influencers" in an OSN. A schema of the adopted architecture with the agents and the
leaders (i.e., 𝐴 and 𝐵) is represented in Figure 1.

[Diagram: leaders 𝐴 and 𝐵 placed at inclinations 0.3 and 0.7 on the 𝐼(𝑐) axis, ranging from 0.0 to 1.0, with the agents distributed between them.]
Figure 1: Dynamic Web polarization, schema of the adopted architecture with the agents and the
leaders A and B.

   Agents and leaders have been associated with individual profiles, compliant with the respective
descriptions provided in Section 2.1. For the sake of simplicity, we considered a common
ontology consisting of a single category 𝑐, and the inclination of each agent in that category was
randomly generated with uniform distribution in the real domain [0, 1]. Differently, we set a
priori the inclinations of the two leaders 𝐴 and 𝐵 about 𝑐 to 0.3 and 0.7, respectively.
Finally, the initial trust perceived by each agent about the leaders was set to 0.5, i.e. a
neutral value. Based on the agents' and leaders' inclinations about 𝑐, the similarity
measures between each agent and the two leaders were then updated in accordance with the
leaders' actions.
   In the case study outlined above, each leader performed a sequence of 1000 actions, always
referred to 𝑐, within a context of leaders' "radicalization" (i.e., the inclination of 𝐴 moved
towards 0 and that of 𝐵 towards 1). In order to study the ability of the considered measures
to model dynamic Web polarization processes, these sets of actions were exploited to study
the system behavior in the presence of radicalization processes implying continuous polarization
updates.
   After each action performed by a leader, the leaders' inclinations, together with the
agent-leader similarity, trust and compactness measures, were updated as described in
Section 2.3, setting the parameters 𝛼 = 1 and 𝛽 = 𝛾 = 0.99. To determine the polarization
of an agent with respect to a leader, we considered the agent polarized with respect to that
leader when its compactness measure was greater than 0.75 (a reasonable value for this
case study). To analyze the compactness behavior in depth as the relevance of the similarity vs.
trust measures varies, the parameter 𝜖 in Eq. 5 was varied from 0 to 1 with a step of 0.1. In such a
way we obtained a family of results in terms of polarization as the parameter 𝜖 varied.
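   To make the experimental protocol concrete, the following compressed sketch reproduces one run of this case study under the stated settings (single category, 𝛼 = 1 so that trust reduces to reliability, 𝛽 = 𝛾 = 0.99, threshold 0.75); the agreement rule mapping a leader's action to the feedback 𝜓 is our own assumption, since the paper does not specify it:

```python
import random

N, ACTIONS, THRESHOLD = 1000, 1000, 0.75
BETA = GAMMA = 0.99

# each agent: fixed inclination I in [0, 1], reliability toward leader A starts neutral
agents = [{"I": random.random(), "rel": 0.5} for _ in range(N)]
leader_I = 0.3  # leader A; leader B would start at 0.7 and move towards 1

for step in range(ACTIONS):
    # Eq. 4: A "radicalizes" towards 0, i.e. each action has inclination theta = 0
    leader_I = GAMMA * leader_I + (1 - GAMMA) * 0.0
    for ag in agents:
        # assumed agreement rule: psi = 1 if the agent is on the leader's side
        psi = 1.0 if abs(ag["I"] - leader_I) < 0.5 else 0.0
        ag["rel"] = BETA * ag["rel"] + (1 - BETA) * psi  # Eq. 2

# Eq. 5 with alpha = 1 (trust = reliability) and |C| = 1 (similarity = 1 - |dI|)
epsilon = 0.5  # one point of the sweep over [0, 1] in steps of 0.1
polarized = sum(
    1 for ag in agents
    if epsilon * (1 - abs(ag["I"] - leader_I)) + (1 - epsilon) * ag["rel"] > THRESHOLD
)
print(f"agents polarized on A with epsilon={epsilon}: {polarized}")
```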
   The results we obtained are synthetically shown in Figures 2 - 4, where Figure 2 (resp.,
Figure 3) depicts how the maximum number of agents polarized on 𝐴 (resp., 𝐵) changes as
the actions of 𝐴 (resp., 𝐵) and the parameter 𝜖 vary.




Figure 2: How the maximum number of agents polarized on 𝐴 changes as the actions of 𝐴 and the
parameter 𝜖 varied.


   Finally, Figure 4 represents how the inclinations of the 𝐴 and 𝐵 leaders' actions move
towards the two extremes.
   It is worth noting that computations exploiting the compactness measure with similarity only
(i.e., 𝜖 = 1), trust only (i.e., 𝜖 = 0) and their different combinations yielded different results in
terms of their capability to model dynamic Web polarization processes. Indeed, the analysis of the
results shows that in the initial phase of the experiment the number of polarized agents varies
as 𝜖 varies. However, independently of the value of 𝜖, the maximum number of polarized
agents is always reached within the first 100 interactions. Then, for all intermediate values
of 𝜖 (i.e., 𝜖 ̸= 0 and 𝜖 ̸= 1), which combine the trust and similarity measures
in computing the compactness value, the number of polarized agents converges to a common
value. Differently, with 𝜖 = 0 or 𝜖 = 1, i.e. calculating compactness by using only the trust or
only the similarity measure, the number of polarized agents is the maximum or the minimum,
respectively, from the 200-th action onward. These results would indicate that in the long term
trust is capable of inducing greater polarization in agents than the use of similarity only or of a
combination of both.
Figure 3: How the maximum number of agents polarized on 𝐵 changes as the actions of 𝐵 and the
parameter 𝜖 varied.

[Plot: the action inclination (y axis, 0 to 1.0) of leaders 𝐴 and 𝐵 over the 1000 actions (x axis).]

Figure 4: Inclinations of 𝐴 and 𝐵 leaders’ actions.


   These results differ from those obtained by using compactness in other contexts, such as, for
example, a real-world scenario where the best results in group formation processes are obtained
by combining measures of trust and similarity [19]. However, it should always be taken
into account that we simulated dynamic polarization processes and that the purpose of this
experiment was exclusively to evaluate the role played by trust and similarity in modeling them.
It should also be considered that, to precisely model real polarization processes, the parameter 𝜖
could be determined by using, for instance, machine learning techniques.
3.2. Proximity Depolarization Case Study
In the second case study we simulated a static proximity scenario considering the agents' ego-
networks to reproduce a depolarization activity. In this scenario, we assumed each agent
(i.e., trustor) to be provided with an ego-network consisting of 20 agents (i.e., trustees) randomly
chosen in 𝑊 . The profiles and polarization states of the agents in each trustor's ego-network were
inherited from the previous case study after the leaders' 50-th actions. The agent trust
measures were randomly generated in [0.5, 1], based on the consideration that an agent usually
trusts the members of its ego-network. Similarly to the first case study, the parameter
𝜖 in Eq. 5 was varied from 0 to 1 with a step of 0.1. Finally, the compactness threshold to
change polarization was always set to 0.75, to favor the comparison between the two case
studies.
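   A possible reading of this depolarization rule in code, under our assumptions about the data layout and the flip criterion, is the following:

```python
THRESHOLD = 0.75  # same compactness threshold as the first case study

def maybe_depolarize(agent: dict, ego_network: list[dict], epsilon: float) -> str:
    """Return the (possibly flipped) polarization label of `agent`.

    `agent` holds its inclination "I", a trust map "trust" and a label "pol";
    each peer holds "id", "I" and "pol". This layout is an assumption.
    """
    for peer in ego_network:
        if peer["pol"] == agent["pol"]:
            continue  # same polarization: no depolarization pressure
        sigma = 1.0 - abs(agent["I"] - peer["I"])      # Eq. 3 with |C| = 1
        c = epsilon * sigma + (1 - epsilon) * agent["trust"][peer["id"]]  # Eq. 5
        if c > THRESHOLD:
            return peer["pol"]  # the agent flips towards the trusted peer
    return agent["pol"]
```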
    Figure 5 displays the number of agents depolarized with respect to 𝐴 and 𝐵 as 𝜖 varies.
Two aspects are evident from this case study: i) the number of depolarized agents is
generally low; in fact, with 𝜖 = 0.1, in the considered scenario it reaches less than 1.1% of
the overall agent ego-network population (i.e., of the opportunities for agent proximity); and
ii) since the relevance of the similarity measure is significantly lower than in the previous case
study, high trust values are required to activate the depolarization process.
[Plot: ratio between the number of depolarizations and the number of ego-network agents (y axis, 0 to 0.05), for leaders 𝐴 and 𝐵, as 𝜖 varies from 0.0 to 1.0 (x axis).]
Figure 5: Number of agents depolarized from 𝐴 and 𝐵 as the parameter 𝜖 varies.


  According to the experimental findings presented and discussed here, in the examined case
studies we observed different behaviors of the considered measures in modeling polarization
processes. The implications of such results can help us in finding and testing new and more effective
models for simulating dynamic polarization and proximity depolarization events in agent-based
environments.
4. Conclusion
An important issue in the organization of human societies, impacting many crucial activities
related to conflictual intergroup relationships, is so-called polarization. In this context, we
have analyzed the role of similarity and trust in polarization processes by simulating two case
studies, i.e. dynamic Web polarization processes and proximity depolarization processes, in a
software multi-agent community, modeling the similarity and trust relationships usually present in
human scenarios. The use of agents to simulate these processes is commonly adopted to support
human beings in their activities and decisions, and provides them with the possibility of observing a
wide range of interactions of social interest. To this end, software agents can be endowed with
emotions, intentions, beliefs, cognitive aspects and personality traits, replicating the salient
features of human behaviors such as benevolence, selfishness, honesty, and meanness in real
and virtual societies. Moreover, such agent models can take into account the presence
of other competing or cooperating agents in planning and implementing possible strategies to
achieve their goals.
   We have considered that polarization processes are subject to change over time and even to
reversal: thus, our contribution is represented by the comparison of different ways of modeling
these processes in order to equip agents for simulating these relevant issues. We analyzed
similarity and trust measures, and their combination in the compactness measure with different
weights, and we evaluated the set of results they provide, with particular attention to
the roles played by similarity and trust in agent polarization. The obtained results contribute
to clarifying the impact of these measures on polarization events. These findings could be useful
in the domain of social science, to better understand polarization dynamics in communities of
human beings.
   In such a context, with the support of real data, our ongoing research will be devoted to
investigating to what extent the effects evidenced in the multi-agent communities can also be
found in human social communities.


References
 [1] R. S. Lazarus, Thoughts on the relations between emotion and cognition., American
     psychologist 37 (1982) 1019.
 [2] R. Adolphs, Cognitive neuroscience of human social behaviour, Nature Reviews Neuro-
     science 4 (2003) 165–178.
 [3] R. P. Ebstein, S. Israel, S. H. Chew, S. Zhong, A. Knafo, Genetics of human social behavior,
     Neuron 65 (2010) 831–844.
 [4] W. Sherchan, S. Nepal, C. Paris, A survey of trust in social networks, ACM Computing
     Surveys (CSUR) 45 (2013) 1–33.
 [5] M. Beilmann, L. Lilleoja, Social trust and value similarity: The relationship between social
     trust and human values in europe, Studies of transition states and societies 7 (2015).
 [6] G. Fortino, F. Messina, D. Rosaci, G. M. L. Sarné, ResIoT: An IoT social framework resilient
     to malicious activities, IEEE/CAA Journal of Automatica Sinica 7 (2020) 1263–1278.
 [7] J. A. Simpson, Foundations of interpersonal trust, Social psychology: Handbook of basic
     principles 2 (2007) 587–607.
 [8] A. M. Evans, W. Revelle, Survey and behavioral measurements of interpersonal trust,
     Journal of Research in Personality 42 (2008) 1585–1593.
 [9] J.-H. Cho, K. Chan, S. Adali, A survey on trust modeling, ACM Computing Surveys (CSUR)
     48 (2015) 1–40.
[10] E. Glikson, A. W. Woolley, Human trust in artificial intelligence: Review of empirical
     research, Academy of Management Annals 14 (2020) 627–660.
[11] M. Siegrist, Trust and risk perception: A critical review of the literature, Risk analysis 41
     (2021) 480–490.
[12] D. H. McKnight, N. L. Chervany, The meanings of trust (1996).
[13] D. Gambetta, et al., Can we trust trust?, Trust: Making and Breaking Cooperative Relations
     13 (2000) 213–237.
[14] A. Altaf, H. Abbas, F. Iqbal, A. Derhab, Trust models of internet of smart things: A survey,
     open issues, and future directions, Journal of Network and Computer Applications 137
     (2019) 93–111.
[15] G. Fortino, L. Fotia, F. Messina, D. Rosaci, G. M. L. Sarné, Trust and reputation in the internet
     of things: state-of-the-art and research challenges, IEEE Access 8 (2020) 60117–60125.
[16] Z. Yan, P. Zhang, A. V. Vasilakos, A survey on trust management for internet of things,
     Journal of Network and Computer Applications 42 (2014) 120–134.
[17] A. Abdul-Rahman, S. Hailes, Supporting trust in virtual communities, in: Proceedings of
     the 33rd Annual Hawaii International Conference on System Sciences, IEEE, 2000, 9 pp.
[18] D. Rosaci, G. M. L. Sarné, S. Garruzzo, Integrating trust measures in multiagent systems,
     International Journal of Intelligent Systems 27 (2012) 1–15.
[19] P. De Meo, E. Ferrara, D. Rosaci, G. M. L. Sarné, Trust and compactness in social network
     groups, IEEE Transactions on Cybernetics 45 (2014) 205–216.
[20] D. J. Isenberg, Group polarization: A critical review and meta-analysis., Journal of
     personality and social psychology 50 (1986) 1141.
[21] A. E. Wilson, V. A. Parker, M. Feinberg, Polarization in the contemporary political and
     media landscape, Current Opinion in Behavioral Sciences 34 (2020) 223–228.
[22] J. A. Bargh, K. Y. McKenna, The internet and social life, Annual Review of Psychology 55 (2004) 573–590.
[23] C. J. Ferguson, Does the internet make the world worse? depression, aggression and
     polarization in the social media age, Bulletin of Science, Technology & Society 41 (2021)
     116–135.
[24] H. Hattori, Y. Nakajima, T. Ishida, Learning from humans: Agent modeling with individual
     human behaviors, IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems
     and Humans 41 (2010) 1–9.
[25] A. Dorri, S. S. Kanhere, R. Jurdak, Multi-agent systems: A survey, IEEE Access 6 (2018)
     28573–28593.
[26] C. Castelfranchi, F. D. Rosis, R. Falcone, S. Pizzutilo, Personality traits and social attitudes
     in multiagent cooperation, Applied Artificial Intelligence 12 (1998) 649–675.
[27] R. Hortensius, F. Hekele, E. S. Cross, The perception of emotion in artificial agents, IEEE
     Transactions on Cognitive and Developmental Systems 10 (2018) 852–864.
[28] M. Rheu, J. Y. Shin, W. Peng, J. Huh-Yoo, Systematic review: Trust-building factors and
     implications for conversational agent design, International Journal of Human–Computer
     Interaction 37 (2021) 81–96.
[29] L. Fotia, F. Messina, D. Rosaci, G. M. L. Sarné, Using local trust for forming cohesive social
     structures in virtual communities, The Computer Journal 60 (2017) 1717–1727.
[30] C. Misselhorn, Collective agency and cooperation in natural and artificial systems, in:
     Collective agency and cooperation in natural and artificial systems, Springer, 2015, pp.
     3–24.
[31] O. Perrin, C. Godart, A model to support collaborative work in virtual enterprises, Data &
     Knowledge Engineering 50 (2004) 63–86.
[32] M. Uhl-Bien, R. Marion, B. McKelvey, Complexity leadership theory: Shifting leadership
     from the industrial age to the knowledge era, The leadership quarterly 18 (2007) 298–318.
[33] F. Amin, A. Ahmad, G. Sang Choi, Towards trust and friendliness approaches in the social
     internet of things, Applied Sciences 9 (2019) 166.
[34] G. Fortino, F. Messina, D. Rosaci, G. M. L. Sarné, Using blockchain in a reputation-based
     model for grouping agents in the internet of things, IEEE Transactions on Engineering
     Management 67 (2019) 1231–1243.
[35] A. Negahban, L. Yilmaz, Agent-based simulation applications in marketing research: an
     integrated review, Journal of Simulation 8 (2014) 129–142.
[36] R. Dieci, X.-Z. He, Heterogeneous agent models in finance, Handbook of computational
     economics 4 (2018) 257–328.
[37] M. N. Postorino, G. M. L. Sarné, Agents meet traffic simulation, control and management:
     A review of selected recent contributions, in: Proceedings of the 17th Workshop “from
     Objects to Agents”, WOA, volume 1664, 2016, pp. 112–117.
[38] M. Saidallah, A. El Fergougui, A. E. Elalaoui, A comparative study of urban road traffic
     simulators, in: MATEC Web of Conferences, volume 81, EDP Sciences, 2016, p. 05002.
[39] A. T. Jones, D. Romero, T. Wuest, Modeling agents as joint cognitive systems in smart
     manufacturing systems, Manufacturing Letters 17 (2018) 6–8.
[40] R. Nian, J. Liu, B. Huang, A review on reinforcement learning: Introduction and applications
     in industrial process control, Computers & Chemical Engineering 139 (2020) 106886.
[41] Z. Maamar, N. Faci, S. Kallel, M. Sellami, E. Ugljanin, Software agents meet internet of
     things, Internet Technology Letters 1 (2018) e17.
[42] T. DuBois, J. Golbeck, A. Srinivasan, Predicting trust and distrust in social networks, in:
     Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on
     and 2011 IEEE Third International Confernece on Social Computing (SocialCom), IEEE,
     2011, pp. 418–424.
[43] H. Liu, E.-P. Lim, H. W. Lauw, M.-T. Le, A. Sun, J. Srivastava, Y. Kim, Predicting trusts
     among users of online communities: an epinions case study, in: Proceedings of the 9th
     ACM conference on Electronic commerce, ACM, 2008, pp. 310–319.
[44] G. Lax, G. M. L. Sarné, CellTrust: a reputation model for C2C commerce, Electronic
     Commerce Research 8 (2006) 193–216.
[45] S. D. Ramchurn, N. R. Jennings, C. Sierra, L. Godo, Devising a trust model for multi-agent
     interactions using confidence and reputation, Applied Artificial Intelligence 18 (2004)
     833–852.