=Paper=
{{Paper
|id=Vol-1867/w9
|storemode=property
|title=Generating Trust-Based Recommendations for Social Networks organized by Groups
|pdfUrl=https://ceur-ws.org/Vol-1867/w9.pdf
|volume=Vol-1867
|authors=Lidia Fotia
|dblpUrl=https://dblp.org/rec/conf/woa/Fotia17
}}
==Generating Trust-Based Recommendations for Social Networks organized by Groups==
Lidia Fotia — DIIES, University Mediterranea of Reggio Calabria, Via Graziella, Località Feo di Vito, 89122 Reggio Calabria, Italy, lidia.fotia@unirc.it

'''Abstract'''—Evidence suggests that people often hesitate to buy from online vendors because of uncertainty about vendor behavior or the risk of receiving wrong information about the products. Trust plays a central role in helping consumers overcome such perceptions of risk. Moreover, thematic groups are gaining a lot of attention and high centrality in online communities, as users share opinions and/or mutually collaborate to reach their targets. Users can be helped by personal software agents able to perform activities aimed at supporting the purchase of products. This paper proposes a new trust measure for social networks organized by groups. In particular, we present a model to represent this scenario, and we introduce an algorithm for detecting trust recommendations in virtual communities in the presence of groups. We technically formalize our idea and show a complete example of how our approach works.

'''Index Terms'''—Recommendation, Online Communities, Trust, Group.

===I. Introduction===

An important issue in Online Social Networks (OSNs) is that of designing recommender systems capable of providing OSN users with useful suggestions about other potentially promising OSN users to contact as interlocutors, or about interesting content to access. Such an issue leads to the necessity of considering the opinions that different users express about other users or OSN content [1]–[3]. However, recommender systems have to face the general problem of malicious or even fraudulent behaviors of some users, which results in unreliable opinions that can negatively affect the effectiveness of the generated recommendations.

The issue of trusting one's interlocutors widely emerged in large online e-Commerce communities such as eBay, and it is now largely discussed in many OSNs which allow their users to create and share content, as well as opinions, with other users. This is the case, for example, of well-focused OSNs like EPINIONS (www.epinions.com) and CIAO (www.ciao.it), in which users provide reviews concerning commercial products falling in different categories. Almost all of these platforms face this issue by adopting a reputation system. Reputation is a form of indirect trust, where a user takes advantage of the opinions coming from other users to evaluate the probable trustworthiness of an interlocutor. Commonly, in traditional OSN contexts, the reputation of a user is evaluated by averaging the feedback provided by all the other users belonging to the same community. In the past literature, a common approach for predicting trust is represented by a number of models that rely on global reputation [4]–[6]: they are based on the evaluation of the behaviors of the users, which is shared across the entire community. These models, however, show an evident limitation due to the difficulty of taking into account the effects of malicious or fraudulent behaviors, such as users faking the feedback themselves. Other approaches, which also consider a local perspective of trust, are limited by the fact that they are supervised, i.e. they need a training phase to generate recommendations. In [7], we proposed to integrate the traditional use of global reputation with local reputation, which is based on the recommendations coming from the entourage of the user (friends, friends of friends and so on). But this proposal was limited because it does not consider a group-based structure [8]–[10].

In this paper, we define a new model to represent groups of users linked by trust relationships. Such a model depends on three main parameters: the relevance given to the reliability with respect to the reputation, the recommendation threshold under which a product can be considered as not interesting, and the number of groups in the online community. We propose an algorithm for detecting trust recommendations for a user, considering both the recommendations that come from users within his own group and those of other groups weighted by global reputation. We technically formalize our idea and algorithm, and we present a complete example of how our approach works. The paper is organized as follows: in Section II we deal with some related work. Section III provides technical details about our approach for finding trust recommendations about products, while Section IV describes a concrete example of application. Finally, in Section V we draw our conclusions and illustrate some possible future work.

===II. Related Work===

A large number of papers in the literature have investigated the topic we deal with here; therefore, in this section we cite only those approaches which we consider comparable with the one discussed in this paper.

Concerning the concept of trust, several proposals exist in the literature. Sherchan et al. [11] present an important review of trust, in which they comprehensively examine trust definitions and measurements from multiple fields including sociology, psychology, and computer science. Trust models [12]–[15] allow information derived from direct experiences and/or the opinions of others to be exploited to trust potential partners by means of a single measure [16], [17]. Xia et al. [18] build a subjective trust management model, AFStrust, which considers multiple factors including direct trust, recommendation trust, incentive function and active degree, and treats those factors based on analytic hierarchy process (AHP) theory and fuzzy logic rules. [19] describes how to build robust reputation systems using machine learning techniques, and defines a framework for translating a trust modeling problem into a learning problem.

In many disciplines, there is a population of people which should be optimally divided into multiple groups based on certain attributes to collaboratively perform a particular task [20], [21]. The problem becomes more complex when other requirements are also added: homogeneity, heterogeneity or a mixture of teams, the amount of consideration given to the preferences of individuals, variability or invariability of group size, having moderators, aggregation or distribution of persons, the overlapping level of teams, and so forth [5], [22]–[25]. Basu Roy et al. [26] consider the problem of how to form groups such that the users in the formed groups are most satisfied with the suggested top-k recommendations. They assume that the recommendations will be generated according to one of two group recommendation semantics, called Least Misery and Aggregate Voting. Rather than assuming groups are given, or relying on ad hoc group formation dynamics, their framework allows a strategic approach for forming groups of users in order to maximize satisfaction. In [27], the authors show how these problems can be mathematically formulated through a binary integer programming approach to construct an effective model which is solvable by exact methods in an acceptable time.

===III. Our Scenario===

Our scenario is represented by a virtual community S, formally denoted as S = ⟨A, G⟩, where A is the set of agents joined with S and G is the set of groups contained in S. We also assume that each group g is managed by an administrator agent a_g. Generally, all such communities are organized in social structures based on social relationships (like Facebook [28], [29] or Twitter [30]). The formation of a group is a process based on two main events: a user asks to join a group, and the administrator of the group accepts or refuses the request.

====A. Trust====

The trust measure t_{u,v} is a mapping that receives as input two agents u and v and yields as output a real value representing the degree of trust between the two agents: t_{u,v} = 0 (resp. t_{u,v} = 1) means that u assigns the minimum (resp. maximum) trustworthiness to v. The trust measure is asymmetric, in the sense that we do not automatically expect that v trusts u at the same level.

As a theoretical proposal, we had introduced a more general trust measure by combining two components, rel_{u,v} and rep_u, where (i) rel_{u,v} is the direct reliability of u, i.e. the trustworthiness that v has in u based on the past interactions between u and v, while (ii) rep_u is the global reputation of u, i.e. the trustworthiness that the whole community has in u. The reason for this choice was the necessity, when v does not have sufficient direct knowledge of u, of using the recommendations coming from the other agents of the community.

1) ''Reliability'': As for the reliability, we denote it by the mapping rel_{u,v}, assuming values in the domain [0, 1] ∪ {NULL}, where rel_{u,v} = NULL means that v did not have past interactions with u and thus is not able to evaluate u's trustworthiness.

2) ''Reputation'': As for the reputation of u, we denote it by rep_u ∈ [0, 1]. In order to compute the reputation, we adopt the notion

rep_u = (1 / (h_max · |REV_u|)) · Σ_{ρ ∈ REV_u} h_ρ     (1)

where REV_u is the set of the reviews made by the user u and h_ρ is the helpfulness associated with the review ρ, i.e. the level of satisfaction of the other users with that review. To normalize rep_u, we divide by the maximum value of the helpfulness, h_max.

The two trust components, reliability and reputation, are integrated into a unique value to compute the trust mapping t_{u,v} of u about v, producing an output ranging in [0, 1] as follows:

t_{u,v} = α · rel_{u,v} + (1 − α) · rep_v     (2)

where α is a real number, ranging in [0, 1], which is set by u to weight the relevance he/she assigns to the reliability with respect to the reputation.

====B. Product recommendation====

The user receives, at the current step, some recommendations about the products present in the community. In other words, rec^p_u is the recommendation that the user u receives about the product p. It is calculated as follows:

rec^p_u = ( Σ_{v ∈ REV_p, v ≠ u} t_{u,v} · rate^p_v ) / ( Σ_{v ∈ REV_p, v ≠ u} t_{u,v} )     (3)

where rate^p_v is the review of the user v about the product p (a number between 1 and 6), weighted by the trust of v. This means that his/her opinion about a product is taken into account if his/her trustworthiness is high. The weighted average allows us to identify an average value in which each of the starting numerical values has its own importance, specified by its weight; in particular, we can identify the center of gravity of the rates. In this way, we give more importance to the rates coming from users that the user u trusts; with a plain mean we would lose significant information.

====C. Groups====

At this point, we introduce the concept of groups into the community. In this context, we define the trust t*_{u,v} in two different ways.
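Before detailing the group-based measure, the basic measures of Equations (1)–(3) can be illustrated with a short Python sketch on toy data. All names are ours, and the fallback to pure reputation when the reliability is NULL is our assumption (the model only states that rel_{u,v} = NULL means v cannot evaluate u):

```python
# Illustration of Eqs. (1)-(3): reputation, combined trust and
# trust-weighted recommendation. Names and toy data are ours.

def reputation(helpfulness_scores, h_max=6):
    """Eq. (1): mean helpfulness of a user's reviews, normalized by h_max."""
    if not helpfulness_scores:
        return 0.0
    return sum(helpfulness_scores) / (h_max * len(helpfulness_scores))

def trust(rel, rep_v, alpha):
    """Eq. (2): t_{u,v} = alpha * rel_{u,v} + (1 - alpha) * rep_v.
    When rel is None (the NULL case), we fall back on reputation alone:
    one possible convention, not prescribed by the model."""
    if rel is None:
        return rep_v
    return alpha * rel + (1 - alpha) * rep_v

def recommendation(rates, trusts):
    """Eq. (3): trust-weighted average of the rates given to a product.
    rates and trusts are dicts keyed by reviewer id."""
    den = sum(trusts[v] for v in rates)
    if den == 0:
        return 0.0
    return sum(trusts[v] * rates[v] for v in rates) / den

rep = reputation([5, 6, 4])      # (5 + 6 + 4) / (6 * 3) = 0.8333...
t = trust(0.7, rep, alpha=0.5)   # 0.5 * 0.7 + 0.5 * 0.8333...
rec = recommendation({2: 5, 3: 2}, {2: 0.9, 3: 0.1})
print(round(rep, 2), round(t, 2), round(rec, 2))   # 0.83 0.77 4.7
```

Note that with α = 1 the trust reduces to pure reliability and with α = 0 to pure reputation, the two extreme cases discussed in the example of Section IV.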
We suppose that the trust perceived by an agent u with respect to the members of his/her own group is equal to 1 (i.e., t*_{u,v} = 1), while towards the other users the agent u considers the trust defined for the whole community (see Equation 2). In this way, we define rec*^p_u, the recommendation that the user u receives about the product p in the presence of groups:

rec*^p_u = ( Σ_{v ∈ g_u} rate^p_v + Σ_{v ∉ g_u} t_{u,v} · rate^p_v ) / ( Σ_{v ∈ REV_p} t*_{u,v} )     (4)

where g_u is the group to which the agent u belongs and REV_p is the set of the agents who have purchased the product p. It is calculated as the combination of two contributions: the rating of the users that belong to the group of the user u, and the score that the other groups give to the product, weighted by the trust that u assigns to their agents.

===IV. An Example of Scenario: e-Commerce===

Now, we explain how it is possible to use groups to generate product recommendations for the users of an online e-Commerce community. As an example, we model each user by a node (see Figure 1).

''Fig. 1. A community associated to the online e-Commerce.''

We assume that all the elements of a group are trust-related: a trust group g in G represents a mutual trust relationship between its elements. In our case, there are four groups of users, called g1, g2, g3 and g4. All the users in the same group are mutually linked by a trust relationship with value 1, while the values of the reliability are shown in Table I.

Table I. Simulation parameters
 u | v | rel_{u,v}
 1 | 2 | 0.7
 1 | 5 | 1
 1 | 8 | 0.3
 1 | 9 | 0.2
 1 | 11 | 0.1
 3 | 11 | 0.8
 5 | 6 | 0.1
 11 | 7 | 0.2

In particular, in our example there are nine products, divided into three main categories called Electronics, Informatics and Software (see Table II).

Table II. List of products
 productID | name | categoryID
 1 | Car TomTom, Display 5” | Electronics
 2 | Smartphone Android 5.1 | Electronics
 3 | TV HD Ready 15,6” Format 16:9 | Electronics
 4 | Notebook 15” i7, RAM 8 GB, HDD 500GB | Informatics
 5 | Black and white laser printer | Informatics
 6 | Tablet 7”, Wi-Fi, 8 GB | Informatics
 7 | Microsoft Windows 7 PRO SP1 32/64-bit | Software
 8 | Microsoft Office 365 Personal - 32/64 Bit | Software
 9 | Nuance Power PDF Standard | Software

In Table III, we show an example of dataset.

Table III. An example of database
 userID | productID | categoryID | rating | helpfulness
 1 | 9 | 3 | 5 | 6
 1 | 5 | 2 | 3 | 5
 1 | 6 | 2 | 5 | 2
 1 | 7 | 3 | 1 | 6
 1 | 8 | 3 | 5 | 6
 2 | 1 | 1 | 3 | 5
 2 | 2 | 1 | 4 | 6
 2 | 5 | 2 | 4 | 5
 2 | 8 | 3 | 5 | 2
 3 | 1 | 1 | 4 | 2
 3 | 8 | 3 | 5 | 3
 3 | 2 | 2 | 3 | 5
 3 | 4 | 2 | 5 | 6
 4 | 1 | 1 | 5 | 1
 4 | 3 | 1 | 5 | 2
 4 | 6 | 2 | 3 | 6
 4 | 9 | 3 | 2 | 1
 5 | 1 | 1 | 2 | 6
 5 | 3 | 1 | 2 | 6
 5 | 6 | 2 | 6 | 6
 5 | 9 | 3 | 6 | 6
 6 | 6 | 2 | 2 | 6
 6 | 5 | 2 | 4 | 2
 6 | 7 | 3 | 1 | 4
 6 | 8 | 3 | 0 | 3
 7 | 1 | 1 | 4 | 0
 7 | 9 | 3 | 2 | 6
 7 | 5 | 2 | 5 | 3
 7 | 8 | 3 | 4 | 2
 8 | 1 | 1 | 5 | 2
 8 | 3 | 1 | 5 | 0
 8 | 6 | 2 | 2 | 5
 8 | 9 | 3 | 3 | 5
 9 | 2 | 1 | 4 | 4
 9 | 6 | 2 | 2 | 5
 9 | 9 | 3 | 3 | 5
 11 | 1 | 1 | 5 | 3
 11 | 3 | 1 | 3 | 3
 11 | 2 | 2 | 4 | 3
 11 | 9 | 3 | 4 | 5

In our model, we have associated with each agent a profile containing, as its unique feature, the reputation of the agent as a reviewer of products. This reputation has been computed by averaging, over all the reviews posted by the agent, the helpfulness associated with each review, where the helpfulness is available in the dataset and is obtained from the opinions expressed by the users of the community. The reputation values (see Equation 1) of the agents belonging to our scenario are as follows: rep_1=0.83; rep_2=0.75; rep_3=0.66; rep_4=0.41; rep_5=1; rep_6=0.62; rep_7=0.45; rep_8=0.5; rep_9=0.79; rep_10=0 and rep_11=0.58.

At this point, we introduce the value of trust t_{u,v}, which is the combination of the reliability and the reputation. Having fixed the agent a1, we compute the opinion (i.e., trust) that a1 has with regard to the other agents. Recall that this value changes with α; we consider three values of α. In particular, α=1 means that the agent a1 considers only the opinions of the agents with whom he/she interacted in the past (contrariwise, α=0 means that only the reputation counts). Finally, α=0.5 means that the agent a1 gives the same weight to the opinions of the agents with whom he/she interacted and to the others. For details, see Tables V–VII.

Table V. Simulation parameters for α=0
 u | v | t_{u,v}
 1 | 2 | 0.75
 1 | 3 | 0.66
 1 | 4 | 0.41
 1 | 5 | 1
 1 | 6 | 0.62
 1 | 7 | 0.45
 1 | 8 | 0.5
 1 | 9 | 0.79
 1 | 10 | 0
 1 | 11 | 0.58

Table VI. Simulation parameters for α=0.5
 u | v | t_{u,v}
 1 | 2 | 0.72
 1 | 3 | 0.83
 1 | 4 | 0.20
 1 | 5 | 1
 1 | 6 | 0.31
 1 | 7 | 0.22
 1 | 8 | 0.4
 1 | 9 | 0.49
 1 | 10 | 0
 1 | 11 | 0.34

Table VII. Simulation parameters for α=1
 u | v | t_{u,v}
 1 | 2 | 0.7
 1 | 3 | 1
 1 | 4 | 0
 1 | 5 | 1
 1 | 6 | 0
 1 | 7 | 0
 1 | 8 | 0.3
 1 | 9 | 0.2
 1 | 10 | 0
 1 | 11 | 0.1

Let ξ be a threshold fixed by the agent a1: we suggest only those products that have rec^p_u greater than ξ (in our case, we fix ξ = 4). In particular, we note that the agent a5, who has a high reliability value for the agent a1, buys the products p8 and p9 and assigns them a high rate, while the rest of the community gives them a very low rate. Surely a1 would be very interested in these products, because he/she trusts a5 with a high value.

At this point, we see how the algorithm behaves. The agent a1 receives, at the current step, some recommendations from the other agents, in response to previous recommendation requests (see Table IV).

Table IV. The recommendations to the agent a1
 α | p1 | p2 | p3 | p4 | p5 | p6 | p7 | p8 | p9
 0 | 4.35 | 3.2 | 3.3 | 5 | 4.26 | 3.32 | 1 | 3.56 | 3.72
 0.5 | 3.48 | 3.26 | 3.10 | 5 | 4.2 | 3.75 | 1 | 4.01 | 4.10
 1 | 3.11 | 3.4 | 2.71 | 0 | 4 | 4.66 | 0 | 5 | 4.93

If α=1, we suggest to a1 the products p6, p8 and p9. In this case, we consider the recommendations that come from agents that have a high reliability; in fact, these products were acquired and rated highly by the agents a5 and a2. Comparing these results with the products actually bought by the user, it is visible that three out of three suggested products were actually purchased by a1. If α=0, we suggest to a1 the products p1, p4 and p5. This is consistent, because they are products purchased by agents who have a high reputation in the community; but these agents did not interact directly with a1, and therefore they do not know his/her preferences. Indeed, only the product p1 is of interest to a1. If α=0.5, we suggest to a1 the products p4, p5, p8 and p9. This choice is a good compromise, since three of the four products are of interest to the agent a1.

With the introduction of the groups in the community, we can apply the assumption made in Section III. Recall that, for the agents that are in the same group, the trust is equal to 1. In our case, a1 is in the group g1 with the agents a3 and a7, therefore t*_{1,3} and t*_{1,7} are always equal to 1. Now, we can calculate the recommendations (see Equation 4) to the agent a1 for all the products in the community in the presence of the groups. Table VIII shows the results obtained.

Table VIII. The recommendations to the agent a1 in the presence of groups
 α | p1 | p2 | p3 | p4 | p5 | p6 | p7 | p8 | p9
 0 | 3.75 | 3.68 | 3.3 | 5 | 4.42 | 3.75 | 1 | 3.79 | 3.31
 0.5 | 3.61 | 3.6 | 3.10 | 5 | 4.49 | 3.75 | 1 | 4.15 | 3.49
 1 | 3.43 | 3.5 | 2.71 | 0 | 4.59 | 4.66 | 0 | 5.18 | 4.57

The results in the presence of the groups are better, because a3 and a7 know a1 better and consequently are able to make targeted recommendations.

====A. The Performance Recommendation Measure====

In order to model the process of evaluating the performance of the recommendations provided by learning agents, we defined two indexes. Let R_i be the set of recommendations provided to the agent a_i, with the subset of R_i relating to the products which a_i actually purchased being of particular interest. Besides, let Γ*_i be the set of purchases made by a_i, where Γ_i is the set of all actions executed within the context of agent a_i. To provide a performance measure we defined two indexes, called Precision and Recall, as follows:

Pre(R_i) = |Γ*_i ∩ R_i| / |R_i|     (5)

Rec(R_i) = |Γ*_i ∩ R_i| / |Γ*_i|     (6)

By the definition of Pre(R_i), it follows that a high value does not mean being a good recommender agent.
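The group-aware recommendation of Equation (4) and the two indexes of Equations (5)–(6) can be sketched as follows; names and toy data are ours, and treating in-group reviewers with weight t* = 1 follows the assumption stated for the group case:

```python
# Illustration of Eq. (4) and of Precision/Recall (Eqs. (5)-(6)).

def group_recommendation(rates, trusts, group):
    """Eq. (4): rates from u's own group count with full trust (t* = 1);
    rates from outside the group are weighted by the community trust
    t_{u,v}. rates/trusts are dicts keyed by reviewer id."""
    num = sum(r if v in group else trusts[v] * r for v, r in rates.items())
    den = sum(1.0 if v in group else trusts[v] for v in rates)
    return num / den if den > 0 else 0.0

def precision(purchases, recommended):
    """Eq. (5): fraction of recommended products actually purchased."""
    return len(purchases & recommended) / len(recommended) if recommended else 0.0

def recall(purchases, recommended):
    """Eq. (6): fraction of purchases that had been recommended."""
    return len(purchases & recommended) / len(purchases) if purchases else 0.0

# Toy data: reviewers 3 and 7 sit in u's group, reviewer 5 outside it.
rec = group_recommendation({3: 5, 7: 4, 5: 2}, {5: 0.5}, group={3, 7})
print(round(rec, 2))                                               # 4.0
print(precision({"p6", "p8", "p9"}, {"p6", "p8", "p9"}))           # 1.0
print(recall({"p1", "p4", "p6", "p8", "p9"}, {"p6", "p8", "p9"}))  # 0.6
```

The constant weight 1 given to in-group reviewers is why, in the example above, the group-based scores of Table VIII differ from those of Table IV.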
Indeed, it is possible that the overall performance of the provided recommendations is not the greatest possible. Rec(R_i), which is the fraction of purchases successfully suggested by the agent a_i, allows us to take this aspect into account. In our case, we obtain different values of Precision and Recall as α varies (see Table IX). It is clear that when a_i takes into account only the opinion of the whole community, we have relatively low performances. Instead, combining the opinion of the whole community with that of the agents who had direct interactions with a_i, we obtain high values of Precision and Recall. In particular, when α=0.5 in the absence of the groups we have a higher value of Recall, because the agents who belong to the community but had no interactions with a_i bought many products, and their evaluations are appreciated in the community; this situation allows products of interest to a_i to be advised. However, when α=1 in the presence of the groups, we obtain Rec(R_i)=0.8, which is the highest value. This means that the recommendations of the agents within the group, joined to those of the agents with which a_i has interacted in the past, allow us to suggest the products of his/her interest with very high accuracy (88%).

Table IX. Precision and Recall in our example, varying α
 α | Pre(R_i) | Rec(R_i)
 0 | 0.33 | 0.2
 0.5 | 0.75 | 0.6
 1 | 1 | 0.6

Table X. Precision and Recall in our example, with groups
 α | Pre(R_i) | Rec(R_i)
 0 | 0.33 | 0.2
 0.5 | 0.66 | 0.4
 1 | 1 | 0.8

===V. Conclusion===

In this paper, we propose a model capable of integrating reliability and reputation in an OSN organized by groups. In particular, we considered three important parameters to characterize the model: the relevance given to the reliability with respect to the reputation, the recommendation threshold under which a product can be considered as not interesting, and the number of the groups. We have presented a realistic example, and the results have shown that when the agent takes into account only the reputation, we have low performances. Instead, combining the opinion of the whole community (reputation) with that of the agents who had direct interactions with him/her (reliability), we obtain high values of Precision and Recall. Moreover, in the presence of the groups, Recall reaches its highest value; in other words, in this latter case, our model allows products to be suggested with very high accuracy (88%). In this paper, we limited ourselves to introducing and formalizing the idea, and we presented an example of how the proposed approach can find product recommendations in an online e-Commerce community. Our ongoing research is currently devoted to applying the approach to real social networks, in which the advantages and limitations introduced by our proposal can be quantitatively and effectively evaluated.

===References===

[1] F. Buccafurri, L. Fotia, and G. Lax, “Allowing continuous evaluation of citizen opinions through social networks,” in International Conference on Electronic Government and the Information Systems Perspective. Springer, 2012, pp. 242–253.

[2] ——, “Allowing privacy-preserving analysis of social network likes,” in Privacy, Security and Trust (PST), 2013 Eleventh Annual International Conference on. IEEE, 2013, pp. 36–43.

[3] F. Buccafurri, L. Fotia, G. Lax, and V. Saraswat, “Analysis-preserving protection of user privacy against information leakage of social-network likes,” Information Sciences, vol. 328, pp. 340–358, 2016.

[4] P. De Meo, A. Nocera, D. Rosaci, and D. Ursino, “Recommendation of reliable users, social networks and high-quality resources in a social internetworking system,” AI Communications, vol. 24, no. 1, pp. 31–50, 2011.

[5] A. Comi, L. Fotia, F. Messina, G. Pappalardo, D. Rosaci, and G. M. Sarné, “An evolutionary approach for cloud learning agents in multi-cloud distributed contexts,” in 2015 IEEE 24th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises. IEEE, 2015, pp. 99–104.

[6] P. D. Meo, K. Musial-Gabrys, D. Rosaci, G. M. Sarnè, and L. Aroyo, “Using centrality measures to predict helpfulness-based reputation in trust networks,” ACM Transactions on Internet Technology (TOIT), vol. 17, no. 1, p. 8, 2017.

[7] P. De Meo, F. Messina, D. Rosaci, and G. M. Sarné, “Recommending users in social networks by integrating local and global reputation,” in International Conference on Internet and Distributed Computing Systems. Springer, 2014, pp. 437–446.

[8] P. De Meo, L. Fotia, F. Messina, D. Rosaci, and G. M. Sarné, “Forming classes in an e-learning social network scenario,” in International Symposium on Intelligent and Distributed Computing. Springer, 2016, pp. 173–182.

[9] D. Rosaci, “Finding semantic associations in hierarchically structured groups of web data,” Formal Aspects of Computing, vol. 27, no. 5-6, pp. 867–884, 2015.

[10] P. De Meo, E. Ferrara, D. Rosaci, and G. M. Sarné, “Trust and compactness in social network groups,” IEEE Transactions on Cybernetics, vol. 45, no. 2, pp. 205–216, 2015.

[11] W. Sherchan, S. Nepal, and C. Paris, “A survey of trust in social networks,” ACM Computing Surveys (CSUR), vol. 45, no. 4, p. 47, 2013.

[12] E. Majd and V. Balakrishnan, “A trust model for recommender agent systems,” Soft Computing, pp. 1–17, 2016.

[13] S. Tadelis, “Reputation and feedback systems in online platform markets,” Annual Review of Economics, vol. 8, no. 1, 2016.

[14] A. Comi, L. Fotia, F. Messina, D. Rosaci, and G. M. Sarné, “A partnership-based approach to improve QoS on federated computing infrastructures,” Information Sciences, vol. 367, pp. 246–258, 2016.

[15] A. Comi, L. Fotia, F. Messina, G. Pappalardo, D. Rosaci, and G. M. Sarné, “A reputation-based approach to improve QoS in cloud service composition,” in Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), 2015 IEEE 24th International Conference on. IEEE, 2015, pp. 108–113.

[16] D. Rosaci, G. M. Sarné, and S. Garruzzo, “Integrating trust measures in multiagent systems,” International Journal of Intelligent Systems, vol. 27, no. 1, pp. 1–15, 2012.

[17] L. Xiong and L. Liu, “PeerTrust: Supporting reputation-based trust for peer-to-peer electronic communities,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 7, pp. 843–857, 2004.

[18] H. Xia, Z. Jia, L. Ju, X. Li, and Y. Zhu, “A subjective trust management model with multiple decision factors for MANET based on AHP and fuzzy logic rules,” in Green Computing and Communications (GreenCom), 2011 IEEE/ACM International Conference on. IEEE, 2011, pp. 124–130.

[19] X. Liu, A. Datta, and E.-P. Lim, Computational Trust Models and Machine Learning. CRC Press, 2014.

[20] M. Wessner and H.-R. Pfister, “Group formation in computer-supported collaborative learning,” in Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work. ACM, 2001, pp. 24–31.

[21] A. Comi, L. Fotia, F. Messina, D. Rosaci, and G. M. Sarné, “GroupTrust: Finding trust-based group structures in social communities,” in International Symposium on Intelligent and Distributed Computing. Springer, 2016, pp. 143–152.

[22] L. R. Hoffman and N. R. Maier, “Quality and acceptance of problem solutions by members of homogeneous and heterogeneous groups,” The Journal of Abnormal and Social Psychology, vol. 62, no. 2, p. 401, 1961.

[23] A. Comi, L. Fotia, F. Messina, G. Pappalardo, D. Rosaci, and G. M. Sarné, “Forming homogeneous classes for e-learning in a social network scenario,” in Intelligent Distributed Computing IX. Springer, 2016, pp. 131–141.

[24] ——, “Using semantic negotiation for ontology enrichment in e-learning multi-agent systems,” in Complex, Intelligent, and Software Intensive Systems (CISIS), 2015 Ninth International Conference on. IEEE, 2015, pp. 474–479.

[25] F. Buccafurri, L. Fotia, A. Furfaro, A. Garro, M. Giacalone, and A. Tundis, “An analytical processing approach to supporting cyber security compliance assessment,” in Proceedings of the 8th International Conference on Security of Information and Networks. ACM, 2015, pp. 46–53.

[26] S. Basu Roy, L. V. Lakshmanan, and R. Liu, “From group recommendations to group formation,” in Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data. ACM, 2015, pp. 1603–1616.

[27] A. A. Kardan and H. Sadeghi, “An efficacious dynamic mathematical modelling approach for creation of best collaborative groups,” Mathematical and Computer Modelling of Dynamical Systems, vol. 22, no. 1, pp. 39–53, 2016.

[28] F. Buccafurri, L. Fotia, and G. Lax, “Privacy-preserving resource evaluation in social networks,” in Privacy, Security and Trust (PST), 2012 Tenth Annual International Conference on. IEEE, 2012, pp. 51–58.

[29] ——, “Allowing non-identifying information disclosure in citizen opinion evaluation,” in International Conference on Electronic Government and the Information Systems Perspective. Springer, 2013, pp. 241–254.

[30] ——, “Social signature: Signing by tweeting,” in International Conference on Electronic Government and the Information Systems Perspective. Springer, 2014, pp. 1–14.