Supporting Learners-to-Learners Interactions Based on Online Social Networks Information

Pasquale De Meo, Fabrizio Messina, Domenico Rosaci and Giuseppe M. L. Sarné

Pasquale De Meo is with the Dept. DICAM, University of Messina, Viale Andrea Doria, 6 - 01010 Messina, Italy, e-mail: p.demeo@dmi.unime.it. Fabrizio Messina is with the Dept. DMI, University of Catania, Viale Andrea Doria, 6 - 01010 Catania, Italy, e-mail: messina@dmi.unict.it. Domenico Rosaci is with the Dept. DIIES, University Mediterranea of Reggio Calabria, Loc. Feo di Vito - 89123 Reggio Calabria, Italy, e-mail: domenico.rosaci@unirc.it. Giuseppe M. L. Sarné is with the Dept. DICEAM, University Mediterranea of Reggio Calabria, Loc. Feo di Vito - 89123 Reggio Calabria, Italy, e-mail: sarne@unirc.it.

Abstract—E-Learning students can benefit from a proper class formation process based on the students' needs. In particular, Online Social Networks make available data concerning users' interactions, such as skills and trust relationships, that are behind the dynamics of thematic social network groups and can be exploited to form e-Learning classes. To this aim, we propose a model based on such information, which is properly combined to support the dynamics of e-Learning classes on Online Social Networks. The approach provides a way to give suggestions to users about the best classes to join and to class administrators about the best students to accept. The proposed approach has been tested by simulating an e-Learning scenario within a large social network, showing its capability to satisfy all the actors.

Index Terms—Social Networks; Software Agents; Thematic Groups

I. INTRODUCTION

E-Learning (EL) represents a good solution for courses, as it provides time and location flexibility, low costs and information sharing [1]. In this context, among the factors affecting learners' progress there are personal attitudes, initial skills and the level of mutual trust, which influences the attitude of peers to start interactions [2] and minimizes the cold start effect. Given that such information is widely available in Online Social Networks (OSNs), EL activities can benefit from synergies with OSNs. Besides, many OSN platforms [3], [4] support thematic groups that, for their relevance, have been largely investigated [5]–[9].

In addition, software agents can support EL class formation processes by suggesting to students (classes) the best classes (students) to join (to accept) [10]–[12]. Studies confirmed that, within social communities, users start to interact and share information with other peers also based on the level of mutual trust existing among them [13]–[18]. Besides, existing trust relationships can also give a significant contribution to the formation of OSN groups, in addition to similarity criteria [13], [19], [20].

It is obvious that, due to the huge amount of data available on OSNs and the huge number of thematic groups, examining the entire space of data to suggest suitable solutions for learners' needs is impracticable. Therefore, based on previous research experiences [15], [21]–[26], we designed a model to manage the formation and evolution of e-Learning classes by using user information available on OSNs. This information is linearly combined in a measure, named convenience, used to suggest the best class (student) to join or leave (to accept or remove) to a user (to the class itself). First of all, the skills of a student with respect to a set of topics of interest represent the basic aspect we considered to give teaching homogeneity to the class [27], in order to balance "demand" and "offer" of support requests (i.e. interactions). Trust represents the second component, which is computed by combining several specific factors, related to specific e-Learning concerns, giving a complete trust model based on reliability and reputation criteria and on some countermeasures against erroneous or malicious opinions. The model is designed to assist students and classes by means of personal software agents delegated to create, manage and update the profiles of their owners on the basis of information found on the OSNs. The convenience measure is exploited by a distributed procedure, named Class Formation (CF), that allows learner/class software agents to appropriately cooperate to form classes.

The experimental trials have shown that running the CF algorithm allows students and class administrators to improve the average value of the convenience within classes.

The rest of the paper is organized as follows: Section II introduces the context and the behavioral, trust and convenience measures. The proposed architecture is described in Section III, while Section IV discusses the CF algorithm. Section V presents the experiments we carried out, Section VI examines related literature and, finally, in Section VII we draw our conclusions.
II. E-LEARNING INTERACTIONS AND MEASURES

Let N be the set of OSN members (||N|| = N) and C the set of classes (||C|| = C), where each class c ∈ C consists of a number of learners and at least a teacher. We also suppose that each user u_i ∈ N is associated with a software agent [28] a_i able to obtain a view on the background and attitudes of u_i and to assist him/her in joining or leaving classes. Similarly, each class manager is assisted by a software agent, denoted as A_i, in deciding whether a new member can be accepted in the class.

We also define a behavioral measure, which is related to the interactions carried out by a learner (see Section II-A), and a trust measure, which considers the level of mutual trust among OSN members (see Section II-B).

A. Behavioral Measures

The principle behind the definition of the behavioral measures is that, in order to form classes, a balance between required and offered skills is desirable: each learner is interested in improving his/her knowledge by joining classes whose members have suitable capabilities, while class managers are interested in including users holding suitable skills and an attitude to interact.

Classes. We define a class c as a tuple ⟨S, W, V_c, o⟩ where: (i) S = {s_1, s_2, ..., s_m} is the skill set required by the class manager of c; (ii) W = {w_1, w_2, ..., w_m} is the weight set used to evaluate the students' skills; (iii) V_c is the minimum overall skill grade, computed over the specific skill set S, required to join c; (iv) o is the reference topic, subject or goal of c. More formally, for an OSN user u_k and a skill set S,

V(k, S) = Σ_{i=1..m} w_i · g(k, s_i)

where g(k, s_i) ∈ [0, 1] ⊂ R is the knowledge grade of u_k for the skill s_i, while w_i ∈ [0, 1] ⊂ R is set by the class manager to weight g(k, s_i), with Σ_{i=1..m} w_i = 1.
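As a concrete illustration of the admission criterion above, the following Python sketch computes the weighted skill grade V(k, S) of a candidate learner and compares it with the class threshold V_c. The skill names, weights and grades are hypothetical values of ours, not taken from the paper.

# Minimal sketch of the weighted skill grade V(k, S); all values are invented.

def skill_grade(grades, weights):
    # V(k, S) = sum_i w_i * g(k, s_i), with g, w in [0, 1] and sum_i w_i = 1.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * grades.get(skill, 0.0) for skill, w in weights.items())

weights = {"python": 0.5, "statistics": 0.3, "english": 0.2}   # weight set W fixed by the manager
grades = {"python": 0.8, "statistics": 0.4, "english": 0.9}    # knowledge grades g(k, s_i)
V_c = 0.6                                                      # minimum overall skill grade of the class

v = skill_grade(grades, weights)
print(f"V(k, S) = {v:.2f} ->", "eligible" if v >= V_c else "rejected")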
The User attitude H. The attitude of the user u_k to require and/or offer interactions for his/her skills is updated as:

H(k) = α · H(k) + (1 − α) · H'(k)

where the new value of H(k) ∈ [0, 1] combines, weighted by a system parameter α ∈ [0, 1], the previous value of H(k) and a contribution H'(k) due to the new interactions, computed as:

H'(k) = 1 − |h(k)_req − h(k)_off| / (h(k)_req + h(k)_off)    if h(k)_req + h(k)_off ≠ 0
H'(k) = 0.5                                                   otherwise

where h(k)_req and h(k)_off are, respectively, the evaluations of the interactions over the N_req skill subsets S_i ⊂ S requested and the N_off skill subsets offered by u_k at the new step, obtained as:

h(k)_req = (1/N_req) · Σ_{i=1..N_req} g(k, S_req,i)
h(k)_off = (1/N_off) · Σ_{i=1..N_off} g(k, S_off,i)

Therefore, when h(k)_req ≈ h(k)_off, then H' ≈ 1, i.e. the user u_k asks for and provides interactions to the same extent. Vice versa, when his/her attitude is mainly to offer (or to require) interactions, H' ≈ 0.

Class Behavior. The class behavior for the class c_j, denoted as B(j) ∈ [0, 1], characterizes its tendency to offer or require interactions and is defined as:

B(j) = (1/||c_j||) · Σ_{k=1..||c_j||} H(k)
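A minimal Python sketch of this update is given below. It is illustrative only: the value of the system parameter α, the grades of the requested/offered skill subsets and the function names are our own assumptions.

def new_contribution(h_req, h_off):
    # H'(k) = 1 - |h_req - h_off| / (h_req + h_off); 0.5 when no interactions occurred.
    if h_req + h_off == 0:
        return 0.5
    return 1.0 - abs(h_req - h_off) / (h_req + h_off)

def update_attitude(h_old, h_req, h_off, alpha=0.7):
    # H(k) <- alpha * H(k) + (1 - alpha) * H'(k); alpha is a system parameter.
    return alpha * h_old + (1 - alpha) * new_contribution(h_req, h_off)

def class_behavior(attitudes):
    # B(j): mean attitude H(k) over the members of class c_j.
    return sum(attitudes) / len(attitudes)

h_req = sum([0.6, 0.8]) / 2     # h(k)_req over the requested skill subsets (invented grades)
h_off = sum([0.7, 0.5]) / 2     # h(k)_off over the offered skill subsets (invented grades)
H = update_attitude(h_old=0.5, h_req=h_req, h_off=h_off)
print(round(H, 3), round(class_behavior([H, 0.4, 0.9]), 3))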
B. Trust Measure

The second measure is based on the concept of trust [29] and is computed by combining two factors, namely reliability and reputation. The former derives from the direct knowledge between truster and trustee due to the interactions occurred between them in the past, while reputation is an indirect knowledge derived from the past interactions of the trustee with counterparts other than the current truster [30]. An interaction between two generic OSN learners u_p and u_r consists of a process where u_p starts one or more learning tasks with u_r. Consequently, their software agents a_p and a_r observe the interactions of their owners to register the interaction features (type, topic, duration) and collect feedbacks about other OSN users to compute their respective reputations. Such feedbacks refer to the quality of these interactions (remember that users' skills are evaluated by the behavioral measures).

Let η_{p,r} and ρ_{p,r} be, respectively, the measures of reliability and reputation that the OSN user u_p (i.e. agent a_p) computes for the OSN user u_r (i.e. agent a_r). The trust measure τ_{p,r} is obtained by combining the reliability (η_{p,r}) and the reputation (ρ_{p,r}), weighted by means of a coefficient β_{p,r} ∈ [0, 1]:

τ_{p,r} = 0.5                                              if I_{p,r} = 0
τ_{p,r} = β_{p,r} · η_{p,r} + (1 − β_{p,r}) · ρ_{p,r}      if I_{p,r} > 0

where I_{p,r} is the number of interactions occurred between the two actors. Note that for new learners the initial trust/reputation is set to 0.5 to contrast whitewashing strategies [31]. For computing β_{p,r}, we consider that its value increases with the number of interactions occurred between the two learners, because their direct knowledge improves over time, and decreases when the reliability in providing recommendations decreases (the reputation of some peers may be affected by malicious behaviors). Therefore, the coefficient β_{p,r} is computed as:

β_{p,r} = max(β_1, β_2)

where β_1 = min(I_{p,r}/I_max, 1) and β_2 = 1 − Ω^{(t)}_{p,r}. The parameter Ω^{(t)}_{p,r} is the average confidence at time t in the current set of recommenders that provided at least a recommendation to a_p about a_r, computed as:

Ω^{(t)}_{p,r} = (1/||R_{p,r}||) · Σ_{q ∈ R_{p,r}} |σ^{rec}_{p,r} − τ^{(t−1)}_{q,r}|

where R_{p,r} is the set of agents that provided an opinion about a_r. This term minimizes the effect of untrustworthy opinions by giving more relevance to those mentors evaluated by a_p as the most similar to it. I_{p,r} is the number of interactions, which is incremented at each step; when it is greater than the threshold I_max, the "knowledge" between the two users is considered maximum. As a result, the contribution of the reputation in computing trust decreases as the number of interactions occurred between the two involved learners increases.

1) Computation of Reliability: The reliability measure η_{p,r} ∈ [0, 1] is computed by u_p (i.e. a_p) about u_r (i.e. a_r) as:

η^{(t)}_{p,r} = ϑ_{p,r} · σ^{(t)}_{p,r} + (1 − ϑ_{p,r}) · η^{(t−1)}_{p,r}

where the parameter ϑ_{p,r} weights, in a complementary way, the feedback parameter σ_{p,r} ∈ [0, 1] computed on the last interaction occurred between u_p and u_r at time-step t and the value of η_{p,r} computed at time-step (t − 1).

The parameter ϑ_{p,r} considers the relevance assigned to the interaction between u_p and u_r, let it be Ψ_{p,r}. In principle, malicious behaviors aimed at gaining a good reputation can rely on low-relevance interactions (Ψ ≪ 0.5) carried out with high reliability (σ ≫ 0.5), and then exploit the good reputation on interactions of high relevance (Ψ ∼ 1) while providing poor performance (σ ∼ 0). Therefore, the closer the ratio Ψ/σ to 1, the higher the value of ϑ; the farther the ratio Ψ/σ from 1, the lower the value of ϑ. A possible choice for ϑ is the adoption of a Gaussian centered in 1:

ϑ = e^{−(Ψ/σ − 1)² / v²}

ϑ acts as a "filter" for those values of σ which, for the corresponding values of Ψ, may reflect a malicious behavior; small values of v make the filter strict, while large values of v make ϑ close to 1 for almost every σ, ensuring that almost the whole history of feedbacks σ is considered in computing η.

2) Computation of Reputation: The reputation measure ρ_{p,r} ∈ [0, 1] is computed by u_p (i.e. a_p) with respect to u_r (i.e. a_r) as:

ρ_{p,r} = (1/||R_{p,r}||) · Σ_{q=1..||R_{p,r}||} τ_{q,r}

With the usual meaning of these indexes, 0/1 means that u_r is considered totally unreliable/reliable.
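The trust computation described above can be summarized in the short Python sketch below. It is illustrative only: the helper names and all numeric values (v, I_max, Ω^{(t)}_{p,r}, the feedbacks and the previous reliability) are hypothetical assumptions of ours, not values prescribed by the model.

import math

def theta(psi, sigma, v=0.5):
    # Gaussian filter centered in 1: theta = exp(-((psi/sigma - 1)^2) / v^2).
    if sigma == 0:
        return 0.0   # no useful feedback: the previous reliability is kept unchanged
    return math.exp(-((psi / sigma - 1.0) ** 2) / (v ** 2))

def update_reliability(eta_prev, sigma, psi, v=0.5):
    # eta^(t) = theta * sigma^(t) + (1 - theta) * eta^(t-1)
    t = theta(psi, sigma, v)
    return t * sigma + (1 - t) * eta_prev

def reputation(recommender_trust):
    # rho_{p,r}: mean of the trust values tau_{q,r} declared by the recommenders.
    return sum(recommender_trust) / len(recommender_trust) if recommender_trust else 0.5

def trust(eta, rho, n_interactions, i_max, omega):
    # tau = beta * eta + (1 - beta) * rho, with beta = max(min(I/I_max, 1), 1 - Omega);
    # 0.5 is returned when no direct interaction occurred (anti-whitewashing default).
    if n_interactions == 0:
        return 0.5
    beta = max(min(n_interactions / i_max, 1.0), 1.0 - omega)
    return beta * eta + (1 - beta) * rho

eta = update_reliability(eta_prev=0.6, sigma=0.9, psi=0.85)   # one new interaction
rho = reputation([0.7, 0.8, 0.6])                             # opinions of three recommenders
print(round(trust(eta, rho, n_interactions=5, i_max=20, omega=0.2), 3))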
C. Convenience Measure

Behavioral and trust measures are combined to measure the convenience, for a user, to join the class c_j. The asymmetric nature of the trust measure also implies the asymmetry of the convenience. In particular, let φ be a parameter computed as:

φ = (1 − |H(k) − B(j)|) / ||c_j||

where ||c_j|| is the number of users (i.e. agents) affiliated with c_j. Then the convenience γ_{k,j} for the user u_k to join the class c_j, and the convenience η_{j,k} of the class c_j to accept the affiliation request of the user u_k, are computed as:

γ_{k,j} = φ · Σ_{a_i ∈ c_j} τ_{k,i}        η_{j,k} = φ · Σ_{a_i ∈ c_j} τ_{i,k}

Both measures decrease as the difference between the behaviors of a_k and c_j increases. As a consequence of the asymmetric nature of trust, the procedure described in Section IV is distributed among the agents assisting learners and those assisting class managers. As it will be discussed in the experimental section, the aim of the distributed procedure is to let the system reach a balance in terms of convenience among all the considered actors of the proposed OSN EL scenario [32].
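A small Python sketch of the two convenience values, using hypothetical trust values and attitudes (the function name and all numbers are ours), is the following:

def convenience(tau_from_user, tau_to_user, h_k, b_j):
    # Returns (gamma_{k,j}, eta_{j,k}) for a user u_k and a class c_j.
    # phi = (1 - |H(k) - B(j)|) / ||c_j||; gamma sums the trust u_k assigns to the
    # members of c_j, eta sums the trust the members assign to u_k.
    size = len(tau_from_user)
    phi = (1.0 - abs(h_k - b_j)) / size
    gamma = phi * sum(tau_from_user)   # tau_{k,i} for each member a_i of c_j
    eta = phi * sum(tau_to_user)       # tau_{i,k} for each member a_i of c_j
    return gamma, eta

gamma, eta = convenience(tau_from_user=[0.8, 0.6, 0.7],
                         tau_to_user=[0.5, 0.9, 0.6],
                         h_k=0.7, b_j=0.6)
print(round(gamma, 3), round(eta, 3))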
III. THE MULTI-AGENT E-LEARNING ARCHITECTURE

In the proposed approach, OSN users (i.e. learners) are supported by intelligent software agents [33] capable of performing all the activities aimed at organizing classes based on the measures presented in Section II. All the agents execute a set of tasks which are briefly summarized below, categorized as Learner Agent Behavior and Class Agent Behavior.

The Learner Agent Behavior. The behavior of a learner agent consists of several tasks periodically executed to maintain the data needed to run the CF algorithm (see Section IV). Let u_k be the generic learner and a_k his/her agent; the following tasks are triggered by the learner and executed by the agent: (i) any interaction of learner u_k with one or more peers will trigger agent a_k to update the behavioral measures; (ii) any reliability change of a user u_j that interacted with a peer will trigger a_k to update the reliability measure; (iii) the convenience measure will be updated for any change in the reliability measure; (iv) behavioral and trust measures are periodically sent to the class agent, once and if they have been recalculated; (v) the generic software agent a_k will assist user u_k in taking decisions about joining or leaving classes.

The Class Agent Behavior. The behavior of a class agent consists of several tasks executed periodically to maintain the data needed to run the CF algorithm (see Section IV). Let c_j be a class and A_j the associated software agent; the following tasks are triggered by the interactions between the learner agents a_k and the class agent A_j: (i) any message of a_k containing updated behavioral and/or convenience measures will trigger agent A_j to update the behavioral and/or convenience measures for the whole class; (ii) whenever the behavioral measure of the class c_j has changed, A_j will send the updated measure to all the learner agents of the class c_j; (iii) A_j will assist the class manager of c_j in taking decisions about the requests coming from agents a_k to join or leave the class.

IV. THE DISTRIBUTED PROCEDURE FOR CLASS FORMATION (CF)

In our approach, each learner agent has: (i) to update all the proposed measures whenever one or more interactions occurred, (ii) to send the new values to its class agents and (iii) to assist its own user in taking decisions about joining or leaving classes by executing the CF algorithm (to this aim, it will receive behavioral and trust measures from its own class agents).

Each class agent has: (i) to wait for learner agent messages in order to update the proposed measures of the entire class, (ii) to send the updated behavioral measure, allowing learner agents to update their own convenience measures, and (iii) to assist its own class manager in taking decisions about the requests coming from learner agents to join or leave the class.

A. The distributed CF procedure

Let T be the time between two consecutive steps of the CF procedure executed by the generic learner agent in order to join a set of classes of the same topic. We also suppose that agents can query a distributed database named CR (Class Repository) on which the list of the classes is stored.

The CF procedure performed by the learner agents (see Fig. 1(a)). Let X_n be the set of the classes the agent a_n is affiliated to, and N_MAX the maximum number of classes an agent can analyze at time t, with N_MAX ≥ |X_n|. Besides, suppose that a_n stores into a cache the class profile of each class contacted in the past and the timestamp d of the last run of the CF procedure for that class. Let the timestamp ξ_n and χ_n ∈ [0, 1] be two thresholds fixed by the agent a_n. The rationale of the procedure for the learner agent is to improve the convenience in joining classes. Therefore, firstly the values of convenience are recalculated if older than ξ_n (lines 1-4). Then, candidate classes are sorted in decreasing order based on their convenience value (line 5). In the loops in lines 7-16 the N_MAX selected classes are processed: if the classes in the set L_good are not in the set X_n, then agent a_n could improve the convenience of its owner, provided that those classes accept the join request.

The CF procedure performed by the class agent (see Fig. 1(b)). Let K_c be the set of the agents affiliated to the class c, and K_MAX the maximum number of learners allowed within the class c (for convenience, K_MAX is assumed to be the same for all the classes and topics), with ||K_c|| ≤ K_MAX. Suppose that the class agent A_c stores into a cache the profile P of each user u managed by his/her learner agent a ∈ K_c and the timestamp d_u of its acquisition. The procedure run by A_c is triggered whenever a join request by a learner agent a_r (in the interest of u_r) is received by A_c (together with the profile P_r). Let the timestamp ω_c and π_c ∈ [0, 1] be two thresholds fixed by the agent A_c. If the overall skill grade of u_r is below V_c or the class has already reached the maximum size K_MAX, the request is rejected (lines 1-2). Otherwise, by lines 4-8 the class agent asks its students for their updated profiles in order to update their convenience η_{c,a} (lines 9-11), so that a new sorted set K_good ⊆ K_c ∪ {a_r} is built (line 12). Then, the class agent (i) sends a leave message to all the learner agents a whose convenience is below the threshold π_c, i.e. a ∉ K_good (lines 13-15), and (ii) accepts the request if a_r ∈ K_good (lines 16-18).

(a) Learner Agent procedure:

Input: X_n, N_MAX, ξ_n, χ_n; T
       Y: a random set of classes from C with |Y| ≤ N_MAX and X_n ∩ Y = ∅; Z = X_n ∪ Y
 1: for c ∈ Z : d_c > ξ_n do
 2:     send a message to A_c to retrieve the profile P_c
 3:     compute γ_{u_n,c}
 4: end for
 5: let L_good = {c_i ∈ Z : i ≤ j → γ_{u_n,c_i} ≥ γ_{u_n,c_j}, γ_{u_n,c_i} ≥ χ_n}, with |L_good| = N_MAX
 6: j ← 0
 7: for c ∈ L_good ∧ c ∉ X_n do
 8:     send a join request to A_c
 9:     if A_c accepts the request then
10:         j ← j + 1
11:     end if
12: end for
13: for c ∈ {X_n − L_good} ∧ j > 0 do
14:     send a leave message to c
15:     j ← j − 1
16: end for

(b) Class Agent procedure:

Input: K_c, K_MAX, ω_c, π_c, a_r; Z = K_c ∪ {a_r}
 1: if V(r, S_c) < V_c ∨ |K_c| ≥ K_MAX then
 2:     send a reject message to a_r
 3: else
 4:     for a ∈ K_c do
 5:         if d_u ≥ ω_c then
 6:             ask a for its updated profile
 7:         end if
 8:     end for
 9:     for a ∈ Z do
10:         compute η_{c,a}
11:     end for
12:     let K_good = {a ∈ Z : η_{c,a} ≥ π_c}
13:     for a ∈ K_c − K_good do
14:         send a leave message to a
15:     end for
16:     if a_r ∈ K_good then
17:         the request of a_r is accepted
18:     end if
19: end if

Fig. 1. CF algorithm. Top (a): Learner Agent. Bottom (b): Class Agent.
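As a complement to the listing in Fig. 1(a), the following Python sketch mirrors the structure of the learner-agent side of the CF procedure. It is a simplified, illustrative rendering: the callbacks get_convenience, request_join and request_leave stand in for the agent messaging of Section III, and their names, like the example values, are ours.

def cf_learner_step(joined, candidates, get_convenience,
                    request_join, request_leave, n_max, chi_n):
    # One step of the learner-agent CF procedure (cf. Fig. 1(a)).
    # joined: classes the agent is affiliated to (X_n);
    # candidates: X_n plus a random set of further classes (Z);
    # get_convenience(c): recomputes gamma_{u_n,c}, refreshing stale profiles;
    # n_max: maximum number of classes analyzed; chi_n: convenience threshold.

    # Lines 1-5: recompute the convenience values and keep the n_max best
    # candidates whose convenience is at least chi_n (the set L_good).
    scores = {c: get_convenience(c) for c in candidates}
    ranked = sorted(scores, key=scores.get, reverse=True)
    l_good = {c for c in ranked[:n_max] if scores[c] >= chi_n}

    # Lines 7-12: send a join request to each convenient class not joined yet.
    accepted = {c for c in l_good - joined if request_join(c)}

    # Lines 13-16: for each newly accepted class, leave one class that is
    # no longer convenient enough to keep.
    left, j = set(), len(accepted)
    for c in joined - l_good:
        if j == 0:
            break
        request_leave(c)
        left.add(c)
        j -= 1
    return (joined | accepted) - left

# Example with dummy callbacks (purely illustrative).
conv = {"c1": 0.8, "c2": 0.4, "c3": 0.7, "c4": 0.2}
new_membership = cf_learner_step(
    joined={"c2", "c4"}, candidates=set(conv),
    get_convenience=conv.get, request_join=lambda c: True,
    request_leave=lambda c: None, n_max=3, chi_n=0.5)
print(sorted(new_membership))   # ['c1', 'c3'] after leaving 'c2' and 'c4'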
V. EXPERIMENTS

In order to evaluate the described approach, we performed some experiments to investigate the convergence of the CF algorithm described in Section IV. As a measure of the internal convenience of a class c_j, we introduced the concept of Average Convenience (AC), computed as the average of all the convenience values η_{j,i} computed by c_j ∈ C for all its students u_i ∈ c_j. To measure the global convenience of all the classes, we computed the mean

MAC = Σ_{c_j ∈ C} AC_j / ||C||

and the standard deviation

DAC = sqrt( Σ_{c_j ∈ C} (AC_j − MAC)² / ||C|| )

A first test involved three scenarios consisting of 50, 100 and 200 e-Learning classes, as summarized in Table I. To compute the convenience, we assumed that 20% of the OSN members are unreliable. The behavioral coefficients h_req and h_off and the trust values (τ) have been sampled from a normal distribution [34] around specific mean and standard deviation (stdev) values, see Table I. In particular, τ_r is the mean of the generated trust values for reliable users, while τ_u is the mean for unreliable users. Moreover, for this set of experiments, the ratio r = (K_max · |C|) / (N_max · |U|) was set to 1. Besides, the starting composition of the classes is random. Table II shows the results of the execution of the CF algorithm for the three scenarios reported in Table I, giving the initial value of MAC/DAC (epoch T_0 = 0) and the final one (epoch T_e = 20). Indeed, we have verified that after 20 epochs of execution the MAC has reached a very stable value. It can be observed that the improvement, in terms of MAC, at the end of the experiments is about 8% for all the configurations and, since the ratio r = (K_max · |C|) / (N_max · |U|) is the same for the three scenarios, without relevant variations among them, the subsequent experiments were driven by r.

TABLE I
CF ALGORITHM. SIMULATION PARAMETERS

Sc.   |C|    |U|    K_Max   N_Max
1     50     200    20      5
2     100    400    20      5
3     200    800    20      5

        τ_r    τ_u    {h_req, h_off}
mean    0.8    0.3    0.5, 0.5
stdev   0.2    0.2    0.2, 0.2

TABLE II
RESULTS WITH r = 1.0

        Sc 1           Sc 2           Sc 3
        MAC    DAC     MAC    DAC     MAC    DAC
T_0     0.63   0.12    0.62   0.12    0.62   0.12
T_e     0.67   0.10    0.66   0.10    0.67   0.12

For the second set of experiments we assumed a variable value of r = (K_max · |C|) / (N_max · |U|), as shown in Table III, ranging from 0.1 to 0.9. A value r < 1 means that, overall, users can ask to join more places (N_max · |U|) than the total number allowed (K_max · |C|). In particular, the best improvements, in terms of MAC, are obtained for r = 0.4 (+20%), r = 0.5 (+16%) and r = 0.6 (+20%). This means that, on the one hand, finding a class that improves the personal convenience γ is a bit more difficult for the user when r < 1, and therefore the CF algorithm helps to improve the MAC with respect to the random composition of the classes. Nevertheless, the algorithm clearly needs a certain degree of freedom to give some benefit: when r is very small, the improvements in terms of MAC are comparable to those obtained for values of r close to 1. Overall, these results point out that the CF algorithm gives, on average, a relevant improvement of the convenience for the classes [35].

TABLE III
MAC AND DAC

        r=0.1          r=0.2          r=0.3          r=0.4          r=0.5
        MAC    DAC     MAC    DAC     MAC    DAC     MAC    DAC     MAC    DAC
T_0     0.61   0.07    0.59   0.02    0.60   0.03    0.60   0.04    0.60   0.08
T_e     0.61   0.08    0.63   0.08    0.69   0.04    0.70   0.10    0.73   0.06

        r=0.6          r=0.7          r=0.8          r=0.9
        MAC    DAC     MAC    DAC     MAC    DAC     MAC    DAC
T_0     0.60   0.07    0.63   0.09    0.63   0.09    0.62   0.11
T_e     0.70   0.09    0.69   0.08    0.68   0.08    0.67   0.10

In order to test the effectiveness of the trust model, we have verified, by simulations, that the class formation algorithm leads to high and stable values of average convenience. Simulations have also shown that the CF algorithm leads to significant benefits in terms of average quality of interactions.
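To make the aggregation of the convergence metrics used in this section concrete, a short Python sketch of the MAC and DAC computation is reported below; the per-class AC values are invented for illustration.

import math

def mac_dac(ac_values):
    # MAC: mean of the per-class Average Convenience values AC_j.
    # DAC: standard deviation of the AC_j values around the MAC.
    mac = sum(ac_values) / len(ac_values)
    dac = math.sqrt(sum((ac - mac) ** 2 for ac in ac_values) / len(ac_values))
    return mac, dac

# Hypothetical AC_j values for a handful of classes at a given epoch.
mac, dac = mac_dac([0.63, 0.60, 0.66, 0.61, 0.65])
print(round(mac, 3), round(dac, 3))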
VI. RELATED WORK

Group/class formation is an important task to promote EL activities and obtain effective results [36]. In particular, forming random groups/classes may cause absence of participation and motivation [37]. A recent survey on group/class formation [38] analyzes about 250 works; the authors found that about 20% of the studies on group formation in collaborative EL adopt probabilistic models, while the remaining ones rely on various AI techniques. Among them, an interesting work deals with strategies for group formation based on individual behaviors [39], obtained by monitoring communication data. The results show that the students' participation in small groups is correlated with their behavior in the class; therefore, the authors suggest using this information to heterogeneously allocate the initial classes into small groups. This partially differs from our approach, which is aimed at grouping individuals with similar behaviors, in terms of "positive" and "negative" interactions. Besides, a relevant component of our proposal are the trust relationships derived from OSN data, which are neglected in [39].

A recent survey [40] dealt with recommender systems for Technology Enhanced Learning (TEL). These systems recommend a wide variety of EL resources, but their basic requirements are different from other domains and, therefore, specific methods must be adopted to evaluate them. Our work includes a recommender system for learners focused on the interactions occurring among them. In [41], two new collaborative team leadership and operational models for EL, including indexes of trust, reflexivity and shared procedural knowledge, are proposed. They attempt to improve practice in EL in team-based lifelong learning projects. The authors state that EL teams benefit from collegial participation in a trusted environment. Also, social skills and knowledge sharing are considered key aspects that, through collegiality and mutual trust, enable to build innovative, fast-moving EL projects.

In [42] the state of the art of the "socialization" of EL activities is analyzed and an automated approach to find proper people to form EL groups in OSNs is described. As in our work, it is considered that, in addition to the common criteria used to form groups, OSNs give access to a myriad of relationship data. By means of these data, suitable metrics can be created to weight the "edges" between users. The proposed algorithm to form groups simply explores the whole OSN to find a minimal number of proper candidates to form a group able to optimize a group EL experience. Differently, we exploit the concept of trust by combining reliability and reputation. Finally, in [43] the student use of Facebook at the University of Cape Town is analyzed, showing positive benefits in building EL micro-communities on Facebook, although certain existing challenges, including ICT literacy and uneven access, remain open.

VII. CONCLUSIONS AND FUTURE WORK

Class formation in e-Learning is a critical task for the quality of such activities. In this work we focused on a distributed algorithm, supported by a trust model and some behavioral measures based on information coming from the OSN (i.e. users' trust relationships, interaction quality, historical attitude to interact with peers), to improve the metrics for dynamic class composition in OSNs. This flexibility is aimed at improving the quality of learning experiences and it is obtained by combining information about trust and previous interactions in a unique measure named "convenience". In this work we have shown a first set of experimental results obtained by simulating an artificial scenario with a variable number of users and groups. The results have shown that the class formation algorithm leads to high and stable values of average convenience.

As future work, we will perform a further experimental campaign in order to verify that the convergence to high values of convenience leads to significant benefits in terms of average quality of interactions. Moreover, a further set of experiments is needed to verify the effectiveness of the trust model in limiting malicious behaviors, in order to give trust values which reflect the actual behavior, in terms of overall reliability, of the students.
REFERENCES

[1] J. L. Moore, C. Dickson-Deane, and K. Galyen, "e-Learning, online learning, and distance learning environments: Are they the same?" The Internet and Higher Education, vol. 14, no. 2, pp. 129–135, 2011.
[2] J. Mason and P. Lefrere, "Trust, collaboration, e-learning and organisational transformation," Int. J. of Training and Development, vol. 7, no. 4, pp. 259–270, 2003.
[3] https://www.facebook.com, 2016.
[4] https://www.twitter.com, 2016.
[5] P. Grabowicz, L. Aiello, V. Eguiluz, and A. Jaimes, "Distinguishing topical and social groups based on common identity and bond theory," in Proc. of the ACM Int. Conf. WSDM 2013. ACM, 2013, pp. 627–636.
[6] L. Backstrom, D. Huttenlocher, J. Kleinberg, and X. Lan, "Group formation in large social networks: Membership, growth, and evolution," in Proc. of the 12th ACM SIGKDD Int. Conf. ACM, 2006, pp. 44–54.
[7] S. Kairam, D. Wang, and J. Leskovec, "The life and death of online groups: Predicting group growth and longevity," in Proc. of the 5th ACM Int. Conf. on Web Search and Data Mining. ACM, 2012, pp. 673–682.
[8] F. Messina, G. Pappalardo, D. Rosaci, C. Santoro, and G. M. L. Sarné, "Hyson: A distributed agent-based protocol for group formation in online social networks," in Multiagent System Technologies. Springer Berlin Heidelberg, 2013, pp. 320–333.
[9] P. De Meo, F. Messina, D. Rosaci, and G. M. L. Sarné, "Improving the compactness in social network thematic groups by exploiting a multi-dimensional user-to-group matching algorithm," in Intelligent Networking and Collaborative Systems (INCoS), 2014 International Conference on. IEEE, 2014, pp. 57–64.
[10] V. Vasuki, N. Natarajan, Z. Lu, B. Savas, and I. Dhillon, "Scalable affiliation recommendation using auxiliary networks," ACM Trans. on Intelligent Systems and Technology, vol. 3, no. 1, p. 3, 2011.
[11] J. Gorla, N. Lathia, S. Robertson, and J. Wang, "Probabilistic group recommendation via information matching," in Proc. of the Int. World Wide Web Conf. (WWW '13). ACM Press, 2013, pp. 495–504.
[12] F. Messina, G. Pappalardo, D. Rosaci, C. Santoro, and G. M. L. Sarné, "A distributed agent-based approach for supporting group formation in p2p e-learning," in AI*IA 2013: Advances in Artificial Intelligence. Springer International Publishing, 2013, pp. 312–323.
[13] P. De Meo, E. Ferrara, D. Rosaci, and G. M. L. Sarné, "Trust and compactness in social network groups," IEEE Transactions on Cybernetics, vol. 45, no. 2, pp. 205–216, Feb 2015.
[14] W. Tan, S. Chen, J. Li, L. Li, T. Wang, and X. Hu, "A trust evaluation model for e-learning systems," Systems Research and Behavioral Science, vol. 31, no. 3, pp. 353–365, 2014.
[15] A. Comi, L. Fotia, F. Messina, G. Pappalardo, D. Rosaci, and G. M. L. Sarné, "Forming homogeneous classes for e-learning in a social network scenario," in Intelligent Distributed Computing IX. Springer International Publishing, 2016, pp. 131–141.
[16] F. Messina, G. Pappalardo, D. Rosaci, C. Santoro, and G. M. L. Sarné, "A trust model for competitive cloud federations," Complex, Intelligent, and Software Intensive Systems (CISIS), pp. 469–474, 2014.
[17] F. Messina, G. Pappalardo, D. Rosaci, and G. M. L. Sarné, "A trust-based, multi-agent architecture supporting inter-cloud VM migration in IaaS federations," in Internet and Distributed Computing Systems. Springer International Publishing, 2014, pp. 74–83.
[18] A. Comi, L. Fotia, F. Messina, D. Rosaci, and G. M. L. Sarné, "A QoS-aware, trust-based aggregation model for grid federations," in On the Move to Meaningful Internet Systems: OTM 2014 Conferences. Springer Berlin Heidelberg, 2014, pp. 277–294.
[19] T. Snijders, "Network dynamics," in The Handbook of Rational Choice Social Research. Stanford University Press, 2013, pp. 252–279.
[20] D. Rosaci and G. M. L. Sarné, "Matching users with groups in social networks," in Intelligent Distributed Computing VII. Springer, 2013, pp. 45–54.
[21] A. Comi, L. Fotia, F. Messina, G. Pappalardo, D. Rosaci, and G. M. L. Sarné, "Using semantic negotiation for ontology enrichment in e-learning multi-agent systems," in Complex, Intelligent, and Software Intensive Systems (CISIS), 2015 Ninth International Conference on. IEEE, 2015, pp. 474–479.
[22] ——, "Supporting knowledge sharing in heterogeneous social network thematic groups," in Complex, Intelligent, and Software Intensive Systems (CISIS), 2015 Ninth International Conference on. IEEE, 2015, pp. 480–485.
[23] P. De Meo, F. Messina, G. Pappalardo, D. Rosaci, and G. M. L. Sarné, "Similarity and trust to form groups in online social networks," in On the Move to Meaningful Internet Systems: OTM 2015 Conferences. Springer International Publishing, 2015, pp. 57–75.
[24] A. Comi, L. Fotia, F. Messina, G. Pappalardo, D. Rosaci, and G. M. L. Sarné, "An evolutionary approach for cloud learning agents in multi-cloud distributed contexts," in Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), 2015 IEEE 24th International Conference on. IEEE, 2015, pp. 99–104.
[25] P. De Meo, F. Messina, D. Rosaci, and G. M. L. Sarné, "Recommending users in social networks by integrating local and global reputation," in Internet and Distributed Computing Systems. Springer International Publishing, 2014, pp. 437–446.
[26] A. Comi, L. Fotia, F. Messina, G. Pappalardo, D. Rosaci, and G. M. L. Sarné, "A distributed reputation-based framework to support communication resources sharing," in Intelligent Distributed Computing IX. Springer International Publishing, 2016, pp. 211–221.
[27] H. Songhao, S. Kenji, K. Takara, and M. Takashi, "Towards new collaborative e-learning and learning community using portfolio assessment," in World Conf. on E-Learning in Corporate, Government, Healthcare, and Higher Education, vol. 2008, no. 1, 2008, pp. 1270–1275.
[28] S. Franklin and A. Graesser, "Is it an agent, or just a program?: A taxonomy for autonomous agents," in Intelligent Agents III: Agent Theories, Architectures, and Languages. Springer, 1997, pp. 21–35.
[29] T. Grandison and M. Sloman, "Trust management tools for internet applications," in Trust Management. Springer, 2003, pp. 91–107.
[30] A. Abdul-Rahman and S. Hailes, "Supporting trust in virtual communities," in HICSS '00: Proc. of the 33rd Hawaii Int. Conf. on System Sciences, vol. 6. IEEE Computer Society, 2000.
[31] G. Zacharia and P. Maes, "Trust management through reputation mechanisms," Applied Artificial Intelligence, vol. 14, no. 9, pp. 881–907, 2000.
[32] S. Garruzzo, D. Rosaci, and G. M. L. Sarné, "MASHA-EL: A multi-agent system for supporting adaptive e-learning," in Tools with Artificial Intelligence, 2007. ICTAI 2007. 19th IEEE International Conference on, vol. 2. IEEE, 2007, pp. 103–110.
[33] M. Wooldridge and N. Jennings, "Intelligent agents: Theory and practice," The Knowledge Engineering Review, vol. 10, no. 2, pp. 115–152, 1995.
[34] K. Hopkins, G. Glass, and B. Hopkins, Basic Statistics for the Behavioral Sciences. Prentice-Hall, 1987.
[35] S. Garruzzo, D. Rosaci, and G. M. L. Sarné, "ISABEL: A multi agent e-learning system," in Intelligent Agent Technology, 2007. IAT '07. IEEE/WIC/ACM International Conference on. IEEE, 2007, pp. 485–488.
[36] F. Rennie and T. Morrison, E-learning and Social Networking Handbook: Resources for Higher Education. Routledge, 2013.
[37] P. Dillenbourg, "Over-scripting CSCL: The risks of blending collaborative learning with instructional design," in Three Worlds of CSCL. Can We Support CSCL?, pp. 61–91, 2002.
[38] W. Cruz and S. Isotani, "Group formation algorithms in collaborative learning contexts: A systematic mapping of the literature," in Collaboration and Technology. Springer, 2014, pp. 199–214.
[39] N. Jahng and M. Bullen, "Exploring group forming strategies by examining participation behaviours during whole class discussions," European J. of Open, Distance and E-Learning, 2012.
[40] M. Erdt, A. Fernandez, and C. Rensing, "Evaluating recommender systems for technology enhanced learning: A quantitative survey," IEEE Trans. on Learning Technologies, vol. 8, no. 4, pp. 326–344, 2015.
[41] J. Jameson, G. Ferrell, J. Kelly, S. Walker, and M. Ryan, "Building trust and shared knowledge in communities of e-learning practice: Collaborative leadership in the JISC eLISA and CAMEL lifelong learning projects," British J. of Educational Technology, vol. 37, no. 6, pp. 949–967, 2006.
[42] S. Brauer and T. C. Schmidt, "Group formation in elearning-enabled online social networks," in Interactive Collaborative Learning (ICL), 2012 15th Int. Conf. on. IEEE, 2012, pp. 1–8.
[43] T. E. Bosch, "Using online social networking for teaching and learning: Facebook use at the University of Cape Town," Comm.: South African J. for Comm. Theory and Research, vol. 35, no. 2, pp. 185–200, 2009.