AFEL-REC: A Recommender System for Providing Learning Resource Recommendations in Social Learning Environments

Dominik Kowald, Emanuel Lacic, Dieter Theiler & Elisabeth Lex
Know-Center GmbH & Graz University of Technology, Austria
{dkowald,elacic,dtheiler}@know-center.at, elisabeth.lex@tugraz.at

ABSTRACT
In this paper, we present preliminary results of AFEL-REC, a recommender system for social learning environments. AFEL-REC is built upon a scalable software architecture to provide recommendations of learning resources in near real-time. Furthermore, AFEL-REC can cope with any kind of data that is present in social learning environments, such as resource metadata, user interactions or social tags. We provide a preliminary evaluation of three recommendation use cases implemented in AFEL-REC and find that utilizing social data in the form of tags is helpful for improving not only recommendation accuracy but also coverage. This paper should be valuable for both researchers and practitioners interested in providing resource recommendations in social learning environments.

KEYWORDS
Social Recommender Systems; Social Learning Environments; Analytics for Everyday Learning; Collaborative Filtering; Coverage

1 INTRODUCTION
Recommender systems aim to predict whether a specific user will like a specific resource. To do so, they analyze past usage behavior (e.g., clicks or likes) with the goal of generating a personalized list of potentially relevant resources [15]. Nowadays, recommender systems are part of many applications, such as online marketplaces (e.g., Amazon), movie streaming services (e.g., Netflix), job portals (e.g., LinkedIn), and Technology Enhanced Learning (TEL) environments (e.g., Coursera).

Especially in the field of TEL, recommender systems have become an important research area over the past decade [1]. One of the many examples in this area is the CoFIND system [2], which guides learners to resources that were found useful by other learners in the past. Other examples include the work of [13], who proposed a recommendation approach with query extraction mechanisms, or the work of [4], who enhanced Collaborative Filtering (CF) [5] by taking into account the learner's evolution over time. Another recent strand is research on context-aware recommender approaches for TEL, in which contextual information, such as the location [19], is incorporated into the recommendation process. In this respect, social learning environments [18], which aim to support users in learning through the observation of other users' behaviors, bear great potential for recommender systems as they provide a vast amount of social information. This includes, for example, friendship connections, group memberships or social tags, which are freely-chosen keywords used for collaboratively annotating learning resources [8]. Although a lot of recommender systems and algorithms are available in the TEL area, there is still a lack of research on recommender systems specifically tailored for social learning environments (see, e.g., [3]).

Therefore, in the course of the H2020 project Analytics for Everyday Learning (AFEL, http://afel-project.eu/), we have developed AFEL-REC, a recommender system for social learning environments. AFEL-REC is built upon a scalable software architecture to support various use cases for providing recommendations of learning resources in near real-time (see Section 2). We conducted a preliminary evaluation of AFEL-REC using data gathered from the Spanish social learning environment Didactalia (https://didactalia.net/) (see Section 3). Taken together, our contributions are twofold (see also Section 4):
(1) We present AFEL-REC and use cases for providing resource recommendations in social learning environments.
(2) We demonstrate that social information, such as social tags, can be used to improve the accuracy and coverage of recommendations in social learning environments.
We believe that our work contributes to the under-researched portfolio of recommender systems for social learning environments. Furthermore, we present an overview of use cases and preliminary evaluation results, which should be valuable for both researchers and practitioners interested in providing resource recommendations in social learning environments.

Copyright © CIKM 2018 for the individual papers by the papers' authors. Copyright © CIKM 2018 for the volume as a collection by its editors. This volume and its papers are published under the Creative Commons License Attribution 4.0 International (CC BY 4.0).

2 APPROACH
In this section, we present our AFEL-REC system by providing a detailed description of its software architecture as well as potential use cases that can be realized for recommending resources in social learning environments.

2.1 System Overview
The software architecture of AFEL-REC is based on the scalable recommendation framework ScaR (http://scar.know-center.tugraz.at/) [11]. It is depicted in Figure 1 and consists of the following main modules:

[Figure 1: The scalable AFEL-REC software architecture based on the powerful open-source frameworks Apache Solr and Apache ZooKeeper. The main modules of AFEL-REC communicate via REST-based Web services.]

Service Provider (SP). The SP acts as a proxy for social learning environments to access AFEL-REC. Thus, it provides REST-based Web services that enable clients to query recommendations and to add new data (e.g., user interactions or learning resources) to the recommender system.
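For illustration purposes, the following minimal Python sketch shows how a client (e.g., a social learning environment) might call such REST-based Web services. The base URL, endpoint paths, parameter names and response fields are assumptions made for this example and do not describe the actual AFEL-REC API.

```python
import requests

# Placeholder base URL of the Service Provider (SP); not a real deployment.
SP_BASE_URL = "https://afel-rec.example.org/api"

def add_interaction(user_id: str, resource_id: str, interaction_type: str = "click") -> None:
    """Push a new user interaction to the recommender system via the SP.
    Endpoint name and payload schema are illustrative assumptions."""
    payload = {"user": user_id, "resource": resource_id, "type": interaction_type}
    response = requests.post(f"{SP_BASE_URL}/interactions", json=payload, timeout=5)
    response.raise_for_status()

def get_recommendations(user_id: str, use_case: str = "UC2", k: int = 10) -> list:
    """Query a ranked list of learning resources for a given user and use case."""
    params = {"user": user_id, "useCase": use_case, "count": k}
    response = requests.get(f"{SP_BASE_URL}/recommendations", params=params, timeout=5)
    response.raise_for_status()
    return response.json().get("resources", [])

if __name__ == "__main__":
    add_interaction("learner_42", "resource_1337")
    print(get_recommendations("learner_42", use_case="UC3", k=5))
```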
Data Modification Layer (DML) & Solr. The DML encapsulates all CRUD operations (i.e., create, retrieve, update, delete) in one module and therefore enables easy access to the underlying data backend. As depicted in Figure 1, AFEL-REC uses the high-performance search platform Apache Solr (http://lucene.apache.org/solr/) as its data backend. This solution not only guarantees scalability and near real-time recommendations but also supports multiple data sources. While most recommender systems rely on rating-based data, AFEL-REC is also capable of processing relevant social information such as tags.
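To make the Solr-backed data layer more concrete, the sketch below indexes a learning resource and retrieves resources matching a social tag via Solr's standard HTTP API (the /update and /select request handlers). The core name and document fields are assumptions for illustration and do not reflect the actual AFEL-REC schema.

```python
import requests

# Placeholder Solr core; the actual AFEL-REC collections and fields may differ.
SOLR_CORE_URL = "http://localhost:8983/solr/learning_resources"

def index_resource(resource_id: str, title: str, tags: list) -> None:
    """Add (or update) a learning resource document and commit it so that it
    becomes searchable in near real-time."""
    doc = {"id": resource_id, "title": title, "tags": tags}
    response = requests.post(
        f"{SOLR_CORE_URL}/update",
        params={"commit": "true"},
        json=[doc],
        timeout=5,
    )
    response.raise_for_status()

def resources_with_tag(tag: str, rows: int = 10) -> list:
    """Retrieve resources annotated with a given social tag via the /select handler."""
    params = {"q": f'tags:"{tag}"', "rows": rows, "wt": "json"}
    response = requests.get(f"{SOLR_CORE_URL}/select", params=params, timeout=5)
    response.raise_for_status()
    return response.json()["response"]["docs"]

if __name__ == "__main__":
    index_resource("resource_1337", "Introduction to Fractions", ["math", "fractions"])
    print(resources_with_tag("fractions"))
```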
Recommender Engine (RE). The RE is the heart of AFEL-REC and is responsible for calculating recommendations. As we are using Apache Solr, we can benefit from its built-in data structures for efficiently calculating user and resource similarities, and for ranking resources based on their relevance for a specific recommendation context. In Section 2.2, we discuss possible use cases and algorithms that can be realized using AFEL-REC's recommender engine.

Recommender Customizer (RC). The RC is used to change the parameters (e.g., the neighborhood size n) of the recommendation approaches on the fly. Thus, it holds a so-called recommendation profile for each approach, which can be accessed and changed by the system administrator. These changes are then broadcast to the RE so that it is aware of how a specific approach should be executed.

Recommender Evaluator (REV). The REV is responsible for evaluating the recommendation algorithms implemented in the RE. Thus, it can be executed to perform an offline evaluation with training/test set splits (see Section 3) or an online evaluation with A/B tests.

ZooKeeper. We are using Apache ZooKeeper (https://zookeeper.apache.org/) for handling the communication between the modules and for offering horizontal scaling. Thus, in cases in which we observe a high request load, we can start multiple instances of the same module (indicated by the arrows in the DML and RE modules in Figure 1).

2.2 Use Cases
Using this software architecture, AFEL-REC is capable of supporting seven use cases for providing recommendations in social learning environments:

UC1: Recommendation of Popular Resources in the Social Learning Environment. The first use case is a non-personalized one and is especially useful for new users of a social learning environment without any user interactions so far (i.e., cold-start users [16]). Thus, it is typically realized using a MostPopular algorithm. This approach recommends learning resources that are weighted and ranked by their number of interactions. As mentioned, the MostPopular approach is non-personalized and thus, each user receives the same recommendations.

UC2: Recommendation of Resources That Like-Minded Users Have Interacted With. The second use case is a personalized one and thus analyzes past user interactions to specifically tailor recommendations towards learners. Collaborative Filtering (CF) algorithms are typically chosen to realize such a use case [5]. CF approaches analyze the interactions between users and items (e.g., learning resources) and recommend those items to a given user that similar users have interacted with in the past. More specifically, in CF methods two users are treated as similar if they have liked the same items in the past, which in turn allows us to assume that these two users will also like the same (or similar) items in the future.

UC3: Recommendation of Resources Based on Social Information. This use case is similar to UC2, but this time two users are treated as similar if they have shared some social information in the past. This social information can be friendship connections, group memberships or social tags. We hypothesize that social information is capable of providing a richer semantic representation of a user's interests than pure interaction data. Therefore, we also think that this should positively influence the coverage of the recommendations, as validated in Section 3 of this paper.
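UC2 and UC3 differ only in the user profile that drives the neighborhood computation. The following minimal user-based CF sketch illustrates this; the Jaccard set similarity and the scoring scheme are chosen purely for illustration, since the actual AFEL-REC implementation relies on Solr's built-in similarity computations.

```python
from collections import defaultdict

def jaccard(a: set, b: set) -> float:
    """Set overlap used here as a simple user-user similarity."""
    return len(a & b) / len(a | b) if a or b else 0.0

def user_based_cf(target: str, profiles: dict, interactions: dict, n: int = 20, k: int = 10) -> list:
    """Recommend up to k resources for `target`.
    profiles:     user -> set of profile items; interacted resources for CF_i (UC2)
                  or social tags for CF_t (UC3).
    interactions: user -> set of resources the user has interacted with.
    n:            neighborhood size (cf. Section 3.3, where n = 20 is used)."""
    neighbors = sorted(
        ((jaccard(profiles[target], profiles[u]), u) for u in profiles if u != target),
        reverse=True,
    )[:n]
    scores = defaultdict(float)
    seen = interactions.get(target, set())
    for sim, u in neighbors:
        # Resources of similar users, weighted by their similarity to the target user.
        for resource in interactions.get(u, set()) - seen:
            scores[resource] += sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

# CF_i (UC2): profiles are the interacted resources themselves.
# CF_t (UC3): profiles are the social tags a user has applied.
```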
UC4: Recommendation of Resources That Are Similar to the Resources the User Has Interacted With. One disadvantage of UC2 and UC3 is that they can only be applied to resources that already have user interactions or social information attached to them. This means that cold-start resources without any user interactions or social information cannot be recommended. To overcome this, UC4 aims at utilizing resource similarities for personalized recommendations by using a Content-based Filtering (CBF) approach [12]. CBF methods use resource features such as categories or description texts for calculating similarities between resources. Then, the most similar resources to the resources the given user has interacted with are recommended (a minimal sketch of such a content-based similarity computation is given after this list of use cases).

UC5: Recommendation of (Alternative) Resources for a Specific Resource. This use case is related to UC4 but provides contextualized recommendations instead of personalized ones. This means that recommendations are not based on the learner but on a specific resource, for which alternative resources are found. Thus, similar to UC4, the most similar resources for the given resource are recommended using CBF.

UC6: Recommendation of (Alternative) Resources for a Specific User and a Specific Resource. The next use case also focuses on recommending alternative resources for a specific resource, but this time in a personalized manner. Such a use case can be implemented using a contextualized CF approach. This means that we search for similar users of the target user who have also interacted with the target resource. Thus, this use case is similar to Amazon's "Users who bought this, also bought that" recommender.

UC7: Recommendation of Resources for a Specific User and a Specific Learning Goal. Finally, the last use case tackles recommendations in a personalized and adaptive manner. While UC1 to UC6 focus on providing relevant recommendations, they neglect the learning goal of the user. Such a learning goal could be the aim to focus on a specific topic or to receive more difficult learning resources. To realize adaptive recommendations, the suggested resources could be re-ranked using a feature boosting technique. For example, if the learning goal is to study more difficult resources, then resources with a higher complexity (e.g., measured via a readability score) should be boosted and easy ones should be down-ranked in the recommendation list.
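As referenced in UC4, the following is a minimal sketch of a content-based similarity computation over resource description texts using TF-IDF and cosine similarity. The use of scikit-learn and the example fields are assumptions made for illustration; they are not part of AFEL-REC, which computes similarities within Apache Solr.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def most_similar_resources(descriptions: dict, resource_id: str, k: int = 5) -> list:
    """Return the k resources whose description texts are most similar to the
    given resource (UC5); the same similarities can serve UC4 for cold-start resources."""
    ids = list(descriptions)
    tfidf = TfidfVectorizer(stop_words="english").fit_transform([descriptions[i] for i in ids])
    sims = cosine_similarity(tfidf[ids.index(resource_id)], tfidf).ravel()
    ranked = sorted(zip(ids, sims), key=lambda x: x[1], reverse=True)
    return [(i, round(float(s), 3)) for i, s in ranked if i != resource_id][:k]

if __name__ == "__main__":
    descriptions = {
        "r1": "introduction to fractions and decimal numbers",
        "r2": "advanced exercises on fractions",
        "r3": "history of the roman empire",
    }
    print(most_similar_resources(descriptions, "r1", k=2))
```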
3 PRELIMINARY EVALUATION
In this section, we present preliminary evaluation results for AFEL-REC. The aim of this evaluation is twofold: (i) we want to show that AFEL-REC is capable of providing recommendations using data gathered from a real-world social learning environment, and (ii) we want to show that social information can enhance the prediction accuracy and coverage of recommendations. To do so, we focus on UC1 to UC3 presented in the previous section.

3.1 Data
We collected our data from the social learning environment Didactalia between the 26th of February 2017 and the 28th of May 2018. This included 1,879,761 user interactions (i.e., clicks on learning resources) by 1,274,858 users on 35,346 learning resources. This resulted in 1.47 interactions per user and 53.18 interactions per learning resource on average. Additionally, 485,295 social tags were applied to these learning resources, which resulted in 13.73 tags per resource on average. The full statistics of our dataset are shown in Table 1. To date, the only social information we are using in our data are tags, but we are planning to extend this by also incorporating connections between users or group memberships [10].

Table 1: Statistics of our dataset, which was collected from the social learning environment Didactalia.
  Number of interactions (i.e., clicks)                  1,879,761
  Number of users                                        1,274,858
  Number of learning resources                              35,346
  Number of social tags                                    485,295
  Average number of interactions per user                     1.47
  Average number of interactions per learning resource       53.18
  Average number of tags per learning resource               13.73

3.2 Evaluation Method and Metrics
For evaluating AFEL-REC, we split our dataset into training and test sets. Here, we followed common practice in the research areas of recommender systems and information retrieval by using the most recent 20% of interactions of each user for testing and the remaining 80% for training. This dataset splitting technique ensures that the chronological order of the data is preserved and thus, that the future is predicted based on past user interactions.

For measuring the accuracy of the recommendations, we use a rich set of metrics, namely Recall (R@20, measured for k = 20 recommended resources), Precision (P@1, for k = 1), F1-score (F1@10, for k = 10), Mean Reciprocal Rank (MRR@20, for k = 20), Mean Average Precision (MAP@20, for k = 20) and normalized Discounted Cumulative Gain (nDCG@20, for k = 20) [17]. Furthermore, we also report the coverage (C) of the recommendations, measuring the fraction of users for whom the algorithm is capable of producing any recommendations.
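The following is a minimal sketch of this evaluation protocol, assuming timestamped interaction tuples: a per-user chronological 80/20 split plus Recall@k, MRR@k and coverage. The exact averaging and tie-breaking details of the REV are not specified here, and the full evaluation additionally reports Precision, F1, MAP and nDCG.

```python
from collections import defaultdict

def chronological_split(interactions, test_ratio=0.2):
    """interactions: list of (user, resource, timestamp) tuples.
    Keeps the most recent `test_ratio` of each user's interactions for testing."""
    per_user = defaultdict(list)
    for user, resource, ts in interactions:
        per_user[user].append((ts, resource))
    train, test = {}, {}
    for user, events in per_user.items():
        events.sort()  # oldest first, so the future is predicted from the past
        cut = max(1, int(round(len(events) * (1 - test_ratio))))
        train[user] = {r for _, r in events[:cut]}
        test[user] = {r for _, r in events[cut:]}
    return train, test

def recall_at_k(recommended, relevant, k=20):
    return len(set(recommended[:k]) & relevant) / len(relevant) if relevant else 0.0

def mrr_at_k(recommended, relevant, k=20):
    for rank, resource in enumerate(recommended[:k], start=1):
        if resource in relevant:
            return 1.0 / rank
    return 0.0

def evaluate(recommend, test, k=20):
    """recommend: function mapping a user to a ranked resource list (may be empty).
    Returns mean Recall@k, mean MRR@k and user coverage C."""
    recalls, mrrs, covered = [], [], 0
    users = [u for u in test if test[u]]
    for user in users:
        recs = recommend(user)
        if recs:
            covered += 1
        recalls.append(recall_at_k(recs, test[user], k))
        mrrs.append(mrr_at_k(recs, test[user], k))
    n = len(users) or 1
    return sum(recalls) / n, sum(mrrs) / n, covered / n
```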
3.3 Recommendation Approaches
We evaluated UC1 - UC3 presented in Section 2.2 to show (i) the general usefulness of AFEL-REC for providing recommendations in social learning environments, and (ii) that social information in the form of tags is helpful for improving the recommendation accuracy and coverage. In the future, we will also evaluate UC4 - UC7.

The MP (MostPopular) approach refers to UC1 and is a non-personalized algorithm, which recommends the most frequently used learning resources in the system. This algorithm also works for cold-start users [16] and thus should reach a coverage (C) of 100%.

The CFi algorithm refers to UC2 and calculates the neighborhood of a user on the basis of interaction data (i.e., clicks). Thus, two users are treated as similar if they have interacted with the same learning resources in the past [5]. Based on [6], we used a neighborhood size of n = 20 users.

Similar to CFi, CFt is a Collaborative Filtering-based approach but, as discussed in UC3, this one calculates the user neighborhood based on social tags. Thus, two users are treated as similar if they have used the same social tags in the past [14].
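As a concrete illustration of the MP baseline (the CF variants differ only in the user profile used for the neighborhood computation, cf. the CF sketch in Section 2.2), a minimal MostPopular implementation simply counts interactions per resource; since the resulting ranking does not depend on the target user, it can serve every user and thus reaches a coverage of 100%. The function and variable names below are illustrative only.

```python
from collections import Counter

def most_popular(interactions, k=20):
    """interactions: iterable of (user, resource) pairs.
    Returns the k resources with the most interactions, i.e., the same
    non-personalized recommendation list for every user (UC1 / MP)."""
    counts = Counter(resource for _, resource in interactions)
    return [resource for resource, _ in counts.most_common(k)]

if __name__ == "__main__":
    log = [("u1", "r1"), ("u2", "r1"), ("u2", "r2"), ("u3", "r3"), ("u4", "r1")]
    print(most_popular(log, k=2))  # the two most clicked resources, e.g., ['r1', 'r2']
```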
3.4 Results
The preliminary results of our evaluation are shown in Table 2. While the lowest accuracy estimates are provided by the non-personalized MP approach (UC1), this approach also provides the highest coverage (C) of 100%. This means that MP is capable of providing recommendations for all users in the social learning environment.

When looking at the next algorithm, CFi (i.e., CF on the basis of user interactions, UC2), we notice that this approach provides approx. 10 times higher accuracy values than MP. We attribute this to the personalization factor of the algorithm. However, one drawback of CFi is the rather small coverage of 40%, which means that it cannot generate any recommendations for 60% of the users.

Finally, CFt (i.e., CF on the basis of social tags, UC3) provides not only the best results with respect to recommendation accuracy but also a larger coverage (C) than CFi. This result shows that social information can indeed help to improve recommendations in social learning environments.

Table 2: Preliminary results of our evaluation of AFEL-REC. We see that CFt provides not only a better recommendation accuracy but also coverage (C) than CFi, which means that social information in the form of tags is helpful for improving recommendations in social learning environments.
  Approach    R@20   P@1    F1@10  MRR@20  MAP@20  nDCG@20  C
  UC1: MP     .007   .002   .002   .002    .002    .003     100%
  UC2: CFi    .046   .022   .012   .025    .026    .032     40%
  UC3: CFt    .070   .027   .016   .034    .035    .044     53%

4 CONCLUSION AND FUTURE WORK
In this paper, we presented AFEL-REC, a recommender system for social learning environments. AFEL-REC is built upon a scalable software architecture based on powerful open-source frameworks such as Apache Solr and Apache ZooKeeper and thus is capable of providing near real-time recommendations of learning resources. We have demonstrated the usefulness of AFEL-REC by discussing use cases that can be realized with it and by providing preliminary evaluation results using data gathered from the Spanish social learning environment Didactalia. Our evaluation results show that social information in the form of tags can be used to enhance the accuracy and coverage of learning resource recommendations. These findings should be of interest for both researchers and practitioners interested in providing resource recommendations in social learning environments.

Future Work. One limitation of this work is that we have only investigated tags as potential social information for learning resource recommendations. Thus, for future work, we plan to extend AFEL-REC by also considering additional types of social information such as friendship connections or group memberships. Also, we plan to include our cognitive-inspired tag recommender algorithms, which have been specifically useful in the context of TEL [7, 9]. Finally, we will also evaluate the remaining four use cases (UC4 - UC7), which we have not tackled so far. This evaluation will then also be conducted in an online manner using a project-wide user study.

Acknowledgments. The authors would like to thank Didactalia and the AFEL consortium. This work was supported by the Know-Center GmbH Graz (Austrian FFG COMET Program) and the EU-funded H2020 project AFEL (GA: 687916).

REFERENCES
[1] Hendrik Drachsler, Katrien Verbert, Olga C. Santos, and Nikos Manouselis. 2015. Panorama of recommender systems to support learning. In Recommender Systems Handbook. Springer, 421–451.
[2] Jon Dron, Richard Mitchell, Phil Siviter, and Chris Boyne. 2000. CoFIND - an experiment in n-dimensional collaborative filtering. Journal of Network and Computer Applications 23, 2 (2000), 131–142.
[3] Sandy El Helou, Christophe Salzmann, and Denis Gillet. 2010. The 3A Personalized, Contextual and Relation-based Recommender System. J. UCS 16 (2010).
[4] Mercedes Gomez-Albarran and Guillermo Jimenez-Diaz. 2009. Recommendation and students' authoring in repositories of learning objects: A case-based reasoning approach. International Journal of Emerging Technologies in Learning (iJET) 4, 2009 (2009), 35–40.
[5] Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen, and John T. Riedl. 2004. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS) 22, 1 (2004), 5–53.
[6] Simone Kopeinik, Dominik Kowald, Ilire Hasani-Mavriqi, and Elisabeth Lex. 2017. Improving Collaborative Filtering Using a Cognitive Model of Human Category Learning. The Journal of Web Science 2, 1 (2017).
[7] Simone Kopeinik, Elisabeth Lex, Paul Seitlinger, Dietrich Albert, and Tobias Ley. 2017. Supporting Collaborative Learning with Tag Recommendations: A Real-world Study in an Inquiry-based Classroom Project. In Proceedings of the Seventh International Learning Analytics and Knowledge Conference (LAK '17). ACM, New York, NY, USA, 409–418. DOI: http://dx.doi.org/10.1145/3027385.3027421
[8] Dominik Kowald, Simone Kopeinik, Paul Seitlinger, Tobias Ley, Dietrich Albert, and Christoph Trattner. 2015. Refining frequency-based tag reuse predictions by means of time and semantic context. In Mining, Modeling, and Recommending 'Things' in Social Media. Springer, 55–74.
[9] Dominik Kowald, Subhash Chandra Pujari, and Elisabeth Lex. 2017. Temporal Effects on Hashtag Reuse in Twitter: A Cognitive-Inspired Hashtag Recommendation Approach. In Proceedings of the 26th International Conference on World Wide Web (WWW '17). 1401–1410.
[10] Emanuel Lacic, Dominik Kowald, Lukas Eberhard, Christoph Trattner, Denis Parra, and Leandro Balby Marinho. 2015. Utilizing online social network and location-based data to recommend products and categories in online marketplaces. In Mining, Modeling, and Recommending 'Things' in Social Media. Springer.
[11] Emanuel Lacic, Matthias Traub, Dominik Kowald, and Elisabeth Lex. 2015. ScaR: Towards a Real-Time Recommender Framework Following the Microservices Architecture. In Proceedings of the LSRS 2015 Workshop at RecSys 2015.
[12] Pasquale Lops, Marco De Gemmis, and Giovanni Semeraro. 2011. Content-based recommender systems: State of the art and trends. In Recommender Systems Handbook. Springer, 73–105.
[13] Eleni Mangina and John Kilbride. 2008. Evaluation of keyphrase extraction algorithm and tiling process for a document/resource recommender within e-learning environments. Computers & Education 50, 3 (2008), 807–820.
[14] Denis Parra and Peter Brusilovsky. 2009. Collaborative filtering for social tagging systems: an experiment with CiteULike. In Proceedings of the Third ACM Conference on Recommender Systems. ACM, 237–240.
[15] Francesco Ricci, Lior Rokach, and Bracha Shapira. 2011. Introduction to recommender systems handbook. Springer.
[16] Andrew I. Schein, Alexandrin Popescul, Lyle H. Ungar, and David M. Pennock. 2002. Methods and metrics for cold-start recommendations. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 253–260.
[17] Christoph Trattner, Dominik Kowald, and Emanuel Lacic. 2015. TagRec: towards a toolkit for reproducible evaluation and development of tag-based recommender algorithms. ACM SIGWEB Newsletter Winter (2015), 3.
[18] Julita Vassileva. 2008. Toward social learning environments. IEEE Transactions on Learning Technologies 1, 4 (2008), 199–214.
[19] Zhiwen Yu, Xingshe Zhou, and Lei Shu. 2010. Towards a semantic infrastructure for context-aware e-learning. Multimedia Tools and Applications 47, 1 (2010).