=Paper=
{{Paper
|id=Vol-1887/paper3
|storemode=property
|title=Feature Factorization for Top-N Recommendation: From Item Rating to Features Relevance
|pdfUrl=https://ceur-ws.org/Vol-1887/paper3.pdf
|volume=Vol-1887
|authors=Vito Walter Anelli,Tommaso Di Noia,Pasquale Lops,Eugenio Di Sciascio
|dblpUrl=https://dblp.org/rec/conf/recsys/AnelliNLS17
}}
==Feature Factorization for Top-N Recommendation: From Item Rating to Features Relevance==
Vito Walter Anelli, Tommaso Di Noia, Eugenio Di Sciascio
Polytechnic University of Bari, Via E. Orabona 4, Bari
{vitowalter.anelli,tommaso.dinoia,disciascio}@poliba.it

Pasquale Lops
University of Bari "Aldo Moro", Via E. Orabona 4, Bari
pasquale.lops@uniba.it

ABSTRACT

In the last decade, collaborative filtering approaches have shown their effectiveness in computing accurate recommendations starting from the user-item matrix. Unfortunately, due to their inner nature, collaborative algorithms work very well with dense matrices but show their limits when they deal with sparse ones. In these cases, encoding user preferences only by means of past ratings may lead to unsatisfactory recommendations. In this paper we propose to exploit past user ratings to evaluate the relevance of every single feature within each profile, thus moving from a user-item to a user-feature matrix. We then use matrix factorization techniques to compute recommendations. The evaluation has been performed on two datasets referring to different domains (music and books), and experimental results show that the proposed method outperforms the matrix factorization approach performed in the user-item space in terms of accuracy of results.

ACM Reference format:
Vito Walter Anelli, Tommaso Di Noia, Eugenio Di Sciascio and Pasquale Lops. 2017. Feature Factorization for top-n Recommendation: from item rating to features relevance. In Proceedings of RecSysKTL Workshop @ ACM RecSys '17, August 27, 2017, Como, Italy, 6 pages. DOI: N/A

© 2017 Copyright is held by the author(s). RecSysKTL Workshop @ ACM RecSys '17, August 27, 2017, Como, Italy.

1 INTRODUCTION

Recent years have seen the flourishing of many and diverse recommendation techniques based on the collaborative information encoded in the user-rating matrix. Factorization techniques working in such a matrix have proven their effectiveness in improving the performance of recommendation engines and are implemented in many industrial and commercial systems [1, 14]. State-of-the-art algorithms can capture complex non-linear or latent-factor-based relationships between users and items, and this proves more effective in all those scenarios where several users partially overlap in their ratings or, in other words, where the user-rating matrix is less sparse. In order to overcome the limits of pure collaborative approaches, hybrid ones [4] have been proposed that also encode side information about the items, typically content-based. Hybrid recommender systems have widely proved to improve performance in terms of accuracy and diversity of results [15, 18, 25, 29]. Whenever available, descriptions of the items can be used as a valuable source of information to augment the knowledge injected in and exploited by the system to compute the recommendation list of items. In this direction, an interesting class of recommender systems is the so-called semantics-aware one [8], where the information describing items goes beyond text and keywords and is represented by categorical/ontological data. Semantics-aware approaches make use of ontologies or encyclopedic sources to encode and exploit domain-specific knowledge, and in the last years many approaches have been proposed [2, 17, 19]. More recently, thanks to the Linking Open Data initiative, many structured data have become freely available to represent the content of items in different knowledge domains and then feed recommendation engines [9].

As a general remark, we can say that most of the recommendation algorithms available in the literature focus on computing the relevance of a set of items with reference to the user profile. Recommendation algorithms are designed around the computation of a relevance score for an item by evaluating its similarity with reference to other items. Features composing the description of an item, whatever the source, are not considered per se in the recommendation process but are usually exploited to evaluate the similarity between items or users. We believe that more attention should be paid to modeling the recommendation problem with a focus on recommending features rather than items. Expanding an item into its features brings with it some interesting side effects. On the one hand, features may represent relations that, e.g., latent factor models are not able to look at. On the other hand, features give us a new set of explicit connections between items to be exploited with collaborative filtering algorithms. Finally, recommending items via feature recommendation may lead to an easier generation of explanations for the recommended list of items. Unfortunately, moving from items to features is not that straightforward as, in a forest of many features, most of them may turn out not to be relevant to a user. Moreover, once we design an algorithm able to compute a recommendation list of features, we have to go back to the item space, as the ultimate goal of a recommender system is to suggest items to a user.

In this paper we present FF (for Features Factorization), a top-N recommendation algorithm relying on the user's feature preferences and on collaborative filtering information in the feature space. The main goal of FF is to compute an ordered list of features preferred by the user and, starting from such a list, to reassemble the relevance values of each returned feature to produce a top-N list of items to recommend. All the side information adopted by FF is retrieved from DBpedia, the cornerstone dataset of the Linked Data cloud. For each item in the user profile we retrieve its features by querying DBpedia, thus having them as a set of entities. This avoids all the problems related to synonymy and polysemy which usually occur when dealing with keyword-based features. By combining the popularity of a feature in the user profile and the ratings assigned to the items it is part of, for each user we compute a pair containing the relevance of the feature and its inferred rating. The resulting matrix in the user-feature space is then manipulated via factorization techniques to compute, for each user, a ranked list of features which is in turn post-processed to produce the final list of recommendations. Experimental evaluations of FF on two datasets related to the domains of books and music show its effectiveness in terms of accuracy of results in very sparse settings.

The remainder of the paper is structured as follows. In the next section we report some related work on LOD-based and feature-based approaches to recommendation. We continue in Section 3 by introducing and describing FF. Experimental evaluations are presented in Section 4, while in Section 5 we present and discuss the corresponding results. Conclusion and future work close the paper.
2 RELATED WORK

Several works have tried to build recommender systems by exploiting Linked Open Data (LOD) as side information for representing users or items, in addition to the user preferences usually collected through user ratings. Such approaches usually rely on DBpedia, the nucleus which acts as a hub for most of the knowledge in the so-called LOD cloud. In the following we review the recent literature on both LOD-based recommender systems and approaches which leverage the relevance of single features in the user profile.

LOD-based RS. A detailed review of recommender systems leveraging Linked Open Data is presented in [8]. Properties gathered from DBpedia may be used for different tasks, e.g. to produce cross-domain recommendations [10], to build a multirelational graph for a graph-based recommender [27], or to generate effective natural-language recommendation explanations [22]. On the other hand, DBpedia properties may be used in different ways: 1) to define semantic similarity measures for providing more accurate recommendations [18, 23, 30]; 2) to deal with problems such as limited content analysis or cold start, e.g. by introducing new relevant features to improve item representations [3, 33], or to cope with the increasing data sparsity [21]; 3) to improve the overall accuracy of a recommender [20, 29], or to provide a good balance between different recommendation objectives, such as accuracy and diversity [15, 21, 28].

Feature-based RS. Several works attempt to analyze the user purchasing behavior based on item features. In [35], products are represented using vectors of features, and a customer profile module computes the level of interest of the customer in product features as the ratio of features among the products purchased and the product quantity purchased by that customer. Similarly, in [12] a feature-based recommender system for domains without enough historical data to effectively measure user or item similarities is presented. The authors build the system based on the idea that users who bought items with specific features also buy items with the same or similar features. A similar approach is proposed in [26], in which effective strategies to incorporate item features for top-N recommender systems are developed. Among graph-based recommender systems, an interesting work was proposed in [13], in which recommendations are produced by inferring user preferences and evaluating item preferences and attribute preferences. The paper points out the importance of feature evaluation, and a method is proposed which exploits explicit ratings of features, named attributes. Recently, an interesting approach called Feature Preferences Matrix Factorization (FPMF) has been proposed in [24]. FPMF incorporates user feature preferences in a matrix factorization to predict user likes. It is worth noting that none of the previously mentioned approaches relies on features coming from the Linked Open Data cloud.
3 PROPOSED APPROACH

3.1 Motivation

This work aims at investigating the role of feature rating and relevance in the item rating process. The main intuition behind FF is that items can be handled as a collection of features on which the recommendation process is then performed. Hence, when users rate an item, they are actually expressing their preference over the whole collection. The item rating action can then be summarized as the non-trivial attempt to choose an overall rating for the entire set. If we want to discover the contribution of each single feature to the evaluation, first of all we need to unpack each item into its composing features. Then, by combining the overall popularity of each feature in the user profile (feature relevance) and the ratings assigned to items containing that feature, we may estimate the implicit rating the user is giving to that specific feature. In the evaluation of a movie, for instance, the user implicitly evaluates the director, the actors, the producer, the country in which the movie is set. Each feature has its own rating and relevance degree, hence a recommender system should consider these factors.

The second observation we base our work on is that the relevance of an item in the user profile cannot be entirely encoded in its rating, as the single rating represents a degree of liking of the specific item. The relevance of the item within a collection is not explicitly encoded anywhere with reference to the user's view. Our assumption is that such item relevance naturally influences feature relevances and vice versa.

In our model the user profile is not just a set of ⟨item, rating⟩ pairs: it also contains information about the relevance of each feature composing the rated items and its estimated rating, i.e. triples ⟨feature, relevance, rating⟩. In the following we present principled methods to estimate both the user-feature rating and the user-feature relevance. Then, we focus the recommendation problem on the features composing the user profile. FF exploits a collaborative filtering step to get approximate information about the missing features in the user-feature matrix, and finally it combines the predicted rating and relevance of each feature available in each item to compute a personalized ranked list of items.

3.2 Data Model

For a better understanding of the data we use to reshape the user profile as user-feature matrices, we first introduce the multidimensional graph we used to build them. As we can see from Figure 1, the user profile is built by considering information coming both from the user-item matrix and from DBpedia as an external knowledge source. The graph-based nature of the latter is exploited to identify the features used to represent items. The knowledge encoded in Linked Data is represented as RDF labeled oriented graphs, and the corresponding data model is based on the notion of triple ⟨subject, predicate, object⟩, where predicate represents the relation connecting the two entities subject and object. With reference to Figure 1, each item in the catalog represents the subject of a triple ⟨i, p, e⟩ ∈ DBpedia. In order to catch the different knowledge encoded in the use of the same entity as object in triples with diverse predicates, in our model we consider the chain predicate-object (corresponding to the property-entity path pe in the knowledge graph) as a feature associated to the item i, which in turn represents the subject of the corresponding triple.

Figure 1: A graph-based representation of the data behind the computation of the user profile.
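For illustration, the feature-extraction step of Section 3.2 can be sketched as a SPARQL query that collects the (property, entity) pairs attached to an item. This is a minimal sketch, not the authors' pipeline: the public DBpedia endpoint, the SPARQLWrapper library, and the filtering of literal objects are assumptions made here for the example.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"  # public endpoint (assumption)

def item_features(item_uri):
    """Return the set of (property, entity) pairs describing an item,
    i.e. the pe features of Section 3.2, taken from its outgoing triples."""
    sparql = SPARQLWrapper(DBPEDIA_ENDPOINT)
    sparql.setQuery(f"""
        SELECT DISTINCT ?p ?e WHERE {{
            <{item_uri}> ?p ?e .
            FILTER(isIRI(?e))   # keep entity objects, drop literals
        }}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return {(b["p"]["value"], b["e"]["value"])
            for b in results["results"]["bindings"]}

# Example (hypothetical item URI):
# features = item_features("http://dbpedia.org/resource/The_Hobbit")
```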
Each item in the user profile is associated with a relevance function we denote with ρ^{ui}(·). Its value represents an estimation of how important a particular item is to the user u. Analogously, we have a value associated with each feature in the profile, computed via the function ρ^{uf}(·), expressing the relevance of the feature f (represented by the pair of property and entity pe) in the user profile. Each feature is also associated with a rating r^{uf}(·), which is inferred by considering the ratings of all the items containing f.

The relevance of a feature pe is computed by looking at how many of the items rated by the user u contain that feature. More formally we have:

\rho^{uf}(pe) = \frac{\sum_{i \in I_u} |\{\langle i, p, e \rangle \mid \langle i, p, e \rangle \in \mathit{DBpedia}\}|}{|I_u|}

The idea behind this computation is quite straightforward: the more a feature is connected to the items in the user profile, the higher its relevance for the user.

Once we have computed the relevance of all the features in the user profile, we can move to the computation of the relevance of the items i ∈ I_u. This can be computed as the normalized summation of the relevance of all the features the item is composed of. In formulas, we have

\rho^{ui}(i) = \frac{\sum_{\langle i, p, e \rangle \in \mathit{DBpedia}} \rho^{uf}(pe)}{|\{\langle i, p, e \rangle \mid \langle i, p, e \rangle \in \mathit{DBpedia}\}|}

Given a feature pe, the computation of the feature rating r^{uf}(pe) exploits both the rating and the relevance of each item i ∈ I_u containing pe:

r^{uf}(pe) = \frac{\sum_{\langle i, p, e \rangle \in \mathit{DBpedia}} r_{ui} \cdot \rho^{ui}(i)}{\sum_{\langle i, p, e \rangle \in \mathit{DBpedia}} \rho^{ui}(i)}    (1)

where r_{ui} is the rating assigned by user u to item i.
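The three quantities above can be computed directly from a user profile. The following sketch is only an illustration of the formulas and of Equation (1): `profile` stands for I_u, `ratings` maps an item to r_{ui}, and `features_of` maps an item to its set of pe pairs; the names are not taken from the paper.

```python
def feature_relevance(profile, features_of):
    """rho^{uf}(pe): fraction of profile items that contain feature pe."""
    n_items = len(profile)
    counts = {}
    for item in profile:
        for pe in features_of[item]:
            counts[pe] = counts.get(pe, 0) + 1
    return {pe: c / n_items for pe, c in counts.items()}

def item_relevance(item, rho_uf, features_of):
    """rho^{ui}(i): mean relevance of the features composing item i."""
    feats = features_of[item]
    return sum(rho_uf.get(pe, 0.0) for pe in feats) / len(feats)

def feature_rating(pe, profile, ratings, rho_ui, features_of):
    """r^{uf}(pe), Eq. (1): item ratings averaged with item-relevance
    weights, over the profile items that contain pe."""
    items_with_pe = [i for i in profile if pe in features_of[i]]
    num = sum(ratings[i] * rho_ui[i] for i in items_with_pe)
    den = sum(rho_ui[i] for i in items_with_pe)
    return num / den if den > 0 else 0.0

# Example wiring for one user:
# rho_uf = feature_relevance(profile, features_of)
# rho_ui = {i: item_relevance(i, rho_uf, features_of) for i in profile}
# r_uf   = {pe: feature_rating(pe, profile, ratings, rho_ui, features_of)
#           for pe in rho_uf}
```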
3.4 top-N Recommendation

The profiles we built contain only the features the user has met before, but usually the number of those features is dramatically smaller than the overall number of features, and this results in P and R (the user-feature relevance and rating matrices) being very sparse. In order to complete the information they contain, we compute, via Biased Matrix Factorization, the missing values ρ̂^{uf}(pe) for P and r̂^{uf}(pe) for R. We run matrix factorization independently on P and R. Biased Matrix Factorization is a matrix factorization model that minimizes RMSE using stochastic gradient descent [16]. It computes user and item biases to improve the estimation of the predicted value, and it represents a state-of-the-art algorithm for the rating prediction task. ρ̂^{uf}(pe) and r̂^{uf}(pe) represent the predicted relevance and the predicted rating for all those features not belonging to any of the items in I_u.
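Biased Matrix Factorization is the model of [16]; the paper runs it independently on the relevance matrix P and the rating matrix R. Below is a bare-bones SGD sketch of such a model applied to a sparse user-feature matrix; the latent dimensionality, learning rate, regularization and the (user, feature, value) triple format are illustrative assumptions, not the configuration used in the experiments.

```python
import numpy as np

def biased_mf(observed, n_users, n_features, k=20, lr=0.01, reg=0.05, epochs=30):
    """Factorize a sparse user-feature matrix given as (u, f, value) triples.
    Predicts value ~ mu + b_u[u] + b_f[f] + p[u].q[f], trained by SGD on RMSE."""
    rng = np.random.default_rng(42)
    p = rng.normal(scale=0.1, size=(n_users, k))
    q = rng.normal(scale=0.1, size=(n_features, k))
    b_u = np.zeros(n_users)
    b_f = np.zeros(n_features)
    mu = np.mean([v for _, _, v in observed])        # global bias
    for _ in range(epochs):
        for u, f, v in observed:
            pred = mu + b_u[u] + b_f[f] + p[u].dot(q[f])
            err = v - pred
            b_u[u] += lr * (err - reg * b_u[u])
            b_f[f] += lr * (err - reg * b_f[f])
            p[u], q[f] = (p[u] + lr * (err * q[f] - reg * p[u]),
                          q[f] + lr * (err * p[u] - reg * q[f]))
    def predict(u, f):
        return mu + b_u[u] + b_f[f] + p[u].dot(q[f])
    return predict

# Run once on the relevance matrix P and once on the rating matrix R:
# rho_hat = biased_mf(relevance_triples, n_users, n_features)
# r_hat   = biased_mf(rating_triples,   n_users, n_features)
```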
As the resulting matrices contain both content-based and collaborative information (due to the matrix factorization), we refer to them as the hybrid profile. With the hybrid profile we can estimate a ranked list for all the remaining items within the collection. In fact, the ranking of an item in the list is computed by considering the ratings of the features belonging to the item together with their relevance:

\hat{r}^{ui}(i) = \sum_{(\langle i,p,e \rangle \in \mathit{DBpedia}) \wedge (i \in I_u)} \rho^{uf}(pe) \cdot r^{uf}(pe) + \sum_{(\langle i,p,e \rangle \in \mathit{DBpedia}) \wedge (i \notin I_u)} \hat{\rho}^{uf}(pe) \cdot \hat{r}^{uf}(pe)    (2)

where the first summation ranges over the features already observed in the user profile and the second one over the features whose relevance and rating have been predicted by the factorization step. A thresholded variant of the score restricts the two summations to the features whose computed or predicted rating is high enough, by means of two parameters α and β:

\hat{r}^{ui}(i) = \sum_{(\langle i,p,e \rangle \in \mathit{DBpedia}) \wedge (i \in I_u) \wedge (r^{uf}(pe) > \alpha)} \rho^{uf}(pe) \cdot r^{uf}(pe) + \sum_{(\langle i,p,e \rangle \in \mathit{DBpedia}) \wedge (i \notin I_u) \wedge (\hat{r}^{uf}(pe) > \beta)} \hat{\rho}^{uf}(pe) \cdot \hat{r}^{uf}(pe)    (3)
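One possible reading of Equations (2) and (3) in code is the following: features of the candidate item that already occur in the user profile contribute their computed relevance and rating, the remaining features contribute the values predicted by the factorization, and in the thresholded variant only features whose computed or predicted rating exceeds α or β are kept. The data structures and names are illustrative, not taken from the paper.

```python
def score_item(item, features_of, known, predicted, alpha=None, beta=None):
    """Eq. (2)/(3): score a candidate item from its pe features.
    `known` maps pe -> (rho_uf, r_uf) for features already in the user profile;
    `predicted` maps pe -> (rho_hat, r_hat) for the remaining features.
    With alpha/beta left as None the unthresholded Eq. (2) is obtained."""
    score = 0.0
    for pe in features_of[item]:
        if pe in known:
            rho, r = known[pe]
            if alpha is None or r > alpha:
                score += rho * r
        elif pe in predicted:
            rho, r = predicted[pe]
            if beta is None or r > beta:
                score += rho * r
    return score

# Top-10 list over the candidate items not yet rated by the user,
# with alpha and beta set as in Section 4 (mean and mean + std of the ratings):
# top10 = sorted(candidates,
#                key=lambda i: score_item(i, features_of, known, predicted,
#                                         alpha=mu, beta=mu + sigma),
#                reverse=True)[:10]
```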
4 EXPERIMENTAL EVALUATION

In this section the experimental evaluation settings and the metrics used to evaluate the proposed algorithm are presented. We evaluated the algorithms in terms of ranking accuracy for top-N recommendations. The evaluation has been carried out on two datasets, LibraryThing and Last.fm, belonging respectively to the domains of books and music. In order to remove the popularity bias from the evaluation results we removed the 1% most popular items [7]. Moreover, we removed users with fewer than five ratings, as we want to evaluate the algorithms in a non-cold-start setting. The LibraryThing dataset contains 7,564 users, 39,515 items and 797,299 ratings. The minimum, mean and maximum number of ratings per user in the dataset are 20, 63 and 3,018, respectively. Last.fm contains 1,892 users, 17,632 items and 92,834 ratings. In LibraryThing, ratings are distributed over a 1-10 scale. In Last.fm the rating is the number of times a song has been played, hence that number has been rescaled for each user to a 1-10 scale. Table 1 shows some statistics of the dataset subsets considering only the items mapped to DBpedia (using publicly available mappings [29]) after the pre-processing step. In case a mapping does not exist, a simple placeholder feature is used that inherits the corresponding item values in terms of rating and relevance. Table 1 also reports the sparsity values for both the user-item and the user-feature matrices.

To evaluate FF we use the all unrated items [34] evaluation protocol, in which the ability to choose the correct set of items to propose to the users is favored over the local ranking ability measured by the rated test-items evaluation protocol. In all unrated items, the recommendation list is produced using as candidate list the Cartesian product between users and items minus the items the user experienced in the training set. The evaluation has been conducted using a hold-out 80-20 splitting, in which 20% of the ratings are retained as test set.

We compared FF with BPRMF [32], both in its pure collaborative version and in the hybrid one considering side information (BPRMF+SI). We also included PopRank, as it is acknowledged that popularity ranking can show good performance and it is an important baseline to compare against [7]. In order to produce recommendation lists from these well-known algorithms we used their MyMediaLite implementation [11] (http://www.mymedialite.net/). As for the selection of the α and β parameters needed in Equation (3), in these experiments we kept a conservative approach and set α to the mean µ of the rated items and β to the mean µ plus the standard deviation σ. Clearly, these values are not the optimal ones, and the performance could be improved by a cross-validation setting of these parameters.
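The paper does not spell out how the Last.fm play counts are mapped onto the 1-10 scale; a simple per-user min-max rescaling consistent with the description could look as follows (the normalization actually used in the experiments may differ).

```python
def rescale_play_counts(play_counts):
    """Map one user's play counts (dict: item -> count) onto a 1-10 scale
    with a per-user min-max normalization."""
    lo, hi = min(play_counts.values()), max(play_counts.values())
    if hi == lo:   # user listened to every item equally often
        return {item: 10.0 for item in play_counts}
    return {item: 1.0 + 9.0 * (c - lo) / (hi - lo)
            for item, c in play_counts.items()}
```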
5 EXPERIMENTAL RESULTS

Tables 2 and 3 show the performance of FF compared with the competing algorithms described in Section 4. In both tables, the best result for each metric is the one achieved by FF. All the evaluations have been performed using the same protocols as implemented in the RankSys library [6] (https://github.com/RankSys/RankSys).

In Table 2 we show the evaluation results on the LibraryThing dataset with a relevance threshold set to 7/10 in a Top-10 recommendation list. The ranking accuracy, measured through nDCG, precision and recall, shows that Features Factorization performs better than the competing algorithms. In detail, FF performs 4 to 6 times better than BPRMF, the second most accurate algorithm, depending on the metric.

Alg        P@N       R@N       nDCG@N
FF         0.03251   0.06576   0.06129
BPRMF      0.00837   0.01280   0.01020
BPRMF+SI   0.00777   0.01325   0.01007
PopRank    0.00023   0.00095   0.00044

Table 2: Comparative results on the LibraryThing dataset, Top-10 recommendation list and relevance threshold of 7/10.

As the rescaling operation in Last.fm affects the values of the items in the test set, we decided to perform the evaluation considering all the items in the test set as relevant (i.e. without any relevance threshold). Table 3 shows the ranking accuracy results on the Last.fm dataset with a threshold of 0/10 for a Top-10 recommendation list. For the precision metric the best performing algorithm is FF, which performs 4 times better than BPRMF. For nDCG, Features Factorization performs at least 5 times better than the competing algorithms.

Alg        P@N       R@N       nDCG@N
FF         0.01543   0.02701   0.02330
BPRMF      0.00348   0.00902   0.00495
BPRMF+SI   0.00032   0.00073   0.00028
PopRank    0.00027   0.00089   0.00021

Table 3: Comparative results on the Last.fm dataset, Top-10 recommendation list and no relevance threshold.

The differences in the accuracy metrics between FF and the other algorithms are statistically significant according to Student's paired t-test with p < 0.001 in every case.

The differences in behavior between the two datasets can be explained by looking at different dimensions of the Last.fm dataset (both the original one and the feature-augmented one we used in our experiments). As for the original dataset, while for LibraryThing we used the original ratings of the users, in Last.fm we rescaled the user feedback, represented as the number of times they played a song, and normalized it to a 1-10 scale. This could have affected the final results especially in terms of accuracy. Indeed, the pure content-based feature ratings we predict highly depend on the original rating value (see Equation (1)).

If we consider the feature-augmented dataset, by looking at the data reported in Table 1 the first observation we make is that the number of features in Last.fm is two orders of magnitude higher than the number of items, while in LibraryThing it is just one. The lower performance of FF on Last.fm may therefore also be attributed to the curse of dimensionality. Moreover, a deeper investigation on the quality of the adopted LOD dataset is needed. Recently, a few papers have been published on this topic [5, 36], but there is not yet a common view on the metrics to be adopted to evaluate the quality of the knowledge encoded in a Linked Data dataset and, more generally, in a knowledge graph.
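The accuracy figures reported above were computed with the metric implementations provided by RankSys; purely as a reference for the reader, precision@N and a binary-relevance nDCG@N over a single user's recommendation list can be sketched as below. This is not the RankSys code, only an illustration of the metrics.

```python
import math

def precision_at_n(recommended, relevant, n=10):
    """Fraction of the top-N recommended items that appear in the test set."""
    hits = sum(1 for item in recommended[:n] if item in relevant)
    return hits / n

def ndcg_at_n(recommended, relevant, n=10):
    """Binary-relevance nDCG: DCG of the top-N list over the ideal DCG."""
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, item in enumerate(recommended[:n]) if item in relevant)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(min(len(relevant), n)))
    return dcg / idcg if idcg > 0 else 0.0
```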
6 CONCLUSION

In this paper we presented FF, a novel algorithm that relies on feature recommendation as an intermediate step for computing top-N item recommendation lists. The main idea behind FF is that feature relevance in a user profile plays a key role in the selection and rating of an item in a collection. Based on this observation we developed an algorithm that shifts the recommendation problem from a user-item space to a user-feature one. In this new space we introduced the explicit notions of feature relevance and feature rating, and combined them with well-known factorization techniques to perform a Features Factorization aimed at predicting a rating and a relevance for each feature unknown to the user. We compared FF with well-known factorization techniques (both pure collaborative and hybrid with side information) on two datasets in the domains of books and music. On both datasets FF proves to be the best algorithm in terms of recommending accurate items. This can be considered a strong clue confirming our intuition that recommending items via feature ranking is a feasible way to develop content-aware recommendation engines.

As future work, we are investigating the behavior of FF with respect to novelty and diversity of results. We are also interested in exploring the behavior of the FF approach with collaborative filtering algorithms other than factorization techniques in the item-feature space, and in particular with Factorization Machines [31]. Moreover, since we collected content-based data from Linked Open Data datasets, an analysis of the influence of such datasets on the recommendation results is also in progress. Another aspect we are willing to deepen is related to the explanation of results. Indeed, very interestingly, item recommendation via feature ranking paves the way to new proposals for explanation services.

REFERENCES

[1] Robert M. Bell and Yehuda Koren. 2007. Lessons from the Netflix prize challenge. ACM SIGKDD Explorations Newsletter 9, 2 (2007), 75–79.
[2] Yolanda Blanco-Fernandez, Jose J. Pazos-Arias, Alberto Gil-Solla, Manuel Ramos-Cabrer, and Martin Lopez-Nores. 2008. Providing entertainment by content-based filtering and semantic reasoning in intelligent recommender systems. IEEE Transactions on Consumer Electronics 54, 2 (2008).
[3] Svetlin Bostandjiev, John O'Donovan, and Tobias Höllerer. 2012. TasteWeights: a visual interactive hybrid recommender system. In Proceedings of the sixth ACM conference on Recommender Systems. 35–42.
[4] Robin Burke. 2002. Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction 12, 4 (2002), 331–370.
[5] Cinzia Cappiello, Tommaso Di Noia, Bogdan Alexandru Marcu, and Maristella Matera. 2016. A Quality Model for Linked Data Exploration. In Web Engineering - 16th International Conference, ICWE. 397–404.
[6] Pablo Castells, Neil J. Hurley, and Saul Vargas. 2015. Novelty and Diversity in Recommender Systems. Springer US, 881–918.
[7] Paolo Cremonesi, Yehuda Koren, and Roberto Turrin. 2010. Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the fourth ACM conference on Recommender systems (RecSys '10). ACM, New York, NY, USA, 39–46. DOI: https://doi.org/10.1145/1864708.1864721
[8] Marco de Gemmis, Pasquale Lops, Cataldo Musto, Fedelucio Narducci, and Giovanni Semeraro. 2015. Semantics-Aware Content-Based Recommender Systems. In Recommender Systems Handbook. 119–159.
[9] Tommaso Di Noia, Roberto Mirizzi, Vito Claudio Ostuni, Davide Romito, and Markus Zanker. 2012. Linked open data to support content-based recommender systems. In Proceedings of the 8th International Conference on Semantic Systems. ACM, 1–8.
[10] Ignacio Fernández-Tobías, Paolo Tomeo, Iván Cantador, Tommaso Di Noia, and Eugenio Di Sciascio. 2016. Accuracy and diversity in cross-domain recommendations for cold-start users with positive-only feedback. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 119–122.
[11] Zeno Gantner, Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2011. MyMediaLite: a free recommender system library. In Proceedings of the fifth ACM conference on Recommender systems (RecSys '11). ACM, New York, NY, USA, 305–308.
[12] Eui-Hong Han and George Karypis. 2005. Feature-based recommendation system. In Proceedings of the 2005 ACM CIKM International Conference on Information and Knowledge Management. 446–452.
[13] Luheng He, Nathan Nan Liu, and Qiang Yang. 2011. Active Dual Collaborative Filtering with Both Item and Attribute Feedback. In AAAI.
[14] Yuchin Juan, Yong Zhuang, Wei-Sheng Chin, and Chih-Jen Lin. 2016. Field-aware factorization machines for CTR prediction. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 43–50.
[15] Houda Khrouf and Raphaël Troncy. 2013. Hybrid Event Recommendation Using Linked Data and User Diversity. In Proceedings of the 7th ACM Conference on Recommender Systems (RecSys '13). ACM, New York, NY, USA, 185–192. DOI: https://doi.org/10.1145/2507157.2507171
[16] Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix Factorization Techniques for Recommender Systems. Computer 42, 8 (2009), 30–37.
[17] John Lees-Miller, Fraser Anderson, Bret Hoehn, and Russell Greiner. 2008. Does wikipedia information help netflix predictions? In Machine Learning and Applications, 2008. ICMLA'08. Seventh International Conference on. IEEE, 337–343.
[18] Rouzbeh Meymandpour and Joseph G. Davis. 2015. Enhancing Recommender Systems Using Linked Open Data-Based Semantic Analysis of Items. In Proceedings of the 3rd Australasian Web Conference (AWC 2015), Vol. 27. 11–17.
[19] Stuart E. Middleton, Nigel R. Shadbolt, and David C. De Roure. 2004. Ontological user profiling in recommender systems. ACM Transactions on Information Systems (TOIS) 22, 1 (2004), 54–88.
[20] Cataldo Musto, Pierpaolo Basile, Pasquale Lops, Marco de Gemmis, and Giovanni Semeraro. 2014. Linked Open Data-enabled Strategies for Top-N Recommendations. In Proceedings of the 1st Workshop on New Trends in Content-based Recommender Systems co-located with the 8th ACM Conference on Recommender Systems, CBRecSys@RecSys.
[21] Cataldo Musto, Pierpaolo Basile, Pasquale Lops, Marco de Gemmis, and Giovanni Semeraro. 2017. Introducing linked open data in graph-based recommender systems. Information Processing & Management 53, 2 (2017), 405–435.
[22] Cataldo Musto, Fedelucio Narducci, Pasquale Lops, Marco De Gemmis, and Giovanni Semeraro. 2016. ExpLOD: A Framework for Explaining Recommendations based on the Linked Open Data Cloud. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 151–154.
[23] Cataldo Musto, Giovanni Semeraro, Pasquale Lops, Marco de Gemmis, and Fedelucio Narducci. 2012. Leveraging Social Media Sources to Generate Personalized Music Playlists. In E-Commerce and Web Technologies - 13th International Conference, EC-Web 2012. 112–123.
[24] Mona Nasery, Matthias Braunhofer, and Francesco Ricci. 2016. Recommendations with Optimal Combination of Feature-Based and Item-Based Preferences. In Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization, UMAP. 269–273.
[25] Xia Ning and George Karypis. 2012. Sparse linear methods with side information for top-n recommendations. In Proceedings of the sixth ACM conference on Recommender systems (RecSys '12). ACM, New York, NY, USA, 155–162.
[26] Xia Ning and George Karypis. 2012. Sparse linear methods with side information for top-n recommendations. In Sixth ACM Conference on Recommender Systems, RecSys. 155–162.
[27] Tommaso Di Noia, Vito Claudio Ostuni, Paolo Tomeo, and Eugenio Di Sciascio. 2016. SPrank: Semantic path-based ranking for top-n recommendations using linked open data. ACM Transactions on Intelligent Systems and Technology (TIST) 8, 1 (2016), 9.
[28] Sergio Oramas, Vito Claudio Ostuni, Tommaso Di Noia, Xavier Serra, and Eugenio Di Sciascio. 2017. Sound and Music Recommendation with Knowledge Graphs. ACM TIST 8, 2 (2017), 21:1–21:21.
[29] Vito Claudio Ostuni, Tommaso Di Noia, Eugenio Di Sciascio, and Roberto Mirizzi. 2013. Top-N Recommendations from Implicit Feedback Leveraging Linked Open Data. In Proceedings of the 7th ACM Conference on Recommender Systems (RecSys '13). ACM, New York, NY, USA, 85–92. DOI: https://doi.org/10.1145/2507157.2507172
[30] Guangyuan Piao and John G. Breslin. 2016. Measuring semantic distance for linked open data-enabled recommender systems. In Proceedings of the 31st Annual ACM Symposium on Applied Computing. 315–320.
[31] Steffen Rendle. 2010. Factorization Machines. In Proceedings of the 2010 IEEE International Conference on Data Mining (ICDM '10). IEEE Computer Society, Washington, DC, USA, 995–1000. DOI: https://doi.org/10.1109/ICDM.2010.127
[32] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI '09). AUAI Press, Arlington, Virginia, United States, 452–461.
[33] Max Schmachtenberg, Thorsten Strufe, and Heiko Paulheim. 2014. Enhancing a Location-based Recommendation System by Enrichment with Structured Data from the Web. In 4th International Conference on Web Intelligence, Mining and Semantics (WIMS). 17:1–17:12.
[34] Harald Steck. 2013. Evaluation of recommendations: rating-prediction and ranking. In RecSys. 213–220.
[35] Sung-Shun Weng and Mei-Ju Liu. 2004. Feature-based recommendations for one-to-one marketing. Expert Systems with Applications 26, 4 (2004), 493–508.
[36] Amrapali Zaveri, Anisa Rula, Andrea Maurino, Ricardo Pietrobon, Jens Lehmann, and Sören Auer. 2016. Quality assessment for linked data: A survey. Semantic Web 7, 1 (2016), 63–93.