<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">RDF Graph Embeddings for Content-based Recommender Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Jessica</forename><surname>Rosati</surname></persName>
							<email>jessica.rosati@unicam.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Camerino</orgName>
								<address>
									<addrLine>Piazza Cavour 19/f</addrLine>
									<postCode>62032</postCode>
									<settlement>Camerino</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Polytechnic University of Bari</orgName>
								<address>
									<addrLine>Via Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Petar</forename><surname>Ristoski</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Tommaso</forename><forename type="middle">Di</forename><surname>Noia</surname></persName>
							<email>tommaso.dinoia@poliba.it</email>
						</author>
						<author>
							<persName><forename type="first">Renato</forename><surname>De Leone</surname></persName>
							<email>renato.deleone@unicam.it</email>
						</author>
						<author>
							<persName><forename type="first">Heiko</forename><surname>Paulheim</surname></persName>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="department">Data and Web Science Group</orgName>
								<orgName type="institution">University of Mannheim</orgName>
								<address>
									<addrLine>B6, 26</addrLine>
									<postCode>68159</postCode>
									<settlement>Mannheim</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff3">
								<orgName type="institution">Polytechnic University of Bari</orgName>
								<address>
									<addrLine>Via Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff4">
								<orgName type="institution">University of Camerino</orgName>
								<address>
									<addrLine>Piazza Cavour 19/f</addrLine>
									<postCode>62032</postCode>
									<settlement>Camerino</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff5">
								<orgName type="department">Data and Web Science Group</orgName>
								<orgName type="institution">University of Mannheim</orgName>
								<address>
									<addrLine>B6, 26</addrLine>
									<postCode>68159</postCode>
									<settlement>Mannheim</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">RDF Graph Embeddings for Content-based Recommender Systems</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">28CCE56CD0D6D4501ABA31ACA57043D1</idno>
					<idno type="DOI">10.1145/1235</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-19T15:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>H.3.3 [Information Systems]: Information Search and Retrieval Recommender System</term>
					<term>Graph Embeddings</term>
					<term>Linked Open Data</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Linked Open Data has been recognized as a useful source of background knowledge for building content-based recommender systems. Vast amounts of RDF data, covering multiple domains, have been published in freely accessible datasets. In this paper, we present an approach that uses language modeling techniques for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs for building content-based recommender systems. We generate sequences by leveraging local information from graph sub-structures and learn latent numerical representations of entities in RDF graphs. Our evaluation on two datasets in the domains of movies and books shows that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be effectively used in content-based recommender systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>recommendation approaches is that the information on which they rely is generally insufficient to elicit the user's interests and characterize all aspects of her interaction with the system. This is the main drawback of approaches built on textual and keyword-based representations, which cannot capture complex relations among objects since they lack the semantics associated with their attributes. A process of "knowledge infusion" <ref type="bibr" target="#b40">[40]</ref> and semantic analysis has been proposed to address this issue, and numerous approaches that incorporate ontological knowledge have been proposed, giving rise to the newly defined class of semantics-aware content-based recommender systems <ref type="bibr">[6]</ref>. More recently, the Linked Open Data (LOD) initiative <ref type="bibr" target="#b3">[3]</ref> has opened interesting new possibilities for realizing better recommendation approaches. The LOD initiative has in fact given rise to a variety of open knowledge bases freely accessible on the Web, which together form a huge decentralized knowledge base, the LOD cloud, in which each small piece of knowledge is enriched by links to related data. LOD is an open, interlinked collection of datasets in machine-interpretable form, built on World Wide Web Consortium (W3C) standards such as RDF 1 and SPARQL 2 . Currently the LOD cloud consists of about 1,000 interlinked datasets covering multiple domains from life science to government data <ref type="bibr" target="#b39">[39]</ref>. It has been shown that LOD is a valuable source of background knowledge for content-based recommender systems in many domains <ref type="bibr" target="#b12">[12]</ref>. 
Given that the items to be recommended are linked to a LOD dataset, information from LOD can be exploited to determine which items are similar to the ones the user has consumed in the past, making it possible to discover hidden information and implicit relations between objects <ref type="bibr" target="#b26">[26]</ref>. While LOD is rich in high-quality data, it is still challenging to find effective and efficient ways of exploiting this knowledge for content-based recommendations. So far, most of the proposed approaches in the literature are supervised or semi-supervised, which means they cannot work without human intervention.</p><p>In this work, we adapt language modeling approaches for latent representation of entities in RDF graphs. To do so, we first convert the graph into a set of sequences of entities using graph walks. In the second step, we use those sequences to train a neural language model, which estimates the likelihood of a sequence of entities appearing in the graph. Once the training is finished, each entity in the graph is represented by a vector of latent numerical values. Projecting such latent representations of entities into a lower-dimensional feature space shows that semantically similar entities appear closer to each other. Such entity vectors can be directly used in a content-based recommender system.</p><p>In this work, we utilize two of the most prominent RDF knowledge graphs <ref type="bibr" target="#b29">[29]</ref>, i.e. DBpedia <ref type="bibr" target="#b18">[18]</ref> and Wikidata <ref type="bibr" target="#b42">[42]</ref>. DBpedia is a knowledge graph extracted from structured data in Wikipedia. The main source for this extraction is the set of key-value pairs in the Wikipedia infoboxes. Wikidata is a collaboratively edited knowledge graph, operated by the Wikimedia Foundation (http://wikimediafoundation.org/), which also hosts various language editions of Wikipedia.</p><p>The rest of this paper is structured as follows. 
In Section 2, we give an overview of related work. In Section 3, we introduce our approach, followed by an evaluation in Section 4. We conclude with a summary and an outlook on future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">RELATED WORK</head><p>It has been shown that LOD can improve recommender systems towards a better understanding and representation of user preferences, item features, and the contextual signals they deal with. LOD has been used in content-based, collaborative, and hybrid techniques, in various recommendation tasks, e.g., rating prediction, top-N recommendation, and improving diversity in content-based recommendations. LOD datasets, e.g. DBpedia, have been used in content-based recommender systems in <ref type="bibr" target="#b11">[11]</ref> and <ref type="bibr" target="#b12">[12]</ref>. The former performs a semantic expansion of the item content based on ontological information extracted from DBpedia and LinkedMDB <ref type="bibr" target="#b16">[16]</ref>, the first open semantic web database for movies, and tries to derive implicit relations between items. The latter also involves DBpedia and LinkedMDB, but is an adaptation of the Vector Space Model to Linked Open Data: it represents the RDF graph as a 3-dimensional tensor where each slice corresponds to an ontological property (e.g. starring, director, ...) and represents its adjacency matrix. It has been shown that leveraging LOD datasets is also effective for hybrid recommender systems <ref type="bibr" target="#b4">[4]</ref>, i.e., approaches that augment the collaborative information with additional knowledge, such as the item content. In <ref type="bibr" target="#b10">[10]</ref> the authors propose SPRank, a hybrid recommendation algorithm that extracts semantic path-based features from DBpedia and uses them to compute top-N recommendations in a learning-to-rank approach, in multiple domains (movies, books, and musical artists). 
SPRank is compared with numerous collaborative approaches based on matrix factorization <ref type="bibr" target="#b17">[17,</ref><ref type="bibr" target="#b34">34]</ref> and with other hybrid RS, such as BPR-SSLIM <ref type="bibr" target="#b25">[25]</ref>, and exhibits good performance especially in contexts characterized by high sparsity, where the contribution of the content becomes essential. Another hybrid approach is proposed in <ref type="bibr" target="#b36">[36]</ref>, which builds on training individual base recommenders and using global popularity scores as generic recommenders. The results of the individual recommenders are combined using stacking regression and rank aggregation. Most of these approaches can be referred to as top-down approaches <ref type="bibr">[6]</ref>, since they rely on the integration of external knowledge and cannot work without human intervention. On the other hand, bottom-up approaches are grounded in the distributional hypothesis <ref type="bibr" target="#b15">[15]</ref> from language modeling, according to which the meaning of a word depends on the contexts in which it occurs in text. The resulting strategy is therefore unsupervised, requiring only a training corpus of textual documents, as large as possible. Approaches based on the distributional hypothesis, referred to as discriminative models, behave as word embedding techniques in which each term (and document) becomes a point in the vector space. They substitute the term-document matrix typical of the Vector Space Model with a term-context matrix, on which they apply dimensionality reduction techniques such as Latent Semantic Indexing (LSI) <ref type="bibr" target="#b8">[8]</ref> and the more scalable and incremental Random Indexing (RI) <ref type="bibr" target="#b38">[38]</ref>. 
The latter has been used in <ref type="bibr" target="#b22">[22]</ref> and <ref type="bibr" target="#b23">[23]</ref> to define the so-called enhanced Vector Space Model (eVSM) for content-based RS, where the user's profile is built incrementally by summing the feature vectors representing documents liked by the user, and a negation operator is introduced to also take negative preferences into account.</p><p>Word embedding techniques are not limited to LSI and RI. The word2vec strategy has recently been presented in <ref type="bibr" target="#b19">[19]</ref> and <ref type="bibr" target="#b20">[20]</ref>, and to the best of our knowledge, has been applied to item recommendations in only a few works <ref type="bibr" target="#b21">[21,</ref><ref type="bibr" target="#b28">28]</ref>. In particular, <ref type="bibr" target="#b21">[21]</ref> is an empirical evaluation of LSI, RI and word2vec for content-based movie recommendation exploiting textual information from Wikipedia, while <ref type="bibr" target="#b28">[28]</ref> deals with check-in venue (location) recommendations and adds a non-textual feature, the past check-ins of the user. They both draw the conclusion that word2vec techniques are promising for the recommendation task. Finally, there is a single example of product embedding <ref type="bibr" target="#b14">[14]</ref>, namely prod2vec, which operates on the artificial graph of purchases, treating a purchase sequence as a "sentence" and products within the sequence as words.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">APPROACH</head><p>In our approach, we adapt neural language models for RDF graph embeddings. Such approaches take advantage of the word order in text documents, explicitly modeling the assumption that closer words in the word sequence are statistically more dependent. In the case of RDF graphs, we follow the approach sketched in <ref type="bibr" target="#b37">[37]</ref>, considering entities and relations between entities instead of word sequences. Thus, in order to apply such approaches on RDF graph data, we have to transform the graph data into sequences of entities, which can be considered as sentences. After the graph is converted into a set of sequences of entities, we can train the same neural language models to represent each entity in the RDF graph as a vector of numerical values in a latent feature space. Such entity vectors can be directly used in a content-based recommender system.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">RDF Graph Sub-Structures Extraction</head><p>We propose random graph walks as an approach for converting graphs into a set of sequences of entities. Definition 1. An RDF graph is a graph G = (V, E), where V is a set of vertices, and E is a set of directed edges.</p><p>The objective of the conversion functions is, for each vertex v ∈ V, to generate a set of sequences Sv, where the first token of each sequence s ∈ Sv is the vertex v, followed by a sequence of tokens, which might be edges, vertices, or any substructure extracted from the RDF graph, in an order that reflects the relations between the vertex v and the rest of the tokens, as well as among those tokens.</p><p>In this approach, for a given graph G = (V, E), for each vertex v ∈ V we generate all graph walks Pv of depth d rooted in the vertex v. To generate the walks, we use the breadth-first algorithm. In the first iteration, the algorithm generates paths by exploring the direct outgoing edges of the root node vr. The paths generated after the first iteration follow the pattern vr -&gt; e1i, where e1i is an outgoing edge of vr. In the second iteration, for each of the previously explored edges the algorithm visits the connected vertices. The paths generated after the second iteration follow the pattern vr -&gt; e1i -&gt; v1i. The algorithm continues until d iterations are reached. The final set of sequences for the given graph G is the union of the sequences of all the vertices, ⋃_{v∈V} Pv.</p></div>
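The breadth-first walk generation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the adjacency-dict representation of the RDF graph and the function names are our own assumptions, and `depth` here counts edge hops (the paper's depth convention may differ slightly).

```python
def walks_from(graph, root, depth):
    """All breadth-first walks of up to `depth` edge hops rooted in `root`.

    `graph` maps each vertex to a list of (edge_label, target_vertex)
    pairs; each walk is a tuple (v_r, e1, v1, e2, v2, ...).
    """
    frontier = [(root,)]
    all_walks = []
    for _ in range(depth):
        next_frontier = []
        for walk in frontier:
            for edge, target in graph.get(walk[-1], []):
                next_frontier.append(walk + (edge, target))
        all_walks.extend(next_frontier)
        frontier = next_frontier
    return all_walks

def all_sequences(graph, depth):
    # final sequence set: union of the walks of all vertices
    seqs = []
    for v in graph:
        seqs.extend(walks_from(graph, v, depth))
    return seqs
```

For the large graphs used later in the paper, only a fixed number of walks per entity is sampled instead of the full enumeration shown here.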
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Neural Language Models -word2vec</head><p>Until recently, most Natural Language Processing systems and techniques treated words as atomic units, representing each word as a feature vector using a one-hot representation, where a word vector has the same length as the size of the vocabulary. In such approaches, there is no notion of semantic similarity between words. While such approaches are widely used in many tasks due to their simplicity and robustness, they suffer from several drawbacks, e.g., high dimensionality and severe data sparsity, which limit their performance. To overcome such limitations, neural language models have been proposed, inducing low-dimensional, distributed embeddings of words by means of neural networks. The goal of such approaches is to estimate the likelihood of a specific sequence of words appearing in a corpus, explicitly modeling the assumption that closer words in the word sequence are statistically more dependent.</p><p>While some of the initially proposed approaches suffered from inefficient training of the neural network models, with the recent advancements in the field several efficient approaches have been proposed. One of the most popular and widely used is the word2vec neural language model <ref type="bibr" target="#b19">[19,</ref><ref type="bibr" target="#b20">20]</ref>. Word2vec is a particularly computationally efficient two-layer neural network model for learning word embeddings from raw text. There are two different algorithms, the Continuous Bag-of-Words model (CBOW) and the Skip-Gram model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1">Continuous Bag-of-Words Model</head><p>The CBOW model predicts target words from context words within a given window. The input layer comprises all the surrounding words, whose input vectors are retrieved from the input weight matrix, averaged, and projected in the projection layer. Then, using the weights from the output weight matrix, a score for each word in the vocabulary is computed, which is the probability of the word being a target word. Formally, given a sequence of training words w_1, w_2, w_3, ..., w_T, and a context window c, the objective of the CBOW model is to maximize the average log probability:</p><formula xml:id="formula_0">(1/T) Σ_{t=1}^{T} log p(w_t | w_{t−c}, ..., w_{t+c}),<label>(1)</label></formula><p>where the probability p(w_t | w_{t−c}, ..., w_{t+c}) is calculated using the softmax function:</p><formula xml:id="formula_1">p(w_t | w_{t−c}, ..., w_{t+c}) = exp(v̄^T v'_{w_t}) / Σ_{w=1}^{V} exp(v̄^T v'_w),<label>(2)</label></formula><p>where v'_w is the output vector of the word w, V is the complete vocabulary of words, and v̄ is the averaged input vector of all the context words:</p><formula xml:id="formula_2">v̄ = (1/2c) Σ_{−c≤j≤c, j≠0} v_{w_{t+j}}<label>(3)</label></formula></div>
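The CBOW softmax of Equations (2) and (3) can be checked numerically with a small pure-Python sketch; the toy vectors are our own, while in a real model they would come from the trained input and output weight matrices.

```python
import math

def cbow_probability(context_vecs, output_vecs, target_idx):
    """p(w_t | context) via the softmax of Eq. (2).

    context_vecs: input vectors of the context words,
    output_vecs:  output vectors v'_w for the whole vocabulary,
    target_idx:   vocabulary index of the target word w_t.
    """
    dim = len(context_vecs[0])
    # Eq. (3): average the input vectors of the context words
    v_bar = [sum(vec[k] for vec in context_vecs) / len(context_vecs)
             for k in range(dim)]
    # Eq. (2): softmax over the scores v_bar . v'_w
    scores = [sum(a * b for a, b in zip(v_bar, out)) for out in output_vecs]
    exps = [math.exp(s) for s in scores]
    return exps[target_idx] / sum(exps)
```

By construction, the probabilities over the whole vocabulary sum to one, which is exactly what makes the denominator expensive for large vocabularies.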
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2">Skip-Gram Model</head><p>The Skip-Gram model does the inverse of the CBOW model and tries to predict the context words from the target words. More formally, given a sequence of training words w_1, w_2, w_3, ..., w_T, and a context window c, the objective of the Skip-Gram model is to maximize the following average log probability:</p><formula xml:id="formula_3">(1/T) Σ_{t=1}^{T} Σ_{−c≤j≤c, j≠0} log p(w_{t+j} | w_t),<label>(4)</label></formula><p>where the probability p(w_{t+j} | w_t) is calculated using the softmax function:</p><formula xml:id="formula_4">p(w_o | w_i) = exp(v'_{w_o}^T v_{w_i}) / Σ_{w=1}^{V} exp(v'_w^T v_{w_i}),<label>(5)</label></formula><p>where v_w and v'_w are the input and the output vector of the word w, and V is the complete vocabulary of words.</p><p>In both cases, calculating the softmax function is computationally inefficient, as the cost of computing it is proportional to the size of the vocabulary. Therefore, two optimization techniques have been proposed, i.e., hierarchical softmax and negative sampling <ref type="bibr" target="#b20">[20]</ref>. Empirical studies show that in most cases negative sampling leads to better performance than hierarchical softmax, although the results depend on the selected negative samples; it also has a higher runtime.</p><p>Once the training is finished, semantically similar words appear close to each other in the feature space. Furthermore, basic mathematical operations can be performed on the vectors to extract different relations between the words.</p></div>
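Negative sampling avoids the vocabulary-wide sum in Eq. (5) by scoring the true context word against only a handful of sampled "negative" words; the per-pair objective below follows the form given in [20] (log σ(v'_o · v_i) + Σ_k log σ(−v'_k · v_i)), with toy vectors of our own.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def negative_sampling_objective(v_in, v_out_pos, v_out_negs):
    """log sigma(v'_o . v_i) + sum_k log sigma(-v'_k . v_i).

    Maximized when the true context word scores high and the sampled
    negatives score low; no sum over the whole vocabulary is needed.
    """
    obj = math.log(sigmoid(dot(v_out_pos, v_in)))
    for v_neg in v_out_negs:
        obj += math.log(sigmoid(-dot(v_neg, v_in)))
    return obj
```

A well-aligned positive pair with a dissimilar negative yields a higher (less negative) objective than a misaligned one, which is the gradient signal the training exploits.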
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">EVALUATION</head><p>We evaluate different variants of our approach on two distinct datasets, and compare them to common approaches for creating content-based item representations from LOD, as well as to state-of-the-art collaborative approaches. Furthermore, we investigate the use of two different LOD datasets as background knowledge, i.e., DBpedia and Wikidata.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Datasets</head><p>In order to test the effectiveness of our proposal, we evaluate it in terms of ranking accuracy and aggregate diversity on two datasets belonging to different domains, i.e. Movielens 1M (http://grouplens.org/datasets/movielens/) for movies and LibraryThing (https://www.librarything.com/) for books. The former contains 1 million 1-5 star ratings from 6,040 users on 3,883 movies. The LibraryThing dataset contains more than 2 million ratings from 7,564 users on 39,515 books. As the dataset contains many duplicated ratings, when a user has rated the same item more than once, we keep only her most recent rating. This leaves 626,000 ratings in the range from 1 to 10. The user-item interactions contained in the datasets are enriched with side information thanks to the item mapping and linking to DBpedia technique detailed in <ref type="bibr" target="#b27">[27]</ref>, whose dump is available at http://sisinflab.poliba.it/semanticweb/lod/recsys/datasets/. In an attempt to reduce the popularity bias in our final evaluation, we decided to remove the top 1% most popular items from both datasets <ref type="bibr" target="#b5">[5]</ref>. Moreover, from LibraryThing we keep out users with fewer than five ratings and items rated fewer than five times, and to have a dataset characterized by lower sparsity we retain for Movielens only users with at least fifty ratings, as already done in <ref type="bibr" target="#b10">[10]</ref>. </p></div>
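The deduplication and filtering steps above can be sketched as follows. This is a single-pass sketch under our own assumptions: the (user, item, rating, timestamp) tuple layout is ours, and the paper does not specify whether the user/item filters are iterated to a fixed point.

```python
from collections import Counter

def preprocess(ratings, min_user=5, min_item=5):
    """ratings: iterable of (user, item, rating, timestamp) tuples.

    Keeps only the most recent rating per (user, item) pair, then
    drops users and items with too few ratings (one filtering pass).
    """
    latest = {}
    for user, item, rating, ts in ratings:
        key = (user, item)
        if key not in latest or ts > latest[key][3]:
            latest[key] = (user, item, rating, ts)
    deduped = list(latest.values())
    user_counts = Counter(u for u, _, _, _ in deduped)
    item_counts = Counter(i for _, i, _, _ in deduped)
    return [r for r in deduped
            if user_counts[r[0]] >= min_user and item_counts[r[1]] >= min_item]
```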
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.1">RDF Embeddings</head><p>As RDF datasets we use DBpedia and Wikidata. We use the English version of the 2015-10 DBpedia dataset, which contains 4,641,890 instances and 1,369 mapping-based properties. In our evaluation we only consider object properties, and ignore the data properties and literals.</p><p>For the Wikidata dataset we use the simplified and derived RDF dumps from 2016-03-28 <ref type="foot" target="#foot_0">6</ref> . The dataset contains 17,340,659 entities in total. As for the DBpedia dataset, we only consider object properties, and ignore the data properties and literals.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Evaluation Protocol</head><p>As evaluation protocol for our comparison, we adopted the all unrated items methodology presented in <ref type="bibr" target="#b41">[41]</ref> and already used in <ref type="bibr" target="#b10">[10]</ref>. This methodology requires predicting a score for each item not rated by a user, irrespective of the existence of an actual rating, and comparing the recommendation list with the test set.</p><p>The metrics involved in the experimental comparison are precision, recall and nDCG as accuracy metrics, and catalog coverage and the Gini coefficient for aggregate diversity. precision@N represents the fraction of relevant items in the top-N recommendations. recall@N indicates the fraction of relevant items, in the user test set, occurring in the top-N list. As relevance threshold, we set 4 for Movielens and 8 for LibraryThing, as previously done in <ref type="bibr" target="#b10">[10]</ref>. Although precision and recall are good indicators to evaluate the accuracy of a recommendation engine, they are not rank-sensitive. nDCG@N <ref type="bibr" target="#b2">[2]</ref> instead also takes into account the position in the recommendation list, being defined as</p><formula xml:id="formula_5">nDCG@N = (1/iDCG) • Σ_{i=1}^{N} (2^{rel(u,i)} − 1) / log_2(1 + i)<label>(6)</label></formula><p>where rel(u, i) is a boolean function representing the relevance of item i for user u and iDCG is a normalization factor that sets the nDCG@N value to 1 when an ideal ranking is returned <ref type="bibr" target="#b2">[2]</ref>. As suggested in <ref type="bibr" target="#b41">[41]</ref> and set up in <ref type="bibr" target="#b10">[10]</ref>, in the computation of nDCG@N we fixed a default "neutral" value for those items with no ratings, i.e. 3 for Movielens and 5 for LibraryThing.</p><p>Providing accurate recommendations has been recognized as just one of the main tasks a recommender system must be able to perform. 
We therefore evaluate the contribution of our latent features in terms of aggregate diversity, and more specifically by means of catalog coverage and the Gini coefficient <ref type="bibr" target="#b1">[1]</ref>. The catalog coverage represents the percentage of available candidate items recommended at least once. It is an important quality dimension from both the user and the business perspective <ref type="bibr" target="#b13">[13]</ref>, since it reflects the capacity not to settle on just a subset of items (e.g. the most popular). This metric, however, should be supported by a distribution metric showing the ability of a recommendation engine to spread recommendations evenly. The Gini coefficient <ref type="bibr" target="#b1">[1]</ref> is used for this purpose, since it measures the concentration degree of top-N recommendations across items and is defined as</p><formula xml:id="formula_6">Gini = 2 Σ_{i=1}^{n} ((n + 1 − i)/(n + 1)) • (rec(i)/total)<label>(7)</label></formula><p>In Equation <ref type="bibr" target="#b7">(7)</ref>, n is the number of candidate items available for recommendation, total represents the total number of top-N recommendations made across all users, and rec(i) is the number of users to whom item i has been recommended. The Gini coefficient therefore gives an idea of the "equity" in the distribution of the items. It is worth recalling that we follow the formulation given in <ref type="bibr" target="#b1">[1]</ref>, where the complement of the standard Gini coefficient is used, so that higher values correspond to more balanced recommendations.</p></div>
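Equations (6) and (7) can be sketched directly in pure Python. The binary relevance encoding and the non-decreasing ordering of rec(i) are our reading of the formulation in [1] and [2], not details stated in the paper.

```python
import math

def ndcg_at_n(recommended, rel, n=10):
    """Eq. (6): `rel` maps items to 0/1 relevance for the user."""
    dcg = sum((2 ** rel.get(item, 0) - 1) / math.log2(1 + rank)
              for rank, item in enumerate(recommended[:n], start=1))
    ideal = sorted(rel.values(), reverse=True)[:n]
    idcg = sum((2 ** r - 1) / math.log2(1 + rank)
               for rank, r in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

def gini_balance(rec_counts):
    """Eq. (7), complement convention of [1]: 1 = perfectly balanced.

    rec_counts[i] = number of users item i was recommended to;
    items are indexed in non-decreasing order of rec(i).
    """
    rec = sorted(rec_counts)
    n, total = len(rec), sum(rec)
    return 2 * sum((n + 1 - i) / (n + 1) * (r / total)
                   for i, r in enumerate(rec, start=1))
```

With this convention a perfectly uniform recommendation distribution scores 1, and concentrating recommendations on a few items pushes the score toward 0.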
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Experimental Setup</head><p>The first step of our approach is to convert the RDF graphs into a set of sequences. To extract the entity embeddings for the large RDF datasets, we use only entity sequences generated by random graph walks. More precisely, we follow the approach presented in <ref type="bibr" target="#b32">[32]</ref> to generate only a limited number of random walks for each entity. For DBpedia, we experiment with 500 walks per entity with depths of 4 and 8, while for Wikidata, we use only 200 walks per entity with a depth of 4. Additionally, for each entity in DBpedia and Wikidata, we include all the walks of depth 2, i.e., direct outgoing relations. We use the corpora of sequences to build both CBOW and Skip-Gram models with the following parameters: window size = 5; number of iterations = 5; negative sampling for optimization; negative samples = 25; with average input vector for CBOW. We experiment with 200 and 500 dimensions for the entities' vectors. All the models are publicly available at http://data.dws.informatik.uni-mannheim.de/rdf2vec/.</p><p>We compare our approach to several baselines. For generating the data mining features, we use three strategies that take into account the direct relations to other resources in the graph <ref type="bibr" target="#b30">[30]</ref>, and two strategies for features derived from graph sub-structures <ref type="bibr" target="#b7">[7]</ref>:</p><p>• Features derived from specific relations. 
In the experiments we use the relations rdf:type (types) and dcterms:subject (categories) for datasets linked to DBpedia.</p><p>• Features derived from generic relations, i.e., we generate a feature for each incoming (rel in) or outgoing (rel out) relation of an entity, ignoring the value of the relation.</p><p>• Features derived from generic relations-values, i.e., we generate a feature for each incoming (rel-vals in) or outgoing (rel-vals out) relation of an entity, including the value of the relation.</p><p>• Kernels that count substructures in the RDF graph around the instance node. These substructures are explicitly generated and represented as sparse feature vectors.</p><p>-The Weisfeiler-Lehman (WL) graph kernel for RDF <ref type="bibr" target="#b7">[7]</ref> counts full subtrees in the subgraph around the instance node. This kernel has two parameters, the subgraph depth d and the number of iterations h (which determines the depth of the subtrees). We use d = 1 and h = 2, and therefore indicate this strategy as WL12.</p><p>-The Intersection Tree Path kernel for RDF <ref type="bibr" target="#b7">[7]</ref> counts the walks in the subtree that spans from the instance node. Only the walks that go through the instance node are considered. We therefore refer to it as the root Walk Count (WC) kernel. The root WC kernel has one parameter, the length of the paths l, for which we use l = 2. This strategy is denoted accordingly as WC2.</p><p>The strategies for creating propositional features from Linked Open Data are implemented in the RapidMiner LOD extension<ref type="foot" target="#foot_1">8</ref>  <ref type="bibr" target="#b31">[31,</ref><ref type="bibr" target="#b35">35]</ref>.</p></div>
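The rel in / rel out / rel-vals baselines above amount to generating sparse binary features from the triples surrounding an item; a minimal sketch follows, where the feature-name format and the triple layout are our own illustrative choices.

```python
def direct_relation_features(triples, entity):
    """triples: iterable of (subject, predicate, object) statements.

    Returns the set of binary features for `entity` under the
    rel in / rel out / rel-vals in / rel-vals out strategies.
    """
    feats = set()
    for s, p, o in triples:
        if s == entity:
            feats.add("rel_out:" + p)                    # outgoing relation
            feats.add("rel-vals_out:" + p + "=" + o)     # relation + value
        if o == entity:
            feats.add("rel_in:" + p)                     # incoming relation
            feats.add("rel-vals_in:" + p + "=" + s)
    return feats
```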
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">Results</head><p>The target of the experimental section of this paper is twofold. On the one hand, we want to show that the latent features we extracted are able to subsume the other kinds of features in terms of accuracy and aggregate diversity. On the other hand, we aim at qualifying our strategies as valuable means for the recommendation task, through a first comparison with state-of-the-art approaches. Both goals are pursued by implementing an item-based K-nearest-neighbor method, hereafter denoted as ItemKNN, with cosine similarity among feature vectors. Formally, this method determines similarities between items through the cosine similarity of their vectors and then selects a subset of them - the neighbors - for each item, which will be used to estimate the rating of user u for a new item i as follows:</p><formula xml:id="formula_7">r*(u, i) = Σ_{j∈ratedItems(u)} cosineSim(j, i) • r_{u,j}</formula><p>where ratedItems(u) is the set of items already evaluated by user u, r_{u,j} indicates the rating of item j by user u, and cosineSim(j, i) is the cosine similarity score between items j and i. In our experiments, the size of the considered neighbourhood is limited to 5. The computation of recommendations has been done with the publicly available library RankSys<ref type="foot" target="#foot_2">9</ref> . All the results have been computed @10, that is, considering the top-10 lists recommended to the users: precision, recall and nDCG are computed for each user and then averaged across all users, while the diversity metrics are global measures.</p><p>Tables <ref type="table" target="#tab_3">2 and 3</ref> contain the values of precision, recall and nDCG, respectively for Movielens and LibraryThing, for each kind of features we want to test. The best approach for both datasets is obtained with a Skip-Gram model with vectors of size 200 built upon DBpedia. 
Admittedly, on the Movielens dataset the highest precision is achieved with a vector size of 500, but size 200 prevails according to the F1 measure, i.e. the harmonic mean of precision and recall. A substantial difference, however, concerns the exploratory depth of the random walks: for Movielens the results at depth 4 outperform those computed with depth 8, while the tendency is reversed for LibraryThing. The advantage of the Skip-Gram model over CBOW is consistent on both DBpedia and Wikidata. Moreover, the Wikidata RDF dataset turns out to be more effective for LibraryThing, where the Skip-Gram vectors with depth 4 outperform the corresponding DBpedia vectors. Moving to the features extracted from direct relations, the contribution of the "categories" clearly stands out, together with the relation-value features "rel-vals", especially when only incoming relations are considered. The extraction of features from graph substructures, i.e. the WC2 and WL12 approaches, does not seem to provide significant advantages to the recommendation algorithm.</p><p>To show that our latent features capture the structure of the RDF graph, placing semantically similar items close to each other, we provide some examples of the neighbouring sets retrieved with our graph embedding technique and used within ItemKNN. Table <ref type="table" target="#tab_5">4</ref> refers to movies and shows that neighboring items are highly relevant and close to the query item, i.e. the item for which neighbors are searched.</p><p>To further analyse the semantics of the vector representations, we employ Principal Component Analysis (PCA) to project the high-dimensional entity vectors into a two-dimensional feature space, i.e. a 2D scatter plot. For each of the query movies in Table <ref type="table" target="#tab_5">4</ref> we visualize the vectors of the 5 nearest neighbors, as shown in Figure <ref type="figure" target="#fig_0">1</ref>. 
The figure illustrates the ability of the model to automatically cluster the movies.</p><p>The impact on the aggregate diversity. As a further validation of the effectiveness of our latent features for the recommendation task, we report the performance of the ItemKNN approach in terms of aggregate diversity. The relation between accuracy and aggregate diversity has gained the attention of researchers in recent years and is generally characterized as a trade-off <ref type="bibr" target="#b1">[1]</ref>. Quite surprisingly, however, the increase in accuracy shown in Tables <ref type="table" target="#tab_3">2 and 3</ref> does not seem to come at the cost of concentrating recommendations on a subset of items, e.g. the most popular ones, according to the results reported in Tables <ref type="table" target="#tab_8">5 and 6</ref>. For the sake of conciseness, we report only the best approaches for each kind of features: the best approach for latent features computed on DBpedia, the best approach for latent features computed on Wikidata, and the values for the strategy involving categories, since it provides the highest scores among features extracted through direct relations. We do not report the values for the WL12 and WC2 algorithms, since their contribution is rather low in this analysis too. For both the movie and book domains, the best approaches found on DBpedia for the accuracy metrics, i.e. "DB2vec SG 200 4" and "DB2vec SG 200 8" respectively, also perform best in terms of aggregate diversity. For the LibraryThing dataset, the Skip-Gram model computed with random walks on Wikidata and vector size limited to 200 is very close to the highest scores obtained on DBpedia, while for Movielens it is the CBOW model with depth 4 that achieves the best performance on Wikidata. 
The contribution of the categories, despite being lower than the best approach on each dataset, is quite significant for the diversity measures too.</p><p>Comparison with state-of-the-art collaborative approaches. It is a common belief in the RS field that pure content-based approaches are not enough to provide accurate suggestions and that recommendation engines must also rely on collaborative information. This motivated us to explicitly compare the best approaches built on our graph embedding technique with the well-known state-of-the-art collaborative recommendation algorithms listed below, implemented with the publicly available software library MyMediaLite<ref type="foot" target="#foot_3">10</ref>.</p><p>• Biased Matrix Factorization (MF) <ref type="bibr" target="#b17">[17]</ref>, recognized as the state of the art for rating prediction, is a matrix factorization model that minimizes RMSE using stochastic gradient descent and both user and item biases.</p><p>• PopRank is a baseline based on popularity. It provides the same recommendations to all users, according to overall item popularity. 
Recent studies have pointed out that recommending the most popular items can already yield high performance <ref type="bibr" target="#b5">[5]</ref>.</p><p>• Bayesian Personalized Ranking (BPRMF) combines a matrix factorization approach with the Bayesian Personalized Ranking optimization criterion <ref type="bibr" target="#b34">[34]</ref>.</p><p>• SLIM <ref type="bibr" target="#b24">[24]</ref> is a Sparse LInear Method for top-N recommendation that learns a sparse coefficient matrix for the items in the system, relying only on the users' purchase/rating profiles and solving an L1-norm and L2-norm regularized optimization problem.</p><p>• Soft Margin Ranking Matrix Factorization (RankMF) is a matrix factorization approach for ranking, whose loss function is ordinal regression.</p><p>Tables <ref type="table" target="#tab_10">7 and 8</ref> provide the comparison results for Movielens and LibraryThing respectively. Table <ref type="table" target="#tab_9">7</ref> shows that the matrix factorization techniques and the SLIM algorithm outperform our approach based only on content information. This outcome was somewhat expected, especially considering that, in our experimental setting, the Movielens dataset retains only users with at least fifty ratings. The community-based information is unquestionably predominant for this dataset, whose density would be unlikely in most real-world scenarios. The behaviour, however, is completely reversed on the LibraryThing dataset, whose results are collected in Table <ref type="table" target="#tab_10">8</ref>. In this case, the mere use of our feature vectors (i.e. the "DB2vec SG 200 8" strategy) outperforms the competitor algorithms, which are generally regarded as the most effective collaborative algorithms for both rating and ranking prediction. </p></div>
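The ItemKNN scoring rule used throughout these experiments, r*(u, i) = Σ cosineSim(j, i) · r_{u,j} over the neighbourhood of item i, can be sketched as follows (a minimal illustration assuming plain Python lists as feature vectors; the helper names are ours, not RankSys's API):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def itemknn_score(item_vecs, user_ratings, i, k=5):
    """Score item i for a user: sum of cosineSim(j, i) * r_{u,j} over the
    k items rated by the user that are most similar to i (the neighbourhood)."""
    sims = sorted(
        ((cosine_sim(item_vecs[j], item_vecs[i]), r)
         for j, r in user_ratings.items() if j != i),
        reverse=True,
    )
    return sum(s * r for s, r in sims[:k])

# Toy example: three items with 2-dimensional latent vectors.
vecs = {"a": [1.0, 0.0], "b": [1.0, 1.0], "c": [0.0, 1.0]}
ratings = {"b": 4.0, "c": 2.0}             # the user's past ratings
score = itemknn_score(vecs, ratings, "a")  # ≈ 2.83 (= 4/sqrt(2))
```

Ranking the candidate items by this score and keeping the top 10 yields the @10 recommendation lists evaluated above.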
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">CONCLUSION</head><p>In this paper, we have presented an approach for learning low-dimensional real-valued representations of entities in RDF graphs in a completely domain-independent way. We first convert the RDF graphs into a set of sequences using graph walks, which are then used to train neural language models. In the experimental section we have shown that a content-based RS relying on item similarities computed from our latent feature vectors outperforms the same kind of system grounded on explicit features (e.g. types, categories, ...) or on features generated with kernels, in terms of both accuracy and aggregate diversity. Our purely content-based system has been further compared to state-of-the-art collaborative approaches for rating prediction and item ranking, giving outstanding results on a dataset with a realistic sparsity degree.</p><p>As future work, we intend to introduce the feature vectors derived from the graph embedding technique within a hybrid recommender system, in order to get a fair comparison against state-of-the-art hybrid approaches such as SPRank <ref type="bibr" target="#b10">[10]</ref> and BPR-SSLIM <ref type="bibr" target="#b25">[25]</ref>. In this perspective we could take advantage of Factorization Machines <ref type="bibr" target="#b33">[33]</ref>, a general predictor working with any feature vector, which combines Support Vector Machines and factorization models. We also aim to extend the evaluation to additional metrics, such as individual diversity <ref type="bibr" target="#b44">[44,</ref><ref type="bibr" target="#b9">9]</ref>, and to provide deeper insight into cold-start users, i.e. 
users with few interactions with the system, for whom inference is difficult and who generally benefit most from content "infusion".</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Two-dimensional PCA projection of the 200-dimensional Skip-gram vectors of movies in Table 4.</figDesc><graphic coords="6,316.81,232.68,241.01,212.16" type="bitmap" /></figure>
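The graph-walk step summarized in the conclusion — converting the RDF graph into token sequences before training a neural language model — can be sketched as follows (an illustrative toy graph and helper of our own, not the authors' code). Each walk alternates entity and predicate tokens; the resulting sequences play the role of sentences for a Skip-Gram or CBOW model:

```python
import random

def random_walks(graph, start, depth, n_walks, seed=0):
    """Generate `n_walks` random walks of at most `depth` hops from `start`.
    `graph` maps an entity URI to a list of (predicate, object) edges; each
    walk is a token sequence alternating entities and predicates."""
    rng = random.Random(seed)
    walks = []
    for _ in range(n_walks):
        walk, node = [start], start
        for _ in range(depth):
            edges = graph.get(node)
            if not edges:
                break                    # dead end: stop this walk early
            pred, obj = rng.choice(edges)
            walk += [pred, obj]
            node = obj
        walks.append(walk)
    return walks

# Hypothetical toy graph around one movie entity.
graph = {
    "db:Pulp_Fiction": [("dbo:director", "db:Quentin_Tarantino")],
    "db:Quentin_Tarantino": [("dbo:birthPlace", "db:Knoxville")],
}
walks = random_walks(graph, "db:Pulp_Fiction", depth=2, n_walks=3)
# Such token sequences would then be fed to a Skip-Gram/CBOW model
# (e.g. gensim's Word2Vec) to obtain the latent entity vectors.
```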
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Statistics about the two datasets.</figDesc><table><row><cell></cell><cell cols="2">Movielens LibraryThing</cell></row><row><cell>Number of users</cell><cell>4,186</cell><cell>7,149</cell></row><row><cell>Number of items</cell><cell>3,196</cell><cell>4,541</cell></row><row><cell>Number of ratings</cell><cell>822,597</cell><cell>352,123</cell></row><row><cell>Data sparsity</cell><cell>93.85%</cell><cell>98.90%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 :</head><label>2</label><figDesc>Results of the ItemKNN approach on Movielens dataset. P and R stand respectively for precision and recall, SG indicates the Skip-Gram model, and DB and WD represent DBpedia and Wikidata respectively.</figDesc><table><row><cell>Strategy</cell><cell>P@10</cell><cell>R@10</cell><cell>nDCG@10</cell></row><row><cell>DB2vec CBOW 200 4</cell><cell>0.05127</cell><cell>0.11777</cell><cell>0.21244</cell></row><row><cell>DB2vec CBOW 500 4</cell><cell>0.05065</cell><cell>0.11557</cell><cell>0.21039</cell></row><row><cell>DB2vec SG 200 4</cell><cell>0.05719</cell><cell>0.12763</cell><cell>0.2205</cell></row><row><cell>DB2vec SG 500 4</cell><cell>0.05811</cell><cell>0.12864</cell><cell>0.22116</cell></row><row><cell>DB2vec CBOW 200 8</cell><cell>0.00836</cell><cell>0.02334</cell><cell>0.14147</cell></row><row><cell>DB2vec CBOW 500 8</cell><cell>0.00813</cell><cell>0.02335</cell><cell>0.14257</cell></row><row><cell>DB2vec SG 200 8</cell><cell cols="2">0.07681 0.17769</cell><cell>0.25234</cell></row><row><cell>DB2vec SG 500 8</cell><cell>0.07446</cell><cell>0.1743</cell><cell>0.24809</cell></row><row><cell>WD2vec CBOW 200 4</cell><cell>0.00537</cell><cell>0.01084</cell><cell>0.13524</cell></row><row><cell>WD2vec CBOW 500 4</cell><cell>0.00444</cell><cell>0.00984</cell><cell>0.13428</cell></row><row><cell>WD2vec SG 200 4</cell><cell>0.06416</cell><cell>0.14565</cell><cell>0.23309</cell></row><row><cell>WD2vec SG 500 4</cell><cell>0.06031</cell><cell>0.14194</cell><cell>0.22752</cell></row><row><cell>types</cell><cell>0.01854</cell><cell>0.04535</cell><cell>0.16064</cell></row><row><cell>categories</cell><cell>0.06662</cell><cell>0.15258</cell><cell>0.23733</cell></row><row><cell>rel in</cell><cell>0.04577</cell><cell>0.10219</cell><cell>0.20196</cell></row><row><cell>rel out</cell><cell>0.04118</cell><cell>0.09055</cell><cell>0.19449</cell></row><row><cell>rel in 
&amp; out</cell><cell>0.04531</cell><cell>0.10165</cell><cell>0.20115</cell></row><row><cell>rel-vals in</cell><cell>0.06176</cell><cell>0.14101</cell><cell>0.22574</cell></row><row><cell>rel-vals out</cell><cell>0.06163</cell><cell>0.13763</cell><cell>0.22826</cell></row><row><cell>rel-vals in &amp; out</cell><cell>0.06087</cell><cell>0.13662</cell><cell>0.22615</cell></row><row><cell>WC2</cell><cell>0.00159</cell><cell>0.00306</cell><cell>0.12858</cell></row><row><cell>WL12</cell><cell>0.00155</cell><cell>0.00389</cell><cell>0.12937</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 :</head><label>3</label><figDesc>Results of the ItemKNN approach on the LibraryThing dataset.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 4 :</head><label>4</label><figDesc>Examples of K-nearest-neighbor sets on Movielens, for the Skip-Gram model with depth of 4 and size vectors 200, on DBpedia.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_7"><head>Table 5 :</head><label>5</label><figDesc>Methods comparison in terms of aggregate diversity on the Movielens dataset. Coverage stands for catalog coverage and Gini for Gini coefficient.</figDesc><table><row><cell>Strategy</cell><cell>Coverage</cell><cell>Gini</cell></row><row><cell>DB2vec SG 200 8</cell><cell>0.76386</cell><cell>0.29534</cell></row><row><cell>WD2vec SG 200 4</cell><cell>0.73037</cell><cell>0.28525</cell></row><row><cell>categories</cell><cell>0.7246</cell><cell>0.26409</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_8"><head>Table 6 :</head><label>6</label><figDesc>Methods comparison in terms of aggregate diversity on the LibraryThing dataset.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_9"><head>Table 7 :</head><label>7</label><figDesc>Comparison with state of the art collaborative approaches on Movielens.</figDesc><table><row><cell>Strategy</cell><cell>P@10</cell><cell>R@10</cell><cell>nDCG@10</cell></row><row><cell>DB2vec SG 200 4</cell><cell>0.0568</cell><cell>0.0312</cell><cell>0.3183</cell></row><row><cell>MF</cell><cell>0.2522</cell><cell>0.1307</cell><cell>0.4427</cell></row><row><cell>PopRank</cell><cell>0.1673</cell><cell>0.0787</cell><cell>0.3910</cell></row><row><cell>BPRMF</cell><cell>0.2522</cell><cell>0.1307</cell><cell>0.4427</cell></row><row><cell>SLIM</cell><cell cols="2">0.2632 0.1474</cell><cell>0.4599</cell></row><row><cell>RankMF</cell><cell>0.1417</cell><cell>0.0704</cell><cell>0.3736</cell></row><row><cell>Strategy</cell><cell>P@10</cell><cell>R@10</cell><cell>nDCG@10</cell></row><row><cell cols="3">DB2vec SG 200 8 0.0768 0.1777</cell><cell>0.2523</cell></row><row><cell>MF</cell><cell>0.0173</cell><cell>0.0209</cell><cell>0.1423</cell></row><row><cell>PopRank</cell><cell>0.0397</cell><cell>0.0452</cell><cell>0.1598</cell></row><row><cell>BPRMF</cell><cell>0.0449</cell><cell>0.0751</cell><cell>0.1858</cell></row><row><cell>SLIM</cell><cell>0.0543</cell><cell>0.0988</cell><cell>0.2317</cell></row><row><cell>RankMF</cell><cell>0.0369</cell><cell>0.0459</cell><cell>0.1714</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_10"><head>Table 8 :</head><label>8</label><figDesc>Comparison with state of the art collaborative approaches on LibraryThing.</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_0">http://tools.wmflabs.org/wikidata-exports/rdf/index.php?content=dump_download.php&amp;dump=20160328</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_1">http://dws.informatik.uni-mannheim.de/en/research/rapidminer-lod-extension</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_2">http://ranksys.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_3">http://www.mymedialite.net</note>
		</body>
		<back>
			<div type="references">

				<listBibl>


<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Improving aggregate recommendation diversity using ranking-based techniques</title>
		<author>
			<persName><forename type="first">Gediminas</forename><surname>Adomavicius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Youngok</forename><surname>Kwon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. on Knowl. and Data Eng</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="896" to="911" />
			<date type="published" when="2012-05">May 2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A comparative study of heterogeneous item recommendations in social systems</title>
		<author>
			<persName><forename type="first">Alejandro</forename><surname>Bellogín</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Iván</forename><surname>Cantador</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pablo</forename><surname>Castells</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Inf. Sci</title>
		<imprint>
			<biblScope unit="volume">221</biblScope>
			<biblScope unit="page" from="142" to="169" />
			<date type="published" when="2013-02">February 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Linked Data -The Story So Far</title>
		<author>
			<persName><forename type="first">Christian</forename><surname>Bizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tom</forename><surname>Heath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tim</forename><surname>Berners-Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International journal on semantic web and information systems</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="1" to="22" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Hybrid recommender systems: Survey and experiments</title>
		<author>
			<persName><forename type="first">Robin</forename><surname>Burke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">User Modeling and User-Adapted Interaction</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="331" to="370" />
			<date type="published" when="2002-11">November 2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Performance of recommender algorithms on top-n recommendation tasks</title>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Cremonesi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yehuda</forename><surname>Koren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Roberto</forename><surname>Turrin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys &apos;10</title>
				<meeting>the Fourth ACM Conference on Recommender Systems, RecSys &apos;10<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="39" to="46" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Semantics-aware content-based recommender systems</title>
		<author>
			<persName><forename type="first">Marco</forename><surname>De Gemmis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pasquale</forename><surname>Lops</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cataldo</forename><surname>Musto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fedelucio</forename><surname>Narducci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Semeraro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Recommender Systems Handbook</title>
				<editor>
			<persName><forename type="first">Francesco</forename><surname>Ricci</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Lior</forename><surname>Rokach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Bracha</forename><surname>Shapira</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="119" to="159" />
		</imprint>
	</monogr>
	<note>2nd edition</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Substructure counting graph kernels for machine learning from rdf data</title>
		<author>
			<persName><forename type="first">Gerben</forename><forename type="middle">Klaas Dirk</forename><surname>De Vries</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steven</forename><surname>De Rooij</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Web Semantics: Science, Services and Agents on the World Wide Web</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="71" to="84" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Indexing by latent semantic analysis</title>
		<author>
			<persName><forename type="first">Scott</forename><surname>Deerwester</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Susan</forename><forename type="middle">T</forename><surname>Dumais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">George</forename><forename type="middle">W</forename><surname>Furnas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><forename type="middle">K</forename><surname>Landauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Richard</forename><surname>Harshman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="391" to="407" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">An analysis of users&apos; propensity toward diversity in recommendations</title>
		<author>
			<persName><forename type="first">T</forename><surname>Di Noia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">C</forename><surname>Ostuni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rosati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Tomeo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">Di</forename><surname>Sciascio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ACM RecSys &apos;14, RecSys &apos;14</title>
				<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="285" to="288" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Sprank: Semantic path-based ranking for top-n recommendations using linked open data</title>
		<author>
			<persName><forename type="first">T</forename><surname>Di Noia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">C</forename><surname>Ostuni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Tomeo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">Di</forename><surname>Sciascio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Intelligent Systems and Technology</title>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>TIST</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Exploiting the web of data in model-based recommender systems</title>
		<author>
			<persName><forename type="first">Tommaso</forename><surname>Di Noia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Roberto</forename><surname>Mirizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vito</forename><forename type="middle">Claudio</forename><surname>Ostuni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Davide</forename><surname>Romito</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys &apos;12</title>
				<meeting>the Sixth ACM Conference on Recommender Systems, RecSys &apos;12<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="253" to="256" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Linked open data to support content-based recommender systems</title>
		<author>
			<persName><forename type="first">Tommaso</forename><surname>Di Noia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Roberto</forename><surname>Mirizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vito</forename><forename type="middle">Claudio</forename><surname>Ostuni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Davide</forename><surname>Romito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Markus</forename><surname>Zanker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 8th International Conference on Semantic Systems, I-SEMANTICS &apos;12</title>
				<meeting>the 8th International Conference on Semantic Systems, I-SEMANTICS &apos;12<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Beyond accuracy: evaluating recommender systems by coverage and serendipity</title>
		<author>
			<persName><forename type="first">Mouzhi</forename><surname>Ge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Carla</forename><surname>Delgado-Battenfeld</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dietmar</forename><surname>Jannach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">RecSys &apos;10</title>
				<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page">257</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">E-commerce in your inbox: Product recommendations at scale</title>
		<author>
			<persName><forename type="first">Mihajlo</forename><surname>Grbovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vladan</forename><surname>Radosavljevic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nemanja</forename><surname>Djuric</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Narayan</forename><surname>Bhamidipati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jaikit</forename><surname>Savla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Varun</forename><surname>Bhagwan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Doug</forename><surname>Sharp</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;15</title>
				<meeting>the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;15<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1809" to="1818" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Mathematical Structures of Language</title>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">S</forename><surname>Harris</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1968">1968</date>
			<publisher>Wiley</publisher>
			<pubPlace>New York, NY, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Linked movie data base</title>
		<author>
			<persName><forename type="first">Oktie</forename><surname>Hassanzadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mariano</forename><forename type="middle">M</forename><surname>Consens</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Workshop on Linked Data on the Web</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Matrix factorization techniques for recommender systems</title>
		<author>
			<persName><forename type="first">Yehuda</forename><surname>Koren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robert</forename><surname>Bell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chris</forename><surname>Volinsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="30" to="37" />
			<date type="published" when="2009-08">August 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">DBpedia -A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia</title>
		<author>
			<persName><forename type="first">Jens</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robert</forename><surname>Isele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Max</forename><surname>Jakob</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anja</forename><surname>Jentzsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dimitris</forename><surname>Kontokostas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pablo</forename><forename type="middle">N</forename><surname>Mendes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sebastian</forename><surname>Hellmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mohamed</forename><surname>Morsey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><surname>van Kleef</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sören</forename><surname>Auer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christian</forename><surname>Bizer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semantic Web Journal</title>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Efficient estimation of word representations in vector space</title>
		<author>
			<persName><forename type="first">Tomas</forename><surname>Mikolov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kai</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Greg</forename><surname>Corrado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeffrey</forename><surname>Dean</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1301.3781</idno>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Distributed representations of words and phrases and their compositionality</title>
		<author>
			<persName><forename type="first">Tomas</forename><surname>Mikolov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ilya</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kai</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Greg</forename><forename type="middle">S</forename><surname>Corrado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeff</forename><surname>Dean</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="3111" to="3119" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Word embedding techniques for content-based recommender systems: an empirical evaluation</title>
		<author>
			<persName><forename type="first">Cataldo</forename><surname>Musto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Semeraro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>De Gemmis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pasquale</forename><surname>Lops</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">RecSys Posters, ser. CEUR Workshop Proceedings</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Castells</surname></persName>
		</editor>
		<imprint>
			<biblScope unit="volume">1441</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Random Indexing and Negative User Preferences for Enhancing Content-Based Recommender Systems</title>
		<author>
			<persName><forename type="first">Cataldo</forename><surname>Musto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Semeraro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pasquale</forename><surname>Lops</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>De Gemmis</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>Springer</publisher>
			<biblScope unit="page" from="270" to="281" />
			<pubPlace>Berlin Heidelberg; Berlin, Heidelberg</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">Contextual eVSM: A Content-Based Context-Aware Recommendation Framework Based on Distributional Semantics</title>
		<author>
			<persName><forename type="first">Cataldo</forename><surname>Musto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Semeraro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pasquale</forename><surname>Lops</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>De Gemmis</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>Springer</publisher>
			<biblScope unit="page" from="125" to="136" />
			<pubPlace>Berlin Heidelberg; Berlin, Heidelberg</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">SLIM: sparse linear methods for top-n recommender systems</title>
		<author>
			<persName><forename type="first">Xia</forename><surname>Ning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">George</forename><surname>Karypis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">11th IEEE International Conference on Data Mining, ICDM 2011</title>
				<meeting><address><addrLine>Vancouver, BC, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2011">December 11-14, 2011</date>
			<biblScope unit="page" from="497" to="506" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Sparse linear methods with side information for top-n recommendations</title>
		<author>
			<persName><forename type="first">Xia</forename><surname>Ning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">George</forename><surname>Karypis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys &apos;12</title>
				<meeting>the Sixth ACM Conference on Recommender Systems, RecSys &apos;12<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="155" to="162" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Building a relatedness graph from linked open data: A case study in the IT domain</title>
		<author>
			<persName><forename type="first">Tommaso</forename><surname>Di Noia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vito</forename><forename type="middle">Claudio</forename><surname>Ostuni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jessica</forename><surname>Rosati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Tomeo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eugenio</forename><forename type="middle">Di</forename><surname>Sciascio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Roberto</forename><surname>Mirizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Claudio</forename><surname>Bartolini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="page" from="354" to="366" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Top-n recommendations from implicit feedback leveraging linked open data</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">C</forename><surname>Ostuni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Di Noia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">Di</forename><surname>Sciascio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mirizzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ACM RecSys &apos;13</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="85" to="92" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<author>
			<persName><forename type="first">Makbule</forename><forename type="middle">Gulcin</forename><surname>Ozsoy</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1601.01356</idno>
		<title level="m">From word embeddings to item recommendation</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Knowledge graph refinement: A survey of approaches and evaluation methods</title>
		<author>
			<persName><forename type="first">Heiko</forename><surname>Paulheim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Semantic Web</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1" to="20" />
		</imprint>
	</monogr>
	<note type="report_type">Preprint</note>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Unsupervised generation of data mining features from linked open data</title>
		<author>
			<persName><forename type="first">Heiko</forename><surname>Paulheim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Johannes</forename><surname>Fürnkranz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd international conference on web intelligence, mining and semantics</title>
				<meeting>the 2nd international conference on web intelligence, mining and semantics</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page">31</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<title level="m" type="main">Data mining with background knowledge from the web</title>
		<author>
			<persName><forename type="first">Heiko</forename><surname>Paulheim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Petar</forename><surname>Ristoski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Evgeny</forename><surname>Mitichkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christian</forename><surname>Bizer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>RapidMiner World</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Deepwalk: Online learning of social representations</title>
		<author>
			<persName><forename type="first">Bryan</forename><surname>Perozzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rami</forename><surname>Al-Rfou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steven</forename><surname>Skiena</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining</title>
				<meeting>the 20th ACM SIGKDD international conference on Knowledge discovery and data mining</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="701" to="710" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Factorization machines with libFM</title>
		<author>
			<persName><forename type="first">Steffen</forename><surname>Rendle</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Intell. Syst. Technol</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">22</biblScope>
			<date type="published" when="2012-05">May 2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">BPR: Bayesian personalized ranking from implicit feedback</title>
		<author>
			<persName><forename type="first">Steffen</forename><surname>Rendle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christoph</forename><surname>Freudenthaler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zeno</forename><surname>Gantner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lars</forename><surname>Schmidt-Thieme</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI &apos;09</title>
				<meeting>the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI &apos;09<address><addrLine>Arlington, Virginia, United States</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="452" to="461" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Mining the web of linked data with RapidMiner</title>
		<author>
			<persName><forename type="first">Petar</forename><surname>Ristoski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christian</forename><surname>Bizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Heiko</forename><surname>Paulheim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Web Semantics: Science, Services and Agents on the World Wide Web</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="142" to="151" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">A hybrid multi-strategy recommender system using linked open data</title>
		<author>
			<persName><forename type="first">Petar</forename><surname>Ristoski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eneldo</forename><surname>Loza Mencía</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Heiko</forename><surname>Paulheim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Semantic Web Evaluation Challenge</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="150" to="156" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">RDF2Vec: RDF graph embeddings for data mining</title>
		<author>
			<persName><forename type="first">Petar</forename><surname>Ristoski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Heiko</forename><surname>Paulheim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Semantic Web Conference (To Appear)</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">An introduction to random indexing</title>
		<author>
			<persName><forename type="first">Magnus</forename><surname>Sahlgren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Methods and Applications of Semantic Indexing Workshop at the 7th International Conference on Terminology and Knowledge Engineering</title>
				<meeting><address><addrLine>TKE</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Adoption of the linked data best practices in different topical domains</title>
		<author>
			<persName><forename type="first">Max</forename><surname>Schmachtenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christian</forename><surname>Bizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Heiko</forename><surname>Paulheim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Semantic Web Conference</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="245" to="260" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Knowledge infusion into content-based recommender systems</title>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Semeraro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pasquale</forename><surname>Lops</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pierpaolo</forename><surname>Basile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>De Gemmis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third ACM Conference on Recommender Systems, RecSys &apos;09</title>
				<meeting>the Third ACM Conference on Recommender Systems, RecSys &apos;09<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="301" to="304" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Evaluation of recommendations: Rating-prediction and ranking</title>
		<author>
			<persName><forename type="first">Harald</forename><surname>Steck</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th ACM Conference on Recommender Systems, RecSys &apos;13</title>
				<meeting>the 7th ACM Conference on Recommender Systems, RecSys &apos;13<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="213" to="220" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Wikidata: a free collaborative knowledgebase</title>
		<author>
			<persName><forename type="first">Denny</forename><surname>Vrandečić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Markus</forename><surname>Krötzsch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="78" to="85" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">Improving maximum margin matrix factorization</title>
		<author>
			<persName><forename type="first">Markus</forename><surname>Weimer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexandros</forename><surname>Karatzoglou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alex</forename><surname>Smola</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Mach. Learn</title>
		<imprint>
			<biblScope unit="volume">72</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="263" to="276" />
			<date type="published" when="2008-09">September 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Avoiding monotony: Improving the diversity of recommendation lists</title>
		<author>
			<persName><forename type="first">Mi</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Neil</forename><surname>Hurley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2008 ACM Conference on Recommender Systems, RecSys &apos;08</title>
				<meeting>the 2008 ACM Conference on Recommender Systems, RecSys &apos;08<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="123" to="130" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
