<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">PReFacTO: Preference Relations Based Factor Model with Topic Awareness and Offset</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Priyanka</forename><surname>Choudhary</surname></persName>
						</author>
						<author role="corresp">
							<persName><forename type="first">Maunendra</forename><surname>Sankar Desarkar</surname></persName>
							<email>maunendra@iith.ac.in</email>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="institution">Indian Institute of Technology Hyderabad</orgName>
								<address>
									<region>Telangana</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="institution">Indian Institute of Technology Hyderabad</orgName>
								<address>
									<region>Telangana</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<address>
									<settlement>Ann Arbor</settlement>
									<region>Michigan</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">PReFacTO: Preference Relations Based Factor Model with Topic Awareness and Offset</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">E9B71F52D81EC87D5C3329A99E66EA83</idno>
					<idno type="DOI">10.1145/nnnnnnn.nnnnnnn</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T06:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Recommendation System</term>
					<term>Pairwise Preferences</term>
					<term>Topic Modeling</term>
					<term>Latent Factor Models</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Recommendation systems create personalized lists of items that might interest the user by analyzing the user's history of past purchases and/or consumption. For rating based systems, most of the traditional methods for recommendation focus on the absolute ratings provided by the users to the items. In this paper, we extend the traditional Matrix Factorization approach for recommendation and propose pairwise relation based factor modeling. While modeling the items in the system, the use of pairwise preferences allows information to flow between the items through the preference relations as an additional signal. Item feedback is often available in the form of reviews apart from the rating information. The reviews contain textual information that can be very helpful for representing an item's latent feature vector appropriately. We perform topic modeling of the item reviews and use the topic vectors to guide the joint factor modeling of the users and items and learn their final representations. The proposed method shows promising results in comparison to the state-of-the-art methods in our experiments.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Users have access to a large variety of items available online for purchase, subscription, consumption, etc. Such a huge number of options often results in choice overload, where it becomes difficult to browse through and/or select the items of interest. Recommendation Systems (RS) make the task of selecting appropriate items easier by finding and suggesting a subset of the items that might be of interest to the user. Many traditional recommendation techniques use only ratings to assess the users' tastes and behavior. Given a small subset of rating data containing the ratings given to items by users, Recommendation Systems try to predict the ratings of the items that are not yet rated/viewed by the user. Based on these predicted rating values, ranked lists of items that can be of interest to the user are recommended. Latent factor models <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b12">13]</ref> have been extensively used in the past for this purpose.</p><p>There are many recommendation systems where the user feedback comes in the form of ratings. The majority of such systems use these absolute ratings entered by the users for modeling the users and items through latent factor modeling, and use those models for recommendation. Latent factor models like Matrix Factorization <ref type="bibr" target="#b7">[8]</ref> are commonly used to represent the users and the items in latent feature spaces. These representations are helpful for explaining the observed ratings and predicting the unknown ratings. These latent factors, e.g., in the case of movie recommendations, can be genres, actors, directors, or something uninterpretable. These factors try to explain the aspects behind the liking of items by a particular user. The items are modeled in a similar fashion, as representations over the hidden factors they possess. The rating is then predicted from the degree to which an item possesses these factors and the affinity of the user towards them.</p><p>User feedback in the form of reviews along with the ratings is also available in many online systems like Amazon, IMDb, TripAdvisor, etc. The review information can be very useful as it contains the users' perceptions about the items. There can be systems where item descriptions are also available. There are algorithms <ref type="bibr" target="#b13">[14]</ref> that consider the item description as additional input for latent factor modeling. However, descriptions are often entered by the item producers or sellers. On the other hand, the feedback in the form of reviews given by users generally conveys the factors that are liked or disliked in an item. Including this textual information can be helpful for better modeling, interpretation, and visualization of the hidden dimensions <ref type="bibr" target="#b10">[11]</ref>.</p><p>An alternate form of recommendation system can be based on pairwise preferences of the user among the items <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b6">7]</ref>. Given a pair of items (i, j), user u may give feedback regarding which of the two items he prefers over the other. Such feedback is referred to as pairwise preference or pairwise preference based feedback. A survey in <ref type="bibr" target="#b5">[6]</ref> shows that users do prefer comparisons through pairwise scores rather than providing absolute ratings. Although there is no available dataset where pairwise preferences were directly captured, many approaches in the literature have induced pairwise preferences from absolute ratings <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b9">10]</ref> and used those relations for developing recommendation algorithms.</p><p>The existing methods from the literature that are based on pairwise preferences do not consider item content information in the modeling process. In this paper, we propose approaches that combine pairwise feedback with the additional review data available. We propose an algorithm that uses latent factor modeling with pairwise preferences to discover the latent dimensions, map users and items to a joint latent feature vector space, and produce recommendations for the end user. The latent feature vectors for the items are derived through topic modeling. In this approach, we construct a proxy document for each item by considering the reviews it has received. If available, the descriptions of the items can also be used to populate this document. We perform probabilistic topic modeling on these documents representing items using Latent Dirichlet Allocation (LDA). The resulting topics are then used to guide the factorization process for learning the latent representations of the users. We propose two different approaches for this purpose: one in which the LDA topic vectors for the items are directly used as the latent representations of the items, and another where these LDA representations are used to initialize the item vectors in the factorization process. For the second approach, an item-latent offset is introduced alongside the LDA representations. The offset is learned throughout the factorization process and tries to capture the deviations from the LDA representations of the items. We call our approach Preference Relations Based Factor Model with Topic Awareness and Offset, or PReFacTO in short. Experimental evaluation and analysis performed on a benchmark dataset help to understand the strengths of the pairwise methods and their ability to generate effective recommendations. We summarize the contributions of our work below:</p><p>• We use relative preferences over item pairs in a factor modeling framework for modeling users and items. The models are then used for generating recommendations.</p><p>• We incorporate item reviews in the factorization process.</p><p>• Detailed experimental evaluation is performed on a benchmark dataset. Analysis of the results is performed to understand the advantages and shortcomings of the methods.</p><p>The rest of the paper is organized as follows. After discussing the related work, we present the proposed methods in Section 3. We briefly discuss pairwise preferences and the handling of textual reviews, and then provide a detailed description of the methods proposed in this paper. In Section 4, we define the four evaluation metrics used to compare the performance of the proposed methods with the baseline methods, and provide a detailed discussion and analysis of the results obtained. The conclusions and future work are summarized in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">RELATED WORK</head><p>Traditional recommendation systems have extensively used latent factor based modeling techniques. Much research has employed Matrix Factorization (MF) <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref> techniques for predicting the unknown ratings of items not yet seen by the user and providing recommendations by selecting the top-N items. This basic MF model corresponds to the pointwise method used in this paper, and acts as a baseline against which the proposed methods are compared. The works of <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b13">14]</ref> include content based modeling to interpret textual labels for the rating dimensions, which helps explain how users assess the products. Similar work has been done in <ref type="bibr" target="#b4">[5]</ref>, which tries to improve the rating predictions and provide feature discovery. Different users give different weights to these features. For example, a user who loves horror movies and hates the romantic genre will give a higher weight to "Annabelle" than to "The Notebook", in contrast to a romantic movie lover. This weighting affects the overall scores and explains the rating difference.</p><p>Recently, researchers have shown keen interest in pairwise preference based recommendation techniques. In <ref type="bibr" target="#b1">[2]</ref>, a suitable graphical interface is provided for the user to mark his choices over pairs of items. In <ref type="bibr" target="#b6">[7]</ref>, the pairwise preferences are induced from the available rating values of the items. Both implicit <ref type="bibr" target="#b11">[12]</ref> and explicit feedback can be modeled using pairwise preference based latent factor models. In <ref type="bibr" target="#b2">[3]</ref>, the authors motivate the use of preference relations or relative feedback for recommendation systems. Pairwise preferences have been used in <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b9">10]</ref> in matrix factorization and nearest-neighbor based latent factor modeling settings to generate recommendations. However, none of these works take user reviews into account.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">METHODOLOGY</head><p>In this section, we present our proposed recommendation methods that work with pairwise preference information from the user. Apart from the pairwise feedback, we also consider the reviews provided by the users for different items. The methods represent each user and item in a shared latent feature space through a factor modeling approach. Before discussing our proposed methods in detail, we briefly describe the concept of pairwise preferences and the way in which we handle the textual reviews available for the items.</p><p>Pairwise Preferences: The ratings in recommendation systems are generally absolute in nature, often in the range of 1-5 or 1-10. However, users behave differently while rating items. The same rating value entered by two different users might correspond to two different satisfaction levels. Moreover, the absolute rating entered by a user for an item may change over time if the same user is asked to rate the same item again. Motivated by such observations, pairwise preferences were introduced for modeling users and items in recommendation systems <ref type="bibr" target="#b2">[3]</ref>. Pairwise relation based approaches try to capture the relative preference between items. Such feedback, if directly obtained, removes the user bias that may correspond to the leniency or strictness of users while assigning absolute ratings.</p><p>Although pairwise preference relations can address some of the problems with absolute ratings mentioned above, there is no publicly available dataset with directly obtained pairwise preferences. In the absence of such data, we consider datasets with absolute ratings as user feedback, and induce relative preferences from those absolute ratings. We then use those relative pairwise preferences as input to the proposed methods.</p><p>Handling Textual Reviews -Topic modeling: If item descriptions are available, the system can identify more about the attributes or aspects that the items possess. This information can be useful in making recommendations. In fact, content-based recommendation algorithms try to exploit these item attributes for generating recommendations.</p><p>Several systems allow users to enter reviews for the items. Item reviews are very useful in making view/purchase decisions, as they often contain reasons or explanations regarding why the item was liked or disliked by the user who wrote the review. The reviews often describe additional details about the items, for example the aspects that they possess. An example review for a product from Amazon is given below.</p><p>It seems like just about everybody has made a Christmas Carol movie. This one is the best by far! It seems more realistic than all the others and the time period seems to be perfect. The acting is also far better than any of the others I've seen; my opinion.</p><p>We hypothesize that even if item descriptions are not available, the reviews still reveal a great deal of information about the different attributes (specified or latent) that the items might contain <ref type="foot" target="#foot_0">1</ref>. These attributes can then be useful in modeling the items, and can further aid in generating effective recommendations.</p><p>Based on this assumption, we use the reviews given by the users to different items as an additional source of information for learning the item representations. We use the Latent Dirichlet Allocation (LDA) topic modeling technique to learn the topic representations of the items. LDA is an unsupervised method which, given a document collection, identifies a fixed number of latent topics (say k, an input to the algorithm) present in the collection. Each document can then be represented as a k-dimensional vector in that topic space. LDA works on documents, so we need to represent each item as a document. For that purpose, we combine all the reviews given to an item to create a proxy document for that item. If d_ui represents a review given by a user u for an item i, then we denote the proxy document d_i for the item i as the concatenation of all the reviews given by the set of users U for i. Then, we have a document collection d corresponding to the set of items I as</p><formula xml:id="formula_0">d = \bigcup_{i=1}^{|I|} d_i</formula></div>
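The proxy-document construction described above can be sketched in a few lines of Python. This is a minimal illustration; the review triples and item names are hypothetical, and the resulting documents would then be fed to any LDA implementation to obtain the topic vectors q_i.

```python
from collections import defaultdict

def build_proxy_documents(reviews):
    """Concatenate all reviews of each item into one proxy document d_i.

    `reviews` is an iterable of (user_id, item_id, review_text) triples;
    returns a dict mapping item_id -> proxy document string.
    """
    docs = defaultdict(list)
    for user_id, item_id, text in reviews:
        docs[item_id].append(text)
    return {item: " ".join(parts) for item, parts in docs.items()}

# Hypothetical review data for illustration only.
reviews = [
    ("u1", "movie_a", "best christmas carol movie"),
    ("u2", "movie_a", "realistic and well acted"),
    ("u1", "movie_b", "dull plot"),
]
proxy = build_proxy_documents(reviews)
```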
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Preference Relation based Factor modeling (Pairwise)</head><p>For a pair of items (i, j), users can express their relative preference if such a provision exists. This would allow the user to indicate, for the item pair, which item he prefers more. The user can also indicate that he favors both items equally. Such pairwise preferences can be captured through an interface where users mark their preferences over a small subset of the data. However, as mentioned earlier, we are not aware of any existing system that allows users to enter pairwise preferences directly. In the absence of that, if rating data is available, pairwise preferences can be obtained as r_uij = r_ui − r_uj, where r_ui indicates the absolute rating given by user u to item i. If the sign of r_uij is positive, we may consider that item i is preferred over item j by user u; if the sign is negative, we may consider that j is preferred over i. If the value of r_uij is zero, then both items are equally preferable to u. A similar approach was adopted in <ref type="bibr" target="#b3">[4]</ref> for inducing pairwise preferences from absolute ratings.</p><p>We take a different approach for converting the absolute ratings to relative preferences. If the ratings given by user u to the two items i and j are r_ui and r_uj respectively, then we define the (actual or ground truth) preference strength for the triplet (u, i, j) as</p><formula xml:id="formula_1">r_{uij} = \frac{\exp(r_{ui})}{\exp(r_{ui}) + \exp(r_{uj})} = \frac{1}{1 + \exp(-(r_{ui} - r_{uj}))}<label>(1)</label></formula><p>The value of r_uij thus obtained captures the strength of the preference relation as well. As the difference between r_ui and r_uj grows, the strength of the relation becomes stronger, as shown in Figure <ref type="figure" target="#fig_0">1</ref>.</p><p>We model the prediction of the unobserved r_uij's as:</p><formula xml:id="formula_2">\hat{r}_{uij} = \frac{\exp(p_u^T(q_i - q_j) + (b_i - b_j))}{1 + \exp(p_u^T(q_i - q_j) + (b_i - b_j))} = \frac{1}{1 + \exp(-(p_u^T(q_i - q_j) + (b_i - b_j)))}<label>(2)</label></formula><p>where the rating matrix R of user-item interactions gives access to the values r_ui, the rating given to item i by user u. The quantity b_i represents the bias of item i. The method models each user u by a vector p_u, which captures the user's affinity towards the latent factors. Similarly, each item i is represented by a feature vector q_i, which captures the degree to which the item possesses these factors.</p><p>Given the training set, the mean-squared error (MSE) on the training data (with suitable regularization) is used as the objective function. The optimization is performed using Stochastic Gradient Descent (SGD), and the algorithm outputs optimized values of the parameters Θ = {B, P, Q}, where B represents the bias values (b_i) for all items i ∈ I, P represents the user latent feature vectors (p_u) for all users u ∈ U, and Q represents the item latent feature vectors (q_i) for all items i ∈ I. The objective function is defined as:</p><formula xml:id="formula_3">\min_{\Theta} \sum_{(u,i,j) \in T} \left( r_{uij} - \frac{s_{ij}}{1 + s_{ij}} \right)^2 + \lambda \|p_u\|^2 + \frac{\lambda}{2}\|q_i\|^2 + \frac{\lambda}{2}\|q_j\|^2 + \frac{\lambda}{2} b_i^2 + \frac{\lambda}{2} b_j^2<label>(3)</label></formula><p>where</p><formula xml:id="formula_4">s_{ij} = \exp(p_u^T(q_i - q_j) + (b_i - b_j))</formula><p>T represents the training set and λ is the regularization parameter. The update rules for optimizing the above objective function are given below:</p><formula xml:id="formula_5">p_u \leftarrow p_u + \alpha \left( \frac{2 e_{uij} s_{ij} (q_i - q_j)}{(1 + s_{ij})^2} - 2\lambda p_u \right)<label>(4)</label></formula><formula xml:id="formula_6">q_i \leftarrow q_i + \alpha \left( \frac{2 e_{uij} s_{ij} p_u}{(1 + s_{ij})^2} - \lambda q_i \right)<label>(5)</label></formula><formula xml:id="formula_7">q_j \leftarrow q_j - \alpha \left( \frac{2 e_{uij} s_{ij} p_u}{(1 + s_{ij})^2} + \lambda q_j \right) \quad (6) \qquad b_i \leftarrow b_i + \alpha \left( \frac{2 e_{uij} s_{ij}}{(1 + s_{ij})^2} - \lambda b_i \right) \quad (7) \qquad b_j \leftarrow b_j - \alpha \left( \frac{2 e_{uij} s_{ij}}{(1 + s_{ij})^2} + \lambda b_j \right)<label>(8)</label></formula><p>where e_uij = r_uij − s_ij/(1 + s_ij) and α is the learning rate. After obtaining the model parameters through stochastic gradient descent, we predict the personalized utility of item i for user u as:</p><formula xml:id="formula_8">\rho_{ui} = b_i + p_u^T q_i<label>(9)</label></formula><p>The top-N items according to this predicted personalized utility are recommended to the user.</p></div>
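For readers who prefer code, one SGD step of the Pairwise model (Equations 1-8) can be sketched as follows. This is an illustrative sketch; the learning rate and the toy vectors below are not the values used in the experiments, and the gradient factor uses the identity s/(1+s)² = σ(x)·(1−σ(x)) where σ is the sigmoid and x the logit.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pairwise_sgd_step(p_u, q_i, q_j, b_i, b_j, r_ui, r_uj,
                      alpha=0.05, lam=4e-5):
    """One stochastic gradient step for a training triplet (u, i, j)."""
    r_uij = sigmoid(r_ui - r_uj)                     # target, Eq. (1)
    pred = sigmoid(p_u @ (q_i - q_j) + (b_i - b_j))  # s_ij/(1+s_ij), Eq. (2)
    e = r_uij - pred                                 # e_uij
    g = 2.0 * e * pred * (1.0 - pred)                # 2*e*s_ij/(1+s_ij)^2
    p_u_new = p_u + alpha * (g * (q_i - q_j) - 2.0 * lam * p_u)  # Eq. (4)
    q_i_new = q_i + alpha * (g * p_u - lam * q_i)                # Eq. (5)
    q_j_new = q_j - alpha * (g * p_u + lam * q_j)                # Eq. (6)
    b_i_new = b_i + alpha * (g - lam * b_i)                      # Eq. (7)
    b_j_new = b_j - alpha * (g + lam * b_j)                      # Eq. (8)
    return p_u_new, q_i_new, q_j_new, b_i_new, b_j_new
```

A single step moves the predicted preference towards the target, as one can verify on a toy triplet.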
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Preference Relation based Factor modeling with Topics (Pairwise+Topic)</head><p>As motivated in the previous section, the review comments about items can be useful in identifying the aspects that the items possess. Moreover, they also help in understanding the reasons behind the liking or disliking of an item by the user. Hence, we extend the previous method to incorporate the reviews about the items. The proxy documents for the items are passed through a Latent Dirichlet Allocation (LDA) framework to identify the latent topics present in the documents. LDA is a probabilistic topic modeling technique that discovers latent topics in documents. It represents each document d_i by a k-dimensional topic distribution θ_i drawn from a Dirichlet distribution. The k-th dimension of the vector indicates the probability with which the k-th topic is being discussed in the document. Each topic is associated with a word distribution ϕ_k, which gives the probabilities of word-topic associations.</p><p>We pass the collection of documents D = ∪_{i∈I} d_i to LDA. As output, we get the topic vector q_i corresponding to each document d_i ∈ D. For each item i, the latent representation is now fixed at q_i, and these values of q_i are fed to the factor modeling technique used in Section 3.1. The objective function for this method is given by Equation <ref type="formula">10</ref>. The optimization variables (parameters) now become Θ = {B, P}. The solution to this objective function is obtained through Stochastic Gradient Descent.</p><formula xml:id="formula_9">\min_{\Theta} \sum_{(u,i,j) \in T} \left( r_{uij} - \frac{s_{ij}}{1 + s_{ij}} \right)^2 + \lambda \|p_u\|^2 + \frac{\lambda}{2} b_i^2 + \frac{\lambda}{2} b_j^2<label>(10)</label></formula><p>Here q_i remains fixed throughout the learning process. Hence, we do not have a regularization term for q_i in the objective function.</p><p>The update rules for p_u, b_i and b_j remain the same as in Equations <ref type="formula" target="#formula_5">4</ref>, 7 and 8 respectively. Personalized utility scores of the items are computed using Equation <ref type="formula" target="#formula_8">9</ref> and recommendations are generated.</p></div>
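The recommendation step shared by these methods (Equation 9) amounts to scoring every item for a user and ranking; a minimal sketch, with made-up vectors and biases purely for illustration:

```python
import numpy as np

def recommend_top_n(p_u, item_vecs, item_bias, n=2):
    """Rank items by personalized utility rho_ui = b_i + p_u . q_i (Eq. 9)."""
    scores = {i: item_bias[i] + float(p_u @ q) for i, q in item_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Hypothetical learned parameters for three items.
p_u = np.array([1.0, 0.0])
item_vecs = {"a": np.array([0.9, 0.1]),
             "b": np.array([0.2, 0.8]),
             "c": np.array([0.5, 0.5])}
item_bias = {"a": 0.0, "b": 0.0, "c": 0.6}
```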
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Pairwise Relation based Factor modeling with Topics and Offset (PReFacTO)</head><p>In the method described in Section 3.2, topic modeling provides the seed information for the item latent vector representations obtained from the reviews, and these representations are fixed throughout the learning process. In our next method, we allow the item representations to deviate from their LDA topic vectors. If ϵ_i is the deviation of item i's representation from its topic vector q_i, then the pairwise ratings can be modeled as:</p><formula xml:id="formula_10">\hat{r}_{uij} = \frac{\exp(p_u^T((q_i + \epsilon_i) - (q_j + \epsilon_j)) + (b_i - b_j))}{1 + \exp(p_u^T((q_i + \epsilon_i) - (q_j + \epsilon_j)) + (b_i - b_j))} = \frac{1}{1 + \exp(-(p_u^T((q_i + \epsilon_i) - (q_j + \epsilon_j)) + (b_i - b_j)))}<label>(11)</label></formula><p>The parameters for this model are Θ = {B, P, E}. As earlier, B and P are the collections of item biases and user vectors. E is the collection of deviations or offsets of the items from their LDA topic vectors. The objective function for this model can be written as:</p><formula xml:id="formula_11">\min_{\Theta} \sum_{(u,i,j) \in T} \left( r_{uij} - \frac{s_{ij}}{1 + s_{ij}} \right)^2 + \lambda \|p_u\|^2 + \frac{\lambda}{2} b_i^2 + \frac{\lambda}{2} b_j^2 + \frac{\lambda}{2}\|\epsilon_i\|^2 + \frac{\lambda}{2}\|\epsilon_j\|^2<label>(12)</label></formula><p>where</p><formula xml:id="formula_12">s_{ij} = \exp(p_u^T((q_i + \epsilon_i) - (q_j + \epsilon_j)) + (b_i - b_j))</formula><p>and r_uij is as defined in Equation <ref type="formula" target="#formula_1">1</ref>.</p><p>The model parameters are learned using Stochastic Gradient Descent. The update rules are given below:</p><formula xml:id="formula_13">p_u \leftarrow p_u + \alpha \left( \frac{2 e_{uij} s_{ij} ((q_i + \epsilon_i) - (q_j + \epsilon_j))}{(1 + s_{ij})^2} - 2\lambda p_u \right)<label>(13)</label></formula><formula xml:id="formula_15">\epsilon_i \leftarrow \epsilon_i + \alpha \left( \frac{2 e_{uij} s_{ij} p_u}{(1 + s_{ij})^2} - \lambda \epsilon_i \right) \quad (14) \qquad \epsilon_j \leftarrow \epsilon_j - \alpha \left( \frac{2 e_{uij} s_{ij} p_u}{(1 + s_{ij})^2} + \lambda \epsilon_j \right)<label>(15)</label></formula><p>where e_uij = r_uij − s_ij/(1 + s_ij). The update rules for the bias terms remain the same as specified in Equations 7 and 8. After the optimized values of the parameters are obtained, the personalized utility of item i for user u is computed using the following equation, and top-N recommendations are made for each user.</p><formula xml:id="formula_17">\rho_{ui} = b_i + p_u^T(q_i + \epsilon_i)<label>(16)</label></formula></div>
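An illustrative sketch of one PReFacTO update (Equations 11-15): the effective item vector is q_i + ε_i, where q_i stays frozen at its LDA value and only the offset ε_i is learned. The toy values below are assumptions for illustration, not experimental settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def prefacto_step(p_u, eps_i, eps_j, b_i, b_j, q_i, q_j, r_ui, r_uj,
                  alpha=0.05, lam=4e-5):
    """One SGD step; q_i and q_j are the fixed LDA topic vectors."""
    diff = (q_i + eps_i) - (q_j + eps_j)
    pred = sigmoid(p_u @ diff + (b_i - b_j))              # Eq. (11)
    g = 2.0 * (sigmoid(r_ui - r_uj) - pred) * pred * (1.0 - pred)
    p_u_new = p_u + alpha * (g * diff - 2.0 * lam * p_u)  # Eq. (13)
    eps_i_new = eps_i + alpha * (g * p_u - lam * eps_i)   # Eq. (14)
    eps_j_new = eps_j - alpha * (g * p_u + lam * eps_j)   # Eq. (15)
    b_i_new = b_i + alpha * (g - lam * b_i)               # Eq. (7)
    b_j_new = b_j - alpha * (g + lam * b_j)               # Eq. (8)
    return p_u_new, eps_i_new, eps_j_new, b_i_new, b_j_new
```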
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">PERFORMANCE EVALUATION</head><head n="4.1">Dataset</head><p>We use the Amazon product review dataset 2 for our experiments. This dataset contains reviews and ratings given to different items by different users. We consider items from the Movies and TV category. All items in this category were released between 1999 and 2013. We divided this timeline into three blocks, each spanning 5 years: (A) 1999-2003, (B) 2004-2008, and (C) 2009-2013. From each block, we removed the items which have fewer than 10 reviews in that block and the users who have given fewer than 5 reviews in that block. After this filtering to remove non-prolific users and items, we have 3,513 items, 85,375 users, 725,198 ratings and 725,176 reviews in our dataset. We use 70% of this data for training and the remaining 30% for testing.</p></div>
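The per-block filtering can be sketched as a single counting pass over the review pairs. This is a minimal sketch under the assumption of a one-pass filter; the paper does not state whether the filtering is applied iteratively until the thresholds stabilize.

```python
from collections import Counter

def filter_block(records, min_item_reviews=10, min_user_reviews=5):
    """Drop non-prolific items and users from one 5-year block.

    `records` is a list of (user_id, item_id) review pairs; items with
    fewer than `min_item_reviews` reviews and users with fewer than
    `min_user_reviews` reviews in the block are removed (one pass).
    """
    item_counts = Counter(item for _, item in records)
    user_counts = Counter(user for user, _ in records)
    return [(u, i) for u, i in records
            if item_counts[i] >= min_item_reviews
            and user_counts[u] >= min_user_reviews]
```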
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Baseline Methods</head><p>We compare our preference relation based models to the following baselines: (a) Absolute Rating based Factor modeling (Pointwise): Analogous to the standard latent factor model <ref type="bibr" target="#b7">[8]</ref>, we convert the absolute rating values using the sigmoid function. Predictions are then made using the following objective function:</p><formula xml:id="formula_18">\min_{\Theta} \sum_{(u,i) \in T} \left( \rho_{ui} - \frac{s_i}{1 + s_i} \right)^2 + \lambda \|p_u\|^2 + \frac{\lambda}{2}\|q_i\|^2 + \frac{\lambda}{2} b_i^2</formula><p>where</p><formula xml:id="formula_19">\rho_{ui} = \frac{\exp(r_{ui})}{1 + \exp(r_{ui})}, \qquad s_i = \exp(p_u^T q_i + b_i)</formula><p>(b) Absolute Rating based Factor modeling with Topics (Pointwise+Topics): We combine the topic modeling technique with latent factor modeling. The latent vector representations of the items are drawn from the reviews (by passing the reviews of the items as input to LDA) and fed to the latent factor model. Here the item representations remain fixed, and the user latent space is learned using Stochastic Gradient Descent. (c) Absolute Rating based Factor modeling with Topics and Offset (Pointwise+Topics+Offset): Along with the factor and topic modeling, we introduce an item latent vector offset which captures the deviations from the item feature vector representations drawn from LDA. The objective function to model the system and learn the user latent and item offset representations can be written as:</p><formula xml:id="formula_20">\min_{\Theta} \sum_{(u,i) \in T} \left( \rho_{ui} - \frac{s_i}{1 + s_i} \right)^2 + \lambda \|p_u\|^2 + \frac{\lambda}{2} b_i^2 + \frac{\lambda}{2}\|\epsilon_i\|^2</formula><p>where</p><formula xml:id="formula_21">s_i = \exp(p_u^T(q_i + \epsilon_i) + b_i)</formula><p>2 http://jmcauley.ucsd.edu/data/amazon/</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Evaluation</head><p>For evaluation of the models presented in Section 3, we compare those three algorithms with the baseline methods mentioned in Section 4.2. We use Precision@k, Recall@k, IRecall and URecall as the evaluation metrics, with k = 100. The IRecall and URecall metrics are described below.</p><p>IRecall: The IRecall of an item is computed using the following equation:</p><formula xml:id="formula_22">IRecall_i = \frac{|Rec(i) \cap Rated(i)|}{|Rated(i)|},<label>(17)</label></formula><p>where Rec(i) denotes the set of users to whom item i is recommended, and Rated(i) denotes the set of users who have i in their test set. Thus this metric measures the algorithm's ability to recommend an item to the users who have actually rated it. The IRecall of an algorithm is defined as the average of the item-wise IRecall values over the set of concerned items.</p><p>URecall: The URecall of a user is computed as:</p><formula xml:id="formula_23">URecall_u = \frac{|Rec(u) \cap Rated(u)|}{|Rated(u)|},<label>(18)</label></formula><p>where Rec(u) denotes the set of items that have been recommended to user u, and Rated(u) denotes the set of items present in the test set of user u.</p><p>For experimentation and evaluation purposes, we divide the items into bins based on the number of reviews. For each block, we maintain the count of reviews each item received during that time span. We define two bins for each block as follows: Bin-0 consists of the items having a review count of less than 40, and Bin-1 contains the items having a review count greater than or equal to 40. We consider Bin-0 as a collection of sparse items, and the items from Bin-1 as dense items. For each bin, we compute the average of the IRecall values of all the items in the corresponding bin. Analogous to the items, we also divide the users into bins based on the number of reviews given by each user, and take the average of the URecall values of all the users falling into the corresponding bin. We then compare the IRecall and URecall values of the different methods proposed in this paper with the baseline approaches.</p></div>
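The two recall variants translate directly into set operations; a minimal sketch with hypothetical user and item identifiers:

```python
def irecall(rec_users, rated_users):
    """IRecall_i = |Rec(i) ∩ Rated(i)| / |Rated(i)|   (Eq. 17)."""
    return len(rec_users & rated_users) / len(rated_users)

def urecall(rec_items, rated_items):
    """URecall_u = |Rec(u) ∩ Rated(u)| / |Rated(u)|   (Eq. 18)."""
    return len(rec_items & rated_items) / len(rated_items)

# Item i was recommended to u1, u2, u3; users u2 and u4 rated it in the test set.
score = irecall({"u1", "u2", "u3"}, {"u2", "u4"})
```

Per-algorithm IRecall (or URecall) is then just the mean of these values over the items (or users) in the bin of interest.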
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">Experimental Analysis And Discussion</head><p>Setting the parameters for the proposed method: The model hyperparameters λ (regularization parameter) and k (number of topics) need to be determined in order to produce best models for recommendation. Experiments were conducted with different values of λ and k on a small subset of the data. From the experiments, the combination of λ = 4E − 05 and k = 10 were found to be the best values for the parameters. Hence, we select these two values of the hyperparameters for further experimentation. Performance of the algorithm on the test set for different values of λ (keeping k fixed at 10) and different values of k (keeping λ fixed at 4E − 05 are shown in Table <ref type="table" target="#tab_0">1</ref> and Table <ref type="table" target="#tab_1">2</ref> respectively.</p><p>Comparison with other methods and discussion: For each method, we run the experiments for the three blocks, and compute the average value of each metric over these three blocks. These average values are reported in Table <ref type="table" target="#tab_2">3</ref>. It can be seen from the experimental results that pairwise methods and in particular, PreFacTO  gives the best results compared to other algorithms for the complete dataset. Although the PreFacTO and pointwise are at par based on their performance but the PreFacTO slightly surpasses the pointwise in terms of overall precision and recall values. If we compare the IRecall values for the sparse items, the Pairwise method outperforms all other approaches. The IRecall values for dense items shows that PreFacTO performs very well for dense items. The IRecall values for the sparse and dense items for different blocks are compared in Figure <ref type="figure">2</ref> and Figure <ref type="figure" target="#fig_1">3</ref> respectively. There are four groups of columns in both the figures. 
The first three represent the three blocks, and the last one represents the average over the three blocks. The superior performance of Pairwise and the poor performance of PreFacTO on the sparse items might be due to the sparseness of the reviews: for items with very few reviews, the LDA representation is unreliable, and learning deviations on top of these LDA vectors provides no additional benefit; on the contrary, it might lead to overfitting. Pairwise, on the other hand, models the system through rating information alone. The preference relations provide additional information about an item through its comparisons with other items, there is no overfitting in the process, and modeling the sparse items works well. If we look at the URecall values for the sparse users, however, PreFacTO actually performs well.</p><p>In the case of dense items, PreFacTO outperforms every other method, including Pointwise. Along with the pairwise preference based learning, the item vector representation derived from the rich textual information in the reviews, together with the deviations learned on top of these item vectors, helps produce better predictions along with reasoning as to why an item will be likeable or dislikeable to a user.</p><p>In any real recommender system there are both sparse and dense items, and the ratio of sparse to dense items varies with the exact system or domain. In this study, we have explored a few algorithms that consider pairwise feedback instead of absolute ratings. Among the proposed methods, Pairwise works well for the sparse items and PreFacTO works well for the dense items. The experiments show the power of preference relation based feedback for recommendation. However, they do not establish the superiority of any single algorithm across the entire range of data (both sparse and dense zones). 
Nonetheless, we believe it might be possible to design algorithms that work well across the entire range of data. An interesting research direction would be to develop hybrid methods that combine Pairwise and PreFacTO, fusing the recommendations from the sparse and dense zones to generate the final recommendations.</p></div>
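The hybrid strategy suggested above can be sketched as follows. This is a minimal illustration, not a method evaluated in this paper: `fuse_recommendations`, its score-function arguments, and the threshold of 40 reviews are hypothetical names chosen to match the sparse/dense split used in our experiments.

```python
# Hypothetical hybrid: route each candidate item to the Pairwise score
# (sparse items) or the PreFacTO score (dense items) by review count,
# then rank the fused candidate list by the chosen scores.
def fuse_recommendations(items, review_count, pairwise_score, prefacto_score,
                         threshold=40):
    scored = []
    for item in items:
        if review_count[item] < threshold:
            scored.append((pairwise_score(item), item))   # sparse zone
        else:
            scored.append((prefacto_score(item), item))   # dense zone
    # Highest score first.
    return [item for _, item in sorted(scored, reverse=True)]
```

A more refined variant could calibrate the two score scales before fusing, since the methods are trained with different objectives.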
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">CONCLUSIONS AND FUTURE WORK</head><p>We have presented the PReFacTO approach in this paper, which aligns the latent factor modeling between the users and the item pairs with the hidden topics in the reviews of the items. The pairwise relations add significant information for the sparse items and provide better modeling of the user-item interaction, and the hidden dimensions of the items are effectively drawn from the reviews. The topic modeling based latent factors of the items, together with the pairwise relations between these items (where the latent feature space of the items drawn from LDA is allowed to change through an offset during the learning process), provide significant improvement over the methods considered in isolation. Our algorithm runs effectively on a large dataset and is comparable with the pointwise approach; in fact, PreFacTO gives marginal improvements over the pointwise methods. It is also shown that the Pairwise method works well for the sparse items while PreFacTO provides better performance in the case of dense items.</p><p>Building on this observation, it might be possible to develop hybrid methods that consider both Pairwise and PreFacTO and fuse the recommendations they generate from the sparse and dense zones to come up with the final recommendations. It might also be possible to develop parameterized algorithms that automatically switch between Pairwise (no consideration of reviews) and PreFacTO (considering the reviews) depending on the availability of data for the item under consideration during modeling.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Graph showing pairwise relation between the items as a function of sigmoid.</figDesc><graphic coords="3,341.98,83.69,192.20,144.15" type="bitmap" /></figure>
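The model summarized above can be roughly illustrated as follows. This is a sketch under stated assumptions, not the exact training objective: `item_vector` and `preference_prob` are hypothetical helpers, with item vectors formed as LDA topic proportions plus a learned offset, and the pairwise preference scored by a sigmoid over the difference of predicted scores (as in Figure 1).

```python
import math

def item_vector(theta, offset):
    """Item latent vector: LDA topic proportions plus a learned offset."""
    return [t + o for t, o in zip(theta, offset)]

def preference_prob(user_vec, theta_i, offset_i, theta_j, offset_j):
    """P(user prefers item i over item j) via a sigmoid of the score gap."""
    qi = item_vector(theta_i, offset_i)
    qj = item_vector(theta_j, offset_j)
    diff = sum(u * (a - b) for u, a, b in zip(user_vec, qi, qj))
    return 1.0 / (1.0 + math.exp(-diff))
```

With identical item representations the probability is 0.5, i.e., no preference; a positive offset on a topic the user weights highly pushes the probability above 0.5.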
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Comparison of IRecall values of different algorithms with review count of the items greater than or equal to 40.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Values of the evaluation metrics for different values of λ. The number of topics was fixed at 10.</figDesc><table><row><cell>λ</cell><cell>Precision</cell><cell>Recall</cell><cell>IRecall(reviews&lt;40)</cell><cell>IRecall(reviews&gt;40)</cell><cell>URecall(reviews&lt;40)</cell><cell>URecall(reviews&gt;40)</cell></row><row><cell>4.00E-02</cell><cell>0.0076</cell><cell>0.1045</cell><cell>0.0117</cell><cell>0.0673</cell><cell>0.1074</cell><cell>0.0863</cell></row><row><cell>4.00E-03</cell><cell>0.0122</cell><cell>0.1451</cell><cell>0.0013</cell><cell>0.0793</cell><cell>0.1448</cell><cell>0.1456</cell></row><row><cell>4.00E-04</cell><cell>0.0120</cell><cell>0.1398</cell><cell>0.0012</cell><cell>0.0789</cell><cell>0.1390</cell><cell>0.1435</cell></row><row><cell>4.00E-05</cell><cell>0.0125</cell><cell>0.1457</cell><cell>0.0012</cell><cell>0.0792</cell><cell>0.1448</cell><cell>0.1504</cell></row><row><cell>4.00E-06</cell><cell>0.0124</cell><cell>0.1448</cell><cell>0.0011</cell><cell>0.0797</cell><cell>0.1438</cell><cell>0.1495</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Values of the evaluation metrics for different values of k: the number of topics. The value of λ was fixed at 4.00E − 05.</figDesc><table><row><cell>No. of Topics</cell><cell>Precision</cell><cell>Recall</cell><cell>IRecall(reviews&lt;40)</cell><cell>IRecall(reviews&gt;40)</cell><cell>URecall(reviews&lt;40)</cell><cell>URecall(reviews&gt;40)</cell></row><row><cell>5</cell><cell>0.0107</cell><cell>0.1229</cell><cell>0.0008</cell><cell>0.0781</cell><cell>0.1221</cell><cell>0.1302</cell></row><row><cell>10</cell><cell>0.0125</cell><cell>0.1457</cell><cell>0.0012</cell><cell>0.0792</cell><cell>0.1448</cell><cell>0.1504</cell></row><row><cell>15</cell><cell>0.0108</cell><cell>0.1246</cell><cell>0.0011</cell><cell>0.0778</cell><cell>0.1238</cell><cell>0.1324</cell></row><row><cell>20</cell><cell>0.0108</cell><cell>0.1244</cell><cell>0.0008</cell><cell>0.0784</cell><cell>0.1233</cell><cell>0.1331</cell></row></table></figure>
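The hyperparameter sweep reported in Tables 1 and 2 can be sketched as a simple grid search. `grid_search` and its `evaluate` callback are illustrative stand-ins; in our setup, `evaluate` would train the model on the small subset for a given (λ, k) pair and return a validation metric.

```python
# Illustrative grid search over the regularization weight lambda and
# the topic count k, keeping the configuration with the best score.
def grid_search(evaluate, lambdas, topic_counts):
    best = None
    for lam in lambdas:
        for k in topic_counts:
            score = evaluate(lam, k)  # e.g. validation recall
            if best is None or score > best[0]:
                best = (score, lam, k)
    return best  # (best score, best lambda, best k)
```

With the grids used in the tables (λ ∈ {4E−02, …, 4E−06}, k ∈ {5, 10, 15, 20}), this procedure selects λ = 4E−05 and k = 10.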
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3 :</head><label>3</label><figDesc>Comparing performances of different algorithms. The best values for each metric across the algorithms are marked in bold.</figDesc><table><row><cell>Method</cell><cell>Precision</cell><cell>Recall</cell><cell>IRecall(reviews&lt;40)</cell><cell>IRecall(reviews&gt;40)</cell><cell>URecall(reviews&lt;40)</cell><cell>URecall(reviews&gt;40)</cell></row><row><cell>Pointwise</cell><cell>0.0106</cell><cell>0.1267</cell><cell>0.0141</cell><cell>0.0635</cell><cell>0.1271</cell><cell>0.1210</cell></row><row><cell>Pointwise+Topics</cell><cell>0.0048</cell><cell>0.0555</cell><cell>0.0256</cell><cell>0.0551</cell><cell>0.0551</cell><cell>0.0568</cell></row><row><cell>Pointwise+Topics+Offset</cell><cell>0.0055</cell><cell>0.0650</cell><cell>0.0252</cell><cell>0.0514</cell><cell>0.0651</cell><cell>0.0632</cell></row><row><cell>Pairwise</cell><cell>0.0021</cell><cell>0.0254</cell><cell>0.0420</cell><cell>0.0312</cell><cell>0.0255</cell><cell>0.0252</cell></row><row><cell>Pairwise+Topics</cell><cell>0.0038</cell><cell>0.0485</cell><cell>0.0378</cell><cell>0.0399</cell><cell>0.0491</cell><cell>0.0448</cell></row><row><cell>PreFacTO</cell><cell>0.0125</cell><cell>0.1457</cell><cell>0.0012</cell><cell>0.0792</cell><cell>0.1448</cell><cell>0.1504</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Comparison of IRecall values of different algorithms taking into consideration the items having review count less than 40.</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The dataset used in our experiments did not have the item descriptions, but contained the reviews.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Regression-based latent factor models</title>
		<author>
			<persName><forename type="first">Deepak</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bee-Chung</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining</title>
				<meeting>the 15th ACM SIGKDD international conference on Knowledge discovery and data mining</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="19" to="28" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Pairwise preferences elicitation and exploitation for conversational collaborative filtering</title>
		<author>
			<persName><forename type="first">Laura</forename><surname>Blédaité</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Ricci</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th ACM Conference on Hypertext &amp; Social Media</title>
				<meeting>the 26th ACM Conference on Hypertext &amp; Social Media</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="231" to="236" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Aggregating Preference Graphs for Collaborative Rating Prediction</title>
		<author>
			<persName><forename type="first">Maunendra</forename><surname>Sankar Desarkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sudeshna</forename><surname>Sarkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pabitra</forename><surname>Mitra</surname></persName>
		</author>
		<idno type="DOI">10.1145/1864708.1864716</idno>
		<ptr target="https://doi.org/10.1145/1864708.1864716" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourth ACM Conference on Recommender Systems (RecSys &apos;10)</title>
				<meeting>the Fourth ACM Conference on Recommender Systems (RecSys &apos;10)<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="21" to="28" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Preference relation based matrix factorization for recommender systems</title>
		<author>
			<persName><forename type="first">Maunendra</forename><surname>Sankar Desarkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Roopam</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sudeshna</forename><surname>Sarkar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on user modeling, adaptation, and personalization</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="63" to="75" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Beyond the stars: improving rating predictions using review text content</title>
		<author>
			<persName><forename type="first">Gayatree</forename><surname>Ganu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Noemie</forename><surname>Elhadad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amélie</forename><surname>Marian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">WebDB</title>
				<imprint>
			<publisher>Citeseer</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Comparisons instead of ratings: Towards more stable preferences</title>
		<author>
			<persName><forename type="first">Nicolas</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Armelle</forename><surname>Brun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anne</forename><surname>Boyer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology-Volume 01</title>
				<meeting>the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology-Volume 01</meeting>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="451" to="456" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Pairwise preferences based matrix factorization and nearest neighbor recommendation techniques</title>
		<author>
			<persName><forename type="first">Saikishore</forename><surname>Kalloori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Ricci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marko</forename><surname>Tkalcic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th ACM Conference on Recommender Systems</title>
				<meeting>the 10th ACM Conference on Recommender Systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="143" to="146" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Matrix factorization techniques for recommender systems</title>
		<author>
			<persName><forename type="first">Yehuda</forename><surname>Koren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robert</forename><surname>Bell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chris</forename><surname>Volinsky</surname></persName>
		</author>
		<idno type="DOI">10.1109/MC.2009.263</idno>
		<ptr target="https://doi.org/10.1109/MC.2009.263" />
	</analytic>
	<monogr>
		<title level="j">Computer</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="30" to="37" />
			<date type="published" when="2009-08">Aug. 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Advances in collaborative filtering</title>
		<author>
			<persName><forename type="first">Yehuda</forename><surname>Koren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robert</forename><surname>Bell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Recommender systems handbook</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="77" to="118" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Preference Relation-based Markov Random Fields for Recommender Systems</title>
		<author>
			<persName><forename type="first">Shaowu</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gang</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Truyen</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yuan</forename><surname>Jiang</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v45/Liu15.html" />
	</analytic>
	<monogr>
		<title level="m">Asian Conference on Machine Learning (Proceedings of Machine Learning Research)</title>
				<editor>
			<persName><forename type="first">Geoffrey</forename><surname>Holmes</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Tie-Yan</forename><surname>Liu</surname></persName>
		</editor>
		<meeting><address><addrLine>PMLR, Hong Kong</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="157" to="172" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Hidden factors and hidden topics: understanding rating dimensions with review text</title>
		<author>
			<persName><forename type="first">Julian</forename><surname>Mcauley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jure</forename><surname>Leskovec</surname></persName>
		</author>
		<idno type="DOI">10.1145/2507157.2507163</idno>
		<ptr target="https://doi.org/10.1145/2507157.2507163" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th ACM conference on Recommender systems</title>
				<meeting>the 7th ACM conference on Recommender systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013-10">Oct. 2013</date>
			<biblScope unit="page" from="165" to="172" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Using implicit preference relations to improve recommender systems</title>
		<author>
			<persName><forename type="first">Ladislav</forename><surname>Peska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Vojtas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal on Data Semantics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="15" to="30" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Bayesian probabilistic matrix factorization using Markov chain Monte Carlo</title>
		<author>
			<persName><forename type="first">Ruslan</forename><surname>Salakhutdinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andriy</forename><surname>Mnih</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th international conference on Machine learning</title>
				<meeting>the 25th international conference on Machine learning</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="880" to="887" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Collaborative topic modeling for recommending scientific articles</title>
		<author>
			<persName><forename type="first">Chong</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><forename type="middle">M</forename><surname>Blei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining</title>
				<meeting>the 17th ACM SIGKDD international conference on Knowledge discovery and data mining</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="448" to="456" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
