<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Deep Neural Architecture for News Recommendation</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vaibhav</forename><surname>Kumar</surname></persName>
							<email>vaibhav.kumar@research.iiit.ac.in</email>
							<affiliation key="aff0">
								<orgName type="institution">International Institute of Information Technology Hyderabad</orgName>
								<address>
									<postCode>500032</postCode>
									<settlement>Gachibowli</settlement>
									<region>Telangana</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dhruv</forename><surname>Khattar</surname></persName>
							<email>dhruv.khattar@research.iiit.ac.in</email>
							<affiliation key="aff0">
								<orgName type="institution">International Institute of Information Technology Hyderabad</orgName>
								<address>
									<postCode>500032</postCode>
									<settlement>Gachibowli</settlement>
									<region>Telangana</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Shashank</forename><surname>Gupta</surname></persName>
							<email>shashank.gupta@research.iiit.ac.in</email>
							<affiliation key="aff0">
								<orgName type="institution">International Institute of Information Technology Hyderabad</orgName>
								<address>
									<postCode>500032</postCode>
									<settlement>Gachibowli</settlement>
									<region>Telangana</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Manish</forename><surname>Gupta</surname></persName>
							<email>manish.gupta@iiit.ac.in</email>
							<affiliation key="aff0">
								<orgName type="institution">International Institute of Information Technology Hyderabad</orgName>
								<address>
									<postCode>500032</postCode>
									<settlement>Gachibowli</settlement>
									<region>Telangana</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vasudeva</forename><surname>Varma</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">International Institute of Information Technology Hyderabad</orgName>
								<address>
									<postCode>500032</postCode>
									<settlement>Gachibowli</settlement>
									<region>Telangana</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Deep Neural Architecture for News Recommendation</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">FFCB1E42C8F6861328C715BABE627BC9</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T20:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>Deep Neural Networks, News Recommendation</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Deep neural networks have yielded immense success in speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks for recommender systems has received relatively little attention. Moreover, different recommendation scenarios have their own issues, which call for different approaches to recommendation. In news recommendation specifically, a major problem is that of varying user interests. In this work, we use deep neural networks with attention to tackle the problem of news recommendation.</p><p>The key factor in user-item based collaborative filtering is to identify the interaction between user and item features. Matrix factorization is one of the most common approaches for identifying this interaction. It maps both the users and the items into a joint latent factor space such that user-item interactions can be modeled as inner products in that space. Some recent work has used deep neural networks with the aim of learning an arbitrary function, instead of the inner product, for capturing the user-item interaction. However, directly adapting this for the news domain is not very suitable. This is because of the dynamic nature of news readership, where the interests of the users keep changing with time. Hence, it becomes challenging for recommendation systems to model both user preferences and the interests which keep changing over time. We present a deep neural model in which non-linear mappings of user and item features are learnt first. For learning a non-linear mapping for the users we use an attention-based recurrent layer in combination with fully connected layers. For learning the mappings for the items we use only fully connected layers. We then use a ranking-based objective function to learn the parameters of the network. We also use the content of the news articles as features for our model. Extensive experiments on a real-world dataset show a significant improvement of our proposed model over the state-of-the-art by 4.7% (Hit Ratio@10). Along with this, we also show the effectiveness of our model in handling the user cold-start and item cold-start problems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>The web provides instant access to a wide variety of online news. Hence, it becomes desirable to have a recommender system that points a user to the most relevant items, thus maximizing the user's engagement with the site and minimizing the time spent finding relevant content. With the advent of deep learning, recommender systems have been used with good success for products like movies and books, but surprisingly little attention has been paid to the problem of news recommendation.</p><p>A major approach to the task of recommendation is collaborative filtering <ref type="bibr" target="#b4">[5]</ref>[3] <ref type="bibr" target="#b3">[4]</ref>, which uses the user's past interactions with items to predict the most relevant content. Another common approach is content-based recommendation, which uses features of items and/or users to recommend new items to the users based on the similarity between features. Amongst the various approaches for collaborative filtering, matrix factorization <ref type="bibr" target="#b10">[11]</ref> is the most popular one; it projects users and items into a shared latent space, using a vector of latent features to represent a user or an item. Thereafter, a user's interaction with an item is modeled as the inner product of their latent vectors.</p><p>Collaborative filtering needs a considerable amount of interaction history before it can provide high-quality recommendations. This is the well-known cold start problem. For a newly established news website, the problem is even more severe, since users have little or no history of interaction with the site, and traditional approaches fail to produce high-quality recommendations in this case. In practice, however, it has been shown that content-based approaches can handle the cold start problem for new items well.</p><p>Each recommendation scenario has its own issues, which call for different approaches to building recommendation systems. For example, news recommendation may put more focus on the freshness of the content, while other systems, like movie recommendation, may place more emphasis on content relatedness. Adding to this, specifically in the case of news, user interests keep evolving over time. A user who reads news articles pertaining only to politics may suddenly develop an interest in sports for various reasons. Hence, it becomes crucial to account for these dynamic changes in interests while producing better recommendations. Many existing techniques assume user interests to be static, an assumption that is unrealistic for news and that suggests the need to handle temporal changes in the interests of the users.</p><p>Recently, in <ref type="bibr" target="#b0">[1]</ref>, the authors proposed a neural network architecture for collaborative filtering. They explore the use of deep neural networks for learning the interaction function from the data. Their proposed method specifically aims to model the relationship between users and items.</p><p>In this work, we propose a hybrid approach that uses user-item interactions and the content of the news to capture the similarity between users and items (news). We focus only on implicit feedback (clicks and impressions) provided by the users, i.e., whether they have read a given article or not and in what sequence those articles were read.</p><p>The sequence in which the articles are read by the user encapsulates information about the interests of the user. Capturing the interests of the user from the sequence of read articles requires a component capable of learning long-term dependencies.
LSTMs in general have been shown to be suitable for this particular task <ref type="bibr" target="#b28">[29]</ref> <ref type="bibr" target="#b29">[30]</ref>. To capture both the static and the dynamic interests which the user has developed over time, we use bidirectional LSTMs <ref type="bibr" target="#b30">[31]</ref>. We choose a fixed amount of reading history of each user as input to the LSTMs. Once these interests are captured, we then need to know the extent of each of the user's interests. We incorporate a neural attention mechanism <ref type="bibr" target="#b24">[25]</ref> for this purpose. Then, in order to capture the similarity between users and items, we need to project them to the same latent space. We adapt the Deep Structured Semantic Model (DSSM) <ref type="bibr" target="#b7">[8]</ref> for this. DSSM was originally used for the task of web document ranking. Later, it was adapted for the task of recommendation in <ref type="bibr" target="#b8">[9]</ref>. However, in <ref type="bibr" target="#b8">[9]</ref> the features for the users are their search queries and the features for items come from multiple domains (e.g., Apps, Movies/TV, etc.), which makes it difficult for a news website to adapt directly, as a lot of information outside the news domain is required. For learning the parameters of the model we use a ranking-based objective function. Finally, for recommending news articles to the users we use the inner product between the user and item latent vectors.</p><p>To summarize, the contributions of this work are as follows.</p><p>1. We present a deep neural architecture for news recommendation in which we utilize the user-item interactions as well as the content of the news (items) to model the latent features of users and items. 2. 
In order to address the changing interests of the users and the granularity/extent of these interests over time, we incorporate attentional bidirectional LSTMs, which in turn help to model the latent features of the user. 3. We perform experiments to demonstrate the effectiveness of our model for the problem of news recommendation. We then perform experiments to show the effectiveness of our model in solving the user and item cold-start problems.</p><p>The rest of the paper is organized as follows. First, we review major approaches in recommender systems, followed by a discussion of works which are directly related to ours. In Section 3, we provide the architecture of our model and also show its relation to matrix factorization. After that, in Section 4, we give a brief description of the dataset used and present a comprehensive empirical study to support our claims. Finally, we conclude and suggest future work.</p><p>There has been extensive study of recommendation systems, with a myriad of publications. However, the exploration of deep neural networks for recommender systems has received relatively little attention. In this section, we aim at reviewing a representative set of approaches that are related to our proposed approach.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Common Approaches for Recommendation</head><p>Recommendation systems in general can be divided into collaborative recommendation and content-based recommendation. In a narrower sense, in collaborative filtering based recommendation, an item is recommended to a user if similar users liked that item. Collaborative filtering can be further divided into user collaborative filtering, item collaborative filtering, or a hybrid of both. Examples of such techniques include Bayesian matrix factorization <ref type="bibr" target="#b1">[2]</ref>, matrix completion <ref type="bibr" target="#b2">[3]</ref>, Restricted Boltzmann Machines <ref type="bibr" target="#b3">[4]</ref>, nearest neighbour modelling <ref type="bibr" target="#b4">[5]</ref>, etc. In user collaborative methods such as <ref type="bibr" target="#b4">[5]</ref>, the algorithm first computes similarity between different users based on the items liked by them. The scores of user-item pairs are then computed by combining the scores given to this item by similar users. Item-based collaborative filtering <ref type="bibr" target="#b5">[6]</ref> computes similarity between items based on the users who like both items. It then recommends items to the user based on the items she has previously liked. Finally, in user-item based collaborative filtering, both the users and the items are projected into a common vector space based on the user-item matrix, and the item and user representations are combined to produce a recommendation. Matrix factorization based approaches like <ref type="bibr" target="#b2">[3]</ref> and <ref type="bibr" target="#b1">[2]</ref> are examples of such a technique. One of the major drawbacks of collaborative filtering is its inability to handle new users and new items, a problem often referred to as the cold-start issue.</p><p>Another common approach for recommendation is content-based recommendation. 
In this approach, features are extracted from the user's profile and/or the items and are used for recommending items to users. The underlying assumption is that users tend to like items similar to those they already like. In <ref type="bibr" target="#b6">[7]</ref>, each user is modeled by a distribution over news topics that is constructed from articles she liked, with a prior distribution of topic preference computed using all users who share the same location. A major advantage of content-based recommendation is that it can handle the problem of item cold-start, as it uses item features for recommendation. For user cold-start, a variety of other features such as age, location, and popularity could be used. In the following we discuss recommendation works which use neural networks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Neural Network based Recommendation</head><p>Early pioneering work using neural networks was done in <ref type="bibr" target="#b3">[4]</ref>, where a two-layer Restricted Boltzmann Machine (RBM) was used to model users' explicit ratings on items. The work was later extended to model the ordinal nature of ratings <ref type="bibr" target="#b21">[22]</ref>. Recently, autoencoders have become a popular choice for building recommendation systems <ref type="bibr" target="#b23">[24]</ref>[23] <ref type="bibr" target="#b18">[19]</ref>. The idea of user-based AutoRec <ref type="bibr" target="#b22">[23]</ref> is to learn hidden structures that can reconstruct a user's ratings given her historical ratings as inputs. In terms of user personalization, this approach shares a similar spirit with the item-item model <ref type="bibr" target="#b20">[21]</ref>[6], which represents a user by her rated item features. While previous work has lent support to addressing collaborative filtering, most of it has focused on observed ratings and modeled the observed data only. As a result, such models can easily fail to learn users' preferences from positive-only implicit data.</p><p>The works most relevant to ours are <ref type="bibr" target="#b17">[18]</ref> and <ref type="bibr" target="#b0">[1]</ref>. In <ref type="bibr" target="#b17">[18]</ref>, a collaborative denoising autoencoder (CDAE) for CF with implicit feedback is presented. In contrast to DAE-based CF <ref type="bibr" target="#b18">[19]</ref>, CDAE additionally plugs a user node into the input of the autoencoder for reconstructing the user's ratings. As shown by the authors, CDAE is equivalent to the SVD++ model <ref type="bibr" target="#b10">[11]</ref> when the identity function is used to activate the hidden layers of CDAE. 
Although CDAE is a collaborative filtering model, it is based solely on item-item interaction, whereas the work we present here is based on user-item interaction. On the other hand, in <ref type="bibr" target="#b0">[1]</ref>, the authors explore deep neural networks for recommender systems. They present a general framework named NCF, short for Neural Collaborative Filtering, that replaces the inner product with a neural architecture that can learn an arbitrary function from the given data. It uses a multi-layer perceptron to learn the user-item interaction function. NCF is able to express and generalize matrix factorization. They then combine the linearity of matrix factorization and the non-linearity of deep neural networks for modelling user-item latent structures. They call this model NeuMF, short for Neural Matrix Factorization.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">User-Item Projection</head><p>Since our work is based on user-item based collaborative filtering, we need to project users and items to a common latent space in order to capture their similarity and recommend items to users accordingly. One of the most effective approaches for projecting queries and documents into a common low-dimensional space has been shown in <ref type="bibr" target="#b7">[8]</ref>. The model is named the Deep Structured Semantic Model (DSSM) <ref type="bibr" target="#b7">[8]</ref>, which is effective in calculating the relevance of a document given a query by computing the distance between them. Originally this model was meant for ranking, but since the problem of ranking is closely associated with that of recommendation, DSSM was later extended to recommendation scenarios in <ref type="bibr" target="#b8">[9]</ref>. In <ref type="bibr" target="#b8">[9]</ref>, the authors used DSSM for recommendation, where the first neural network contains the user's query history (and is thus referred to as the user view) and the second neural network contains implicit feedback on items. The resulting model is named multi-view DNN (MV-DNN), since it can incorporate item information from more than one domain and then jointly optimize all of them using the same loss function as in DSSM. However, in <ref type="bibr" target="#b8">[9]</ref>, the features for the users were their search queries and the features for items came from multiple sources (e.g., Apps, Movies/TV, etc.). This makes it less adaptable for a news website, as it requires a lot of information outside the news domain.</p><p>For many approaches in recommendation systems, the objective is to minimize the root mean squared error on the user-item matrix reconstruction. However, in <ref type="bibr" target="#b9">[10]</ref> it has been shown that a ranking-based objective function is more effective in generating relevant recommendations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">MODEL ARCHITECTURE</head><p>We first briefly review DSSM and then provide the description of our model. We then show the relationship between matrix factorization and our approach. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Deep Structured Semantic Model</head><p>The Deep Structured Semantic Model (DSSM) <ref type="bibr" target="#b7">[8]</ref> was proposed for the purpose of ranking. Essentially, DSSM can be viewed as a multi-view learning model that is often composed of two or more neural networks, one for each individual view. In the original two-view DSSM model, the network on the left side was meant for query representation, whereas the network on the right side was meant for representing the documents. The input to these networks can be of any arbitrary type, such as the letter-trigrams in the original paper or the bag of unigrams used in <ref type="bibr" target="#b8">[9]</ref>. Each input vector then goes through a non-linear transformation in a feedforward neural network to output an embedding vector, which is smaller in size than the input vector. The learning objective of the DSSM is to maximize the cosine similarity between the two output vectors. For training, a set of positive examples and randomly sampled negative examples is generated in order to minimize the cosine loss on positive examples. In <ref type="bibr" target="#b8">[9]</ref>, the authors used DSSM for recommendation, where the first neural network contained the query history of users and the second contained implicit feedback on items (e.g., News Clicks, App Downloads). The resulting model is named multi-view DNN (MV-DNN), since it can incorporate item information from more than one domain and jointly optimize them using the same loss function as in DSSM.</p></div>
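The two-view projection described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the trained model from the paper: the layer sizes, random weights, and number of negative samples are assumptions made only for the example. Each view is a small feedforward tower, and relevance is the cosine similarity of the two output embeddings in the shared space.

```python
import numpy as np

rng = np.random.default_rng(0)

def tower(x, weights):
    """One view of the DSSM: stacked non-linear layers mapping an
    input feature vector to a low-dimensional embedding."""
    h = x
    for W in weights:
        h = np.tanh(h @ W)
    return h

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

# Hypothetical sizes: 300-dim inputs (as in the paper) -> 128 -> 32.
user_weights = [rng.standard_normal((300, 128)) * 0.1,
                rng.standard_normal((128, 32)) * 0.1]
item_weights = [rng.standard_normal((300, 128)) * 0.1,
                rng.standard_normal((128, 32)) * 0.1]

user_vec = tower(rng.standard_normal(300), user_weights)
pos_item = tower(rng.standard_normal(300), item_weights)
neg_items = [tower(rng.standard_normal(300), item_weights) for _ in range(4)]

# Relevance of each candidate item to the user in the shared space;
# training would push the positive's score above the negatives'.
scores = [cosine(user_vec, pos_item)] + [cosine(user_vec, v) for v in neg_items]
```

During training, the cosine scores would feed the softmax objective so that the clicked item's similarity is maximized relative to the sampled negatives.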
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Recurrent Attention DSSM (RA-DSSM)</head><p>In the MV-DNN, the input to the user view was merely the query history of users. In this work, we modify the way in which inputs are sent to the user view in order to adapt it specifically for the case of news recommendation. One of the major issues in news recommendation is that of changing user interests. Interests of users can be classified into short-term as well as long-term interests. Hence, it becomes crucial for a news recommender to identify these interests and recommend accordingly.</p><p>LSTMs have been shown to be capable of learning long-term dependencies <ref type="bibr">[29][30]</ref>. Bidirectional LSTMs, on the other hand, can capture past and future information effectively. Users' interests keep changing over time, and at the time of recommendation we need to know both the current interest and the long-term interests of the user. Using bidirectional LSTMs as an encoder helps us to identify interests which the user has taken up recently (short term) as well as the long-term interests of the user. For each user, we have the sequence in which news articles were read by her. We then choose the first R read articles for each user and use them as inputs to our bidirectional LSTMs. The forward state updates of the LSTM satisfy the following equations</p><formula xml:id="formula_0">$\overrightarrow{f}_t, \overrightarrow{i}_t, \overrightarrow{o}_t = \sigma\big(\overrightarrow{W}[\overrightarrow{h}_{t-1}, \overrightarrow{r}_t] + \overrightarrow{b}\big)$ <label>(1)</label> $\overrightarrow{l}_t = \tanh\big(\overrightarrow{V}[\overrightarrow{h}_{t-1}, \overrightarrow{r}_t] + \overrightarrow{d}\big)$ <label>(2)</label></formula><formula xml:id="formula_1">$\overrightarrow{c}_t = \overrightarrow{f}_t \circ \overrightarrow{c}_{t-1} + \overrightarrow{i}_t \circ \overrightarrow{l}_t$ <label>(3)</label> $\overrightarrow{h}_t = \overrightarrow{o}_t \circ \tanh(\overrightarrow{c}_t)$ <label>(4)</label></formula><p>The backward states <formula xml:id="formula_2">$(\overleftarrow{h}_1, \overleftarrow{h}_2, \ldots, \overleftarrow{h}_R)$</formula> are computed in a similar manner. The amount of reading history used as input to the bidirectional LSTM is denoted by R. We then concatenate the forward and backward states to obtain the annotations $(h_1, h_2, \ldots, h_R)$, where</p><formula xml:id="formula_3">$h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$<label>(5)</label></formula><p>We then need to identify the extent/granularity of each interest. Recently, in <ref type="bibr" target="#b24">[25]</ref>, the effectiveness of attention mechanisms has been shown for the task of neural machine translation. The goal of the attention mechanism in such tasks is to derive a context vector that captures relevant source-side information to help predict the current target word. In our case, we want to use the sequence of annotations generated by the encoder to come up with a context vector that captures the extent of the user's interests. Though in a typical RNN encoder-decoder framework <ref type="bibr" target="#b24">[25]</ref> a context vector is generated at each time step to predict the target word, in our case we only need to calculate the context vector for a single time step.</p><formula xml:id="formula_4">$c_{attention} = \sum_{j=1}^{R} \alpha_j h_j$<label>(6)</label></formula><p>where $h_1, \ldots, h_R$ represent the sequence of annotations to which the encoder maps the sequence of read news articles, and each $\alpha_j$ represents the weight corresponding to annotation $h_j$. The user view (left view) of the model can be seen in Figure <ref type="figure" target="#fig_0">1</ref>. The input to this view is a selected amount of reading history of each user. Each $r_i$ in the figure is a news embedding of dimension 300. The right view of the DSSM remains the same, as can be seen in Figure <ref type="figure" target="#fig_0">1</ref>. As inputs to the right view of the DSSM, we select one positive sample, i.e., an article that has been read by the user (apart from those that were used as input to the user view), and n randomly selected negative samples (articles that have not been read by the user). Each $item^+$, $item^-$ used as input to the item view is also an embedding of size 300.</p></div>
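Equations (1)-(6) can be exercised end-to-end on toy data. The sketch below hand-rolls a bidirectional LSTM plus attention in NumPy; the dimensions are toy values, the weights are random rather than trained, and the dot-product scoring used to produce the attention weights α_j is an assumption standing in for whatever alignment model is actually learnt.

```python
import numpy as np

rng = np.random.default_rng(1)
D, H, R = 8, 4, 5   # toy sizes: embedding dim, hidden dim, reading-history length

def lstm_params():
    # One weight matrix and bias per gate, acting on [h_{t-1}; r_t], per Eqs. (1)-(4).
    return {g: (rng.standard_normal((H, H + D)) * 0.1, np.zeros(H))
            for g in ("f", "i", "o", "l")}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_run(seq, p):
    h, c, states = np.zeros(H), np.zeros(H), []
    for r_t in seq:
        z = np.concatenate([h, r_t])          # [h_{t-1}; r_t]
        f = sigmoid(p["f"][0] @ z + p["f"][1])  # forget gate, Eq. (1)
        i = sigmoid(p["i"][0] @ z + p["i"][1])  # input gate
        o = sigmoid(p["o"][0] @ z + p["o"][1])  # output gate
        l = np.tanh(p["l"][0] @ z + p["l"][1])  # candidate, Eq. (2)
        c = f * c + i * l                       # Eq. (3)
        h = o * np.tanh(c)                      # Eq. (4)
        states.append(h)
    return states

articles = [rng.standard_normal(D) for _ in range(R)]        # r_1..r_R
fwd = lstm_run(articles, lstm_params())                      # forward pass
bwd = lstm_run(articles[::-1], lstm_params())[::-1]          # backward pass
ann = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]     # h_j, Eq. (5)

# Attention (Eq. 6): softmax-normalised scores over the R annotations.
w = rng.standard_normal(2 * H)          # hypothetical scoring vector
scores = np.array([w @ h for h in ann])
alpha = np.exp(scores) / np.exp(scores).sum()
c_attention = sum(a * h for a, h in zip(alpha, ann))
```

The resulting `c_attention` is the single context vector that the user view then passes through fully connected layers into the shared latent space.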
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Learning</head><p>Typically, in matrix factorization, to learn the model parameters, existing pointwise methods <ref type="bibr" target="#b12">[13]</ref>[16] perform regression with a squared loss. This is based on the assumption that observations are generated from a Gaussian distribution. However, in <ref type="bibr" target="#b0">[1]</ref> it has been shown that such a method does not work well with implicit data. Also, in <ref type="bibr" target="#b9">[10]</ref> it has been shown that a ranking-based objective function is more suitable for the task of recommendation. Keeping these two aspects in mind, we adapt the loss function used in DSSM <ref type="bibr" target="#b7">[8]</ref>. We first compute the posterior probability of a clicked news item given a user from the relevance score using a softmax function</p><formula xml:id="formula_5">$P(item^+ \mid u) = \frac{\exp(R(u, item^+))}{\sum_{\forall item} \exp(R(u, item))}$<label>(7)</label></formula><p>where u denotes the user, $item^+$ denotes the item that was clicked by the user, and R represents the inner product function. We then maximize the likelihood of the clicked news items given the user with the following loss function</p><formula xml:id="formula_6">$L(\Lambda) = -\log \prod_{(u, item^+)} P(item^+ \mid u)$<label>(8)</label></formula><p>where $\Lambda$ represents the parameters of our model.</p></div>
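Under the assumption that each training pair consists of one clicked item and n sampled negatives, with R(u, item) already computed as inner products, Eqs. (7)-(8) amount to a cross-entropy over each positive-plus-negatives candidate set. A minimal NumPy sketch (the batch values below are synthetic, not from the paper):

```python
import numpy as np

def ranking_loss(pos_scores, neg_scores):
    """Negative log-likelihood of the clicked items under a softmax over
    the positive and its sampled negatives (Eqs. 7-8).
    pos_scores: (B,)   inner products R(u, item+), one per training pair
    neg_scores: (B, n) inner products for the n sampled negatives
    """
    all_scores = np.concatenate([pos_scores[:, None], neg_scores], axis=1)
    all_scores -= all_scores.max(axis=1, keepdims=True)  # numerical stability
    # log P(item+ | u) = score of column 0 minus log-sum-exp over candidates
    log_p_pos = all_scores[:, 0] - np.log(np.exp(all_scores).sum(axis=1))
    return float(-log_p_pos.sum())

# Synthetic batch: 2 users, 3 sampled negatives each.
pos = np.array([2.0, 1.5])
neg = np.array([[0.1, -0.3, 0.0],
                [0.2, 0.4, -0.1]])
loss = ranking_loss(pos, neg)
```

As expected for a ranking objective, the loss shrinks as the positive item's score rises above the negatives'.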
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5">Relation with Matrix Factorization</head><p>We now show how our model can be interpreted as a special case of matrix factorization, one of the most popular models for recommendation, which has been investigated extensively in the literature. Matrix factorization models map both users and items to a joint latent factor space of dimensionality f, such that user-item interactions are modeled as inner products in that space. Accordingly, each item i is associated with a vector $q_i \in \mathbb{R}^f$ and each user u is associated with a vector $p_u \in \mathbb{R}^f$. For a given item i, the elements of $q_i$ measure the extent to which the item possesses those factors, positive or negative. For a given user u, the elements of $p_u$ measure the extent of interest the user has in items that are high on the corresponding factors, again, positive or negative. The dot product of the two vectors captures the interaction between the user u and item i. This approximates the user u's rating for the item i, denoted by $r_{ui}$, leading to the estimate</p><formula xml:id="formula_7">$\hat{r}_{ui} = q_i^T p_u$<label>(9)</label></formula><p>The major challenge here is to compute $q_i, p_u \in \mathbb{R}^f$. We solve this problem by using deep neural networks. The deep neural architecture allows us to learn non-linear mappings of the users and the items to the same latent space. For computing the mapping for the users, we first use a recurrent network followed by an attention layer. Fully connected layers are then used for bringing the users and the items into the same latent space. In the final layer of the DSSM, we compute the similarity between the user and the item using the dot product of the non-linear mappings of the input vectors. The user can then be represented as Φ(u) and the item as Φ(i) (here Φ represents the learnt non-linear mapping). 
Finally, we estimate the rating as</p><formula xml:id="formula_8">$\hat{r}_{ui} = \Phi(i)^T \Phi(u)$<label>(10)</label></formula><p>Although in <ref type="bibr" target="#b0">[1]</ref> the authors resorted to learning an arbitrary function to compute this similarity, we learn a non-linear transformation and then use the dot product for computing the similarity.</p></div>
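At recommendation time, Eq. (10) reduces to a matrix-vector product over the candidate items followed by a top-k selection. A hypothetical sketch, with random vectors standing in for the learnt mappings Φ(u) and Φ(i) (the dimension 32 and candidate count 100 are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
phi_u = rng.standard_normal(32)          # stand-in for the learnt user mapping Φ(u)
items = rng.standard_normal((100, 32))   # stand-ins for candidate item mappings Φ(i)

# Eq. (10): estimated ratings are inner products in the shared latent space;
# recommend the k highest-scoring items.
scores = items @ phi_u
top10 = np.argsort(-scores)[:10]
```

This is why the architecture stays cheap to serve: once the non-linear mappings are computed, scoring is just a dot product, exactly as in classical matrix factorization.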
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">EXPERIMENTS</head><p>We conduct experiments to answer the following questions:</p><p>1. Does our proposed model outperform the state-of-the-art implicit collaborative methods? Also, how do the different variations of our model perform for the given task? 2. How does our proposed model work for solving the item cold start problem? 3. How does our proposed model work for solving the user cold start problem?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Dataset</head><p>For this work we use the dataset published by CLEF NewsREEL 2017. CLEF NewsREEL provides an interaction platform to compare the performance of different news recommender systems in both online and offline settings <ref type="bibr" target="#b31">[32]</ref>. As part of its offline evaluation, CLEF shared a dataset that captures interactions between users and news stories. It includes interactions from eight different publishing sites during February 2016. The recorded stream of events includes 2 million notifications, 58 thousand item updates, and 168 million recommendation requests. The dataset also provides other information such as the title and text of each news article and its time of publication. Each user is identified by a unique id. For our task, we extract the sequence in which each user read articles, along with the content of each of those articles. Since we rely only on implicit feedback, we only need to know whether or not an article was read by a user.</p></div>
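To make this preprocessing step concrete, here is a minimal sketch of recovering each user's reading sequence from the interaction stream. The `(timestamp, user_id, item_id)` event format is a hypothetical simplification; the actual NewsREEL logs carry much richer records.

```python
from collections import defaultdict

def build_reading_sequences(events):
    """Recover each user's articles in the order they were read from a
    stream of (timestamp, user_id, item_id) click events. Since we rely
    only on implicit feedback, a read is recorded once per article."""
    per_user = defaultdict(list)
    for ts, user, item in sorted(events):   # sort by time: keep chronology intact
        if item not in per_user[user]:      # ignore repeat reads of the same article
            per_user[user].append(item)
    return dict(per_user)

events = [(3, "u1", "a2"), (1, "u1", "a1"), (2, "u2", "a9"), (5, "u1", "a1")]
seqs = build_reading_sequences(events)
# seqs == {"u1": ["a1", "a2"], "u2": ["a9"]}
```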
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Experimental Settings</head><p>As mentioned earlier, we use the dataset provided by CLEF NewsREEL 2017. We extract the sequence in which the articles were read by the users. For each article we concatenate the title and the text and use gensim <ref type="bibr" target="#b11">[12]</ref> to learn doc2vec <ref type="bibr" target="#b26">[27]</ref> embeddings for it. The size of the embeddings is set to 300. In the given dataset, almost 77% of the users have read fewer than 3 articles. We choose users who have read between 10 and 15 (inclusive) articles for training and testing our model for item recommendation. The frequency of users who have read more than 15 articles varies extensively, and hence we restrict ourselves to the upper bound of 15. We then choose users who have read 2-4 articles for testing our model on the user cold-start problem. For the item cold-start problem, we again test on users who have read between 10 and 15 articles. We ensure that the chronology of the data is kept intact.</p><p>Evaluation Protocol: To evaluate the performance of the recommended items we use the leave-one-out evaluation strategy, which has been widely adopted in the literature <ref type="bibr" target="#b25">[26]</ref>[13] <ref type="bibr" target="#b13">[14]</ref>. For each user we held out her latest interaction as the test set and utilized the remaining data for training. Since it is time-consuming to rank all items for every user during evaluation, we followed the common strategy <ref type="bibr" target="#b8">[9]</ref>[11] of randomly sampling 100 items with which the user has not interacted, and ranking the test item among these 100 items. The performance of a ranked list is judged by Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG) <ref type="bibr" target="#b14">[15]</ref>. Unless otherwise mentioned, we truncate the ranked list at 10 for both metrics. 
As such, HR@k intuitively measures whether the test item is present in the top-k list, and NDCG accounts for the position of the hit by assigning higher scores to hits at top ranks. We calculated both metrics for each test user and report the average score.</p><p>Baselines: We compare our proposed approach with the following methods:</p><p>-ItemPop. News articles are ranked by their popularity, judged by their number of interactions. This is a non-personalized method used to benchmark recommendation performance <ref type="bibr" target="#b13">[14]</ref>.</p><p>-BPR <ref type="bibr" target="#b13">[14]</ref>. This method optimizes matrix factorization with a pairwise ranking loss, tailored to learning from implicit feedback. We report the best performance obtained by fixing and varying the learning rate. -eALS <ref type="bibr" target="#b12">[13]</ref>. This is a state-of-the-art matrix factorization method for item recommendation. It optimizes the squared loss (between actual and predicted item ratings), treating all unobserved interactions as negative instances and weighting them non-uniformly by item popularity. -NeuMF <ref type="bibr" target="#b0">[1]</ref>. This is a state-of-the-art neural matrix factorization model. It treats the generation of recommendations from implicit feedback as a binary classification problem, and consequently uses the binary cross-entropy loss to optimize its model parameters.</p><p>For all the above methods we choose the number of predictive factors that maximizes performance on our dataset. Since our proposed method models the user-item relationship, we mainly compare it with other user-item models. 
We leave out the comparison with other models such as SLIM <ref type="bibr" target="#b20">[21]</ref> and CDAE <ref type="bibr" target="#b17">[18]</ref> because these are item-item models, and hence any performance difference may be caused by the user models used for personalization.</p><p>Parameter Settings: For all the conducted experiments we use an Intel i7-6700 CPU @ 3.40GHz with 32GB of RAM, and a Tesla K40c GPU. We ran all our experiments on the GPU. We implemented our proposed method using Keras <ref type="bibr" target="#b19">[20]</ref>. As mentioned earlier, for each user who had read between 10 and 15 (inclusive) articles we held out the last read article for our test set. We then construct our training set as follows:</p><p>1. We first define the reading history that we want to use, denoted by Rh. 2. For each user, we use Rh read articles as inputs to the user view. Leaving the last read article out, the remaining articles are used as positive samples for the item view (right view) of the model. 3. For each positive instance of a user, we randomly sample n negative instances (news items that the user has not interacted with), which are used as inputs for the item view of the model. We experimentally set the number of negative instances n to 4.</p><p>We then randomly divide the training set into training and validation sets in a 4:1 ratio, ensuring that the two sets do not overlap. We tuned the hyper-parameters of our model using the validation set. The model and all its variants are learnt by optimizing the log loss of Equation <ref type="formula" target="#formula_6">8</ref>. We initialise the fully connected network weights with the uniform distribution in the range between −√(6/(fan_in + fan_out)) and √(6/(fan_in + fan_out)) <ref type="bibr" target="#b27">[28]</ref>. 
We used a batch size of 256 and used Adadelta <ref type="bibr" target="#b16">[17]</ref> as the gradient-based optimizer for learning the parameters of the model. It is also worth noting that, just as in NeuMF <ref type="bibr" target="#b0">[1]</ref>, where the size of the last layer of the deep network determines the number of predictive factors, we can treat the size of the last layer of our network (just before computing the similarity) as the number of predictive factors used.</p><p>Figure <ref type="figure">3</ref> shows the performance of our model as we vary the amount of reading history used as input to the user side of the RA-DSSM. Overall, we see that as the amount of reading history increases, so does the performance. This suggests that a user has multiple interests, which are gradually captured as the number of articles used for the user view of the RA-DSSM increases.</p><p>A user's interests develop and vary with time, and hence we also experimented with concatenating the time at which each article was read to its article embedding, using these as inputs to the model. We observed no significant change in performance. One of the prime reasons for this could be that the model, given its sequential nature, already encodes the aspect of time. Figure <ref type="figure">4</ref> and Figure <ref type="figure">5</ref> show the performance of the top-K recommended lists where the ranking position K ranges from 1 to 10. We leave out the variants of our own model here and use only the best-performing one, i.e. the RA-DSSM. As can be clearly seen from the figures, our model shows consistent improvements over the other methods across all positions. 
The reason for this can be attributed to the fact that, apart from accounting for the user's general preferences, we also account for the user's changing interests and the extent of those interests, which the baselines do not incorporate directly. We observe major improvements in the NDCG scores of our model, with an approximate 22% improvement over NeuMF. The reason for this is the loss function of Equation 8 used by our model: being optimized for ranking, it helps the model recommend a better-ranked list of items. Among the baseline methods, we see that eALS outperforms BPR by a margin of 2%. We also note that ItemPop performs worst, which indicates the need for modelling users' personalized preferences.</p><p>We then evaluated our model for the cold-start cases, as can be seen in Figure <ref type="figure">6</ref> and Figure <ref type="figure">7</ref>. For this task we segregated users whose last read article was new, i.e. an article that had never been read before they read it. There were 74 such users. For these 74 users, at HR@10 we were able to recommend that article around 35% of the time. This suggests that our model is well suited to handling the item cold-start problem. For user cold-start, we test our learned model on users who had read between 2 and 4 (inclusive) articles. The HR@10 score was around 50%. We see a gradual increase in the hit rate as we increase the value of k. These results indicate that our model can handle the user cold-start problem as well.</p><p>We then note the effect of varying the kind of recurrent network used. We tested our model with LSTMs, GRUs (Gated Recurrent Units) <ref type="bibr" target="#b32">[33]</ref> and vanilla RNNs. From Figure <ref type="figure">8</ref> and Figure <ref type="figure">9</ref>, the trend in performance is: LSTM &gt; GRU &gt; RNN. 
One of the reasons for this could be that an LSTM or a GRU is better able to encode the interests of the user. In Table <ref type="table">1</ref>, we report the performance of adding bidirectional units and an attention layer to the LSTM. We note that Attention BiLSTM &gt; BiLSTM &gt; LSTM. The attention layer does indeed enable us to capture the extent of the user's interests, as it performs slightly better than the bidirectional LSTM alone.</p></div>
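The training-set construction described in steps 1-3 of Section 4.2 (in particular the sampling of n = 4 negative items per positive instance) can be sketched as follows. This is an illustrative simplification with hypothetical helper names, not the authors' code, and it treats every article in the history as a positive sample.

```python
import random

def make_training_pairs(history, all_items, n_neg=4, seed=0):
    """Pair a user's reading history (user view) with each read article as a
    positive item-view sample and n_neg sampled unread articles as negatives."""
    rng = random.Random(seed)
    read = set(history)
    unread = [i for i in all_items if i not in read]
    pairs = []
    for pos in history:                        # each positive instance...
        pairs.append((history, pos, 1))
        for neg in rng.sample(unread, n_neg):  # ...gets n_neg sampled negatives
            pairs.append((history, neg, 0))
    return pairs

pairs = make_training_pairs(["a1", "a2"], [f"a{i}" for i in range(1, 20)])
# 2 positives x (1 positive + 4 negatives) = 10 (history, item, label) triples
```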
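The evaluation protocol of Section 4.2 (leave-one-out with 100 sampled negatives, judged by HR@10 and NDCG@10) can be sketched as below; this is an illustrative re-implementation under our own assumptions, not the authors' evaluation code. Ranks are 0-based.

```python
import math

def hr_at_k(rank, k=10):
    """HR@k: 1 if the held-out test item is in the top-k list."""
    return 1.0 if rank < k else 0.0

def ndcg_at_k(rank, k=10):
    """NDCG@k with a single relevant item: rewards hits nearer the top."""
    return 1.0 / math.log2(rank + 2) if rank < k else 0.0

def evaluate_user(test_score, negative_scores, k=10):
    """Rank the held-out item among the sampled non-interacted items."""
    rank = sum(s > test_score for s in negative_scores)
    return hr_at_k(rank, k), ndcg_at_k(rank, k)

# A test item outscored by 3 of the 100 sampled negatives lands at rank 3.
hr, ndcg = evaluate_user(0.8, [0.9, 0.85, 0.81] + [0.1] * 97)
# hr == 1.0, ndcg == 1 / log2(5)
```

Averaging these per-user scores over all test users gives the reported HR@10 and NDCG@10.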
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">CONCLUSION AND FUTURE WORK</head><p>In this work we used deep neural networks for news recommendation. We combined user-item collaborative filtering with the content of the read news articles to build our model. We tackled the problem of users' changing and diverse reading interests using a recurrent network combined with neural attention. We also showed the effectiveness of our model in solving both the user cold-start and the item cold-start problems, as well as its effectiveness when using one-hot item encodings. This shows the adaptability of our model to other recommendation scenarios that rely purely on implicit feedback.</p><p>In the future, we would like to study the effect of learning an arbitrary function, instead of using the inner product, to calculate the similarity between the user and the item. We would also like to evaluate our model over different recommendation scenarios. Apart from this, we would also like to explore reinforcement learning based approaches for news recommendation, where the implicit feedback provided by users could be used to model their interests and recommend articles to them.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 :</head><label>1</label><figDesc>Fig. 1: Recurrent Attention DSSM Model Architecture</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 2 :Fig. 3 :Fig. 4 :Fig. 5 :Fig. 6 :Fig. 7 :Fig. 9 :</head><label>2345679</label><figDesc>Fig. 2: HR@10 of RA-DSSM w.r.t. the User's Reading History</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Neural Collaborative Filtering</title>
		<author>
			<persName><forename type="first">Xiangnan</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lizi</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hanwang</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Liqiang</forename><surname>Nie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xia</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tat-Seng</forename><surname>Chua</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th International Conference on World Wide Web, WWW &apos;17</title>
				<meeting>the 26th International Conference on World Wide Web, WWW &apos;17</meeting>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Bayesian probabilistic matrix factorization using Markov chain Monte Carlo</title>
		<author>
			<persName><forename type="first">Ruslan</forename><surname>Salakhutdinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andriy</forename><surname>Mnih</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th international conference on Machine learning</title>
				<meeting>the 25th international conference on Machine learning</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="880" to="887" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Fast maximum margin matrix factorization for collaborative prediction</title>
		<author>
			<persName><forename type="first">Jasson</forename><forename type="middle">D M</forename><surname>Rennie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nathan</forename><surname>Srebro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd international conference on Machine learning</title>
				<meeting>the 22nd international conference on Machine learning</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="713" to="719" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Restricted Boltzmann machines for collaborative filtering</title>
		<author>
			<persName><forename type="first">Ruslan</forename><surname>Salakhutdinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andriy</forename><surname>Mnih</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Geoffrey</forename><surname>Hinton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th international conference on Machine learning</title>
				<meeting>the 24th international conference on Machine learning</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="791" to="798" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Improved neighborhood-based collaborative filtering</title>
		<author>
			<persName><forename type="first">Robert</forename><forename type="middle">M</forename><surname>Bell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yehuda</forename><surname>Koren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">KDD cup and workshop at the 13th ACM SIGKDD international conference on knowledge discovery and data mining</title>
				<imprint>
			<publisher>Citeseer</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="7" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Item-based collaborative filtering recommendation algorithms</title>
		<author>
			<persName><forename type="first">Badrul</forename><surname>Sarwar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">George</forename><surname>Karypis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joseph</forename><surname>Konstan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>Riedl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th international conference on World Wide Web</title>
				<meeting>the 10th international conference on World Wide Web</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="285" to="295" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Personalized news recommendation based on click behavior</title>
		<author>
			<persName><forename type="first">Jiahui</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Dolan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elin</forename><forename type="middle">Rønby</forename><surname>Pedersen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th international conference on Intelligent user interfaces</title>
				<meeting>the 15th international conference on Intelligent user interfaces</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="31" to="40" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Learning deep structured semantic models for web search using clickthrough data</title>
		<author>
			<persName><forename type="first">Po-Sen</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaodong</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianfeng</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Li</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alex</forename><surname>Acero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Larry</forename><surname>Heck</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM international conference on Conference on information &amp; knowledge management</title>
				<meeting>the 22nd ACM international conference on Conference on information &amp; knowledge management</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="2333" to="2338" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A multi-view deep learning approach for cross domain user modeling in recommendation systems</title>
		<author>
			<persName><forename type="first">Ali</forename><forename type="middle">Mamdouh</forename><surname>Elkahky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yang</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaodong</forename><surname>He</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th International Conference on World Wide Web</title>
				<meeting>the 24th International Conference on World Wide Web</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="278" to="288" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Local collaborative ranking</title>
		<author>
			<persName><forename type="first">Joonseok</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Samy</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Seungyeon</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Guy</forename><surname>Lebanon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoram</forename><surname>Singer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 23rd international conference on World wide web</title>
				<meeting>the 23rd international conference on World wide web</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="85" to="96" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Factorization meets the neighborhood: a multifaceted collaborative filtering model</title>
		<author>
			<persName><forename type="first">Yehuda</forename><surname>Koren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining</title>
				<meeting>the 14th ACM SIGKDD international conference on Knowledge discovery and data mining</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="426" to="434" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Software Framework for Topic Modelling with Large Corpora</title>
		<author>
			<persName><forename type="first">Radim</forename><surname>Řehůřek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Petr</forename><surname>Sojka</surname></persName>
		</author>
		<ptr target="http://is.muni.cz/publication/884893/en" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks</title>
				<meeting>the LREC 2010 Workshop on New Challenges for NLP Frameworks<address><addrLine>Valletta, Malta</addrLine></address></meeting>
		<imprint>
			<publisher>ELRA</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="45" to="50" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Fast matrix factorization for online recommendation with implicit feedback</title>
		<author>
			<persName><forename type="first">Xiangnan</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hanwang</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Min-Yen</forename><surname>Kan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tat-Seng</forename><surname>Chua</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval</title>
				<meeting>the 39th International ACM SIGIR conference on Research and Development in Information Retrieval</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="549" to="558" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">BPR: Bayesian personalized ranking from implicit feedback</title>
		<author>
			<persName><forename type="first">Steffen</forename><surname>Rendle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christoph</forename><surname>Freudenthaler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zeno</forename><surname>Gantner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lars</forename><surname>Schmidt-Thieme</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence</title>
				<meeting>the twenty-fifth conference on uncertainty in artificial intelligence</meeting>
		<imprint>
			<publisher>AUAI Press</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="452" to="461" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Trirank: Reviewaware explainable recommendation by modeling aspects</title>
		<author>
			<persName><forename type="first">Xiangnan</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tao</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Min-Yen</forename><surname>Kan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiao</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th ACM International on Conference on Information and Knowledge Management</title>
				<meeting>the 24th ACM International on Conference on Information and Knowledge Management</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1661" to="1670" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Probabilistic Matrix Factorization</title>
		<author>
			<persName><forename type="first">Ruslan</forename><surname>Salakhutdinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andriy</forename><surname>Mnih</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">NIPS</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="2" to="3" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">ADADELTA: an adaptive learning rate method</title>
		<author>
			<persName><forename type="first">Matthew</forename><forename type="middle">D</forename><surname>Zeiler</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1212.5701</idno>
		<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Collaborative denoising auto-encoders for top-n recommender systems</title>
		<author>
			<persName><forename type="first">Yao</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christopher</forename><surname>Dubois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alice</forename><forename type="middle">X</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martin</forename><surname>Ester</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Ninth ACM International Conference on Web Search and Data Mining</title>
				<meeting>the Ninth ACM International Conference on Web Search and Data Mining</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="153" to="162" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Collaborative Filtering with Stacked Denoising AutoEncoders and Sparse Inputs</title>
		<author>
			<persName><forename type="first">Florian</forename><surname>Strub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeremie</forename><surname>Mary</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">NIPS Workshop on Machine Learning for eCommerce</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">François</forename><surname>Chollet</surname></persName>
		</author>
		<author>
			<persName><surname>Others</surname></persName>
		</author>
		<ptr target="https://github.com/fchollet/keras" />
		<title level="m">Keras</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Slim: Sparse linear methods for top-n recommender systems</title>
		<author>
			<persName><forename type="first">Xia</forename><surname>Ning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">George</forename><surname>Karypis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Data Mining (ICDM), 2011 IEEE 11th International Conference on. IEEE</title>
				<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="497" to="506" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Ordinal Boltzmann machines for collaborative filtering</title>
		<author>
			<persName><forename type="first">Dinh</forename><forename type="middle">Q</forename><surname>Phung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Svetha</forename><surname>Venkatesh</surname></persName>
		</author>
		<author>
			<persName><surname>Others</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-fifth Conference on Uncertainty in Artificial Intelligence</title>
				<meeting>the Twenty-fifth Conference on Uncertainty in Artificial Intelligence</meeting>
		<imprint>
			<publisher>AUAI Press</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="548" to="556" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Autorec: Autoencoders meet collaborative filtering</title>
		<author>
			<persName><forename type="first">Suvash</forename><surname>Sedhain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aditya</forename><forename type="middle">Krishna</forename><surname>Menon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Scott</forename><surname>Sanner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lexing</forename><surname>Xie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th International Conference on World Wide Web</title>
				<meeting>the 24th International Conference on World Wide Web</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="111" to="112" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Marginalized Denoising Autoencoders for Domain Adaptation</title>
		<author>
			<persName><forename type="first">Minmin</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhixiang</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fei</forename><surname>Sha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kilian</forename><forename type="middle">Q</forename><surname>Weinberger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 29th International Conference on Machine Learning (ICML-12)</title>
				<meeting>the 29th International Conference on Machine Learning (ICML-12)</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="767" to="774" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">Neural machine translation by jointly learning to align and translate</title>
		<author>
			<persName><forename type="first">Dzmitry</forename><surname>Bahdanau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kyunghyun</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoshua</forename><surname>Bengio</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1409.0473</idno>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">A Generic Coordinate Descent Framework for Learning from Implicit Feedback</title>
		<author>
			<persName><forename type="first">Immanuel</forename><surname>Bayer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiangnan</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bhargav</forename><surname>Kanagal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steffen</forename><surname>Rendle</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th International Conference on World Wide Web (WWW &apos;17)</title>
				<meeting>the 26th International Conference on World Wide Web (WWW &apos;17)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Distributed representations of sentences and documents</title>
		<author>
			<persName><forename type="first">Quoc</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tomas</forename><surname>Mikolov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 31st International Conference on Machine Learning (ICML-14)</title>
				<meeting>the 31st International Conference on Machine Learning (ICML-14)</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1188" to="1196" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Understanding the difficulty of training deep feedforward neural networks</title>
		<author>
			<persName><forename type="first">Xavier</forename><surname>Glorot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoshua</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AISTATS</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="249" to="256" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Long short-term memory</title>
		<author>
			<persName><forename type="first">Sepp</forename><surname>Hochreiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jürgen</forename><surname>Schmidhuber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural computation</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="1735" to="1780" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Sequence to sequence learning with neural networks</title>
		<author>
			<persName><forename type="first">Ilya</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oriol</forename><surname>Vinyals</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Quoc</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="3104" to="3112" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Bidirectional recurrent neural networks</title>
		<author>
			<persName><forename type="first">Mike</forename><surname>Schuster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kuldip</forename><forename type="middle">K</forename><surname>Paliwal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Signal Processing</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="2673" to="2681" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Benchmarking news recommendations: The clef newsreel use case</title>
		<author>
			<persName><forename type="first">Frank</forename><surname>Hopfgartner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Torben</forename><surname>Brodt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jonas</forename><surname>Seiler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Benjamin</forename><surname>Kille</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andreas</forename><surname>Lommatzsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martha</forename><surname>Larson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Roberto</forename><surname>Turrin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">András</forename><surname>Serény</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ACM SIGIR Forum</title>
				<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="page" from="129" to="136" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">Empirical evaluation of gated recurrent neural networks on sequence modeling</title>
		<author>
			<persName><forename type="first">Junyoung</forename><surname>Chung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Caglar</forename><surname>Gulcehre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kyunghyun</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoshua</forename><surname>Bengio</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1412.3555</idno>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
