<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Learning to Rank Research Articles: A case study of collaborative filtering and learning to rank in ScienceDirect</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Daniel</forename><surname>Kershaw</surname></persName>
							<affiliation key="aff0">
								<address>
									<settlement>Lisbon</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Benjamin</forename><surname>Pettit</surname></persName>
							<affiliation key="aff0">
								<address>
									<settlement>Lisbon</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Maya</forename><surname>Hristakeva</surname></persName>
							<affiliation key="aff0">
								<address>
									<settlement>Lisbon</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kris</forename><surname>Jack</surname></persName>
							<affiliation key="aff0">
								<address>
									<settlement>Lisbon</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Learning to Rank Research Articles: A case study of collaborative filtering and learning to rank in ScienceDirect</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">5CB96DC04C9C403A0900B3CB19622E4A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:41+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Online academic repositories help millions of researchers discover relevant articles, a domain in which there are many potential signals of relevance, including text, citation links, and how recently an article was published. In this paper we present a case study of productionizing learning to rank for large-scale recommendation, which utilises these diverse feature sets to increase user engagement. We first introduce item-to-item collaborative filtering (CF), then describe how these recommendations are re-scored with an LtR model. We then describe offline and online evaluation, which are essential for productionizing any recommender. The online results show that learning to rank significantly increased user engagement with the recommender. Finally we show through post-hoc analysis that the original CF solution tended to promote older articles with lower traffic. However, by learning from subjective user interactions with the recommender system, our relevance model reversed those trends.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The rate of scientific discovery is ever increasing, with new methods, theory and practice being published each day. This growth poses a challenge for researchers, who need to stay up to date. Online academic catalogues, such as ScienceDirect, <ref type="foot" target="#foot_0">1</ref> give users access to large amounts of peer-reviewed scientific publications. However, the experience of using such catalogues can be characterised as a combination of information overload <ref type="bibr" target="#b3">[4]</ref> and information shortage <ref type="bibr" target="#b11">[12]</ref>. First, when encountering a large catalogue of information there is no simple way for the user to read, comprehend and critique all the documents. Additionally, when browsing, users may not be discovering content that they would deem relevant.</p><p>The specific use case we aim to address is to help users of ScienceDirect find additional relevant articles that go beyond explicitly encoded relationships such as authorship or publication venue. In contrast to the personalised research article recommendations in platforms such as Mendeley <ref type="bibr" target="#b12">[13]</ref> and CiteULike <ref type="bibr" target="#b18">[19]</ref>, we want an approach that works in the absence of profile data or reading history.</p><p>In this paper, we describe how we built an initial system using item-based collaborative filtering (IBCF). Once this system was in production, we collected data on which recommendations users preferred, and then trained a Learning to Rank (LtR) model to re-rank the recommendations using a range of article features and similarity metrics. We focus on the evaluation methodology and how the recommendations surfaced by LtR differed from the pure collaborative filtering system. 
These comparisons give insight into what the LtR system achieves that could not be done with collaborative filtering (CF) alone.</p><p>Recommender Systems (RS) have become key tools in a researcher's content discovery toolkit, as they not only allow researchers to navigate large and ever-growing catalogues more efficiently, but also to discover content that they would not have seen otherwise. Previous work draws a distinction between methods used for personalised and non-personalised approaches. This can be seen in the use of implicit feedback and CF for personalised recommendations <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b18">19]</ref>, whereas content-based methods are used predominantly for non-personalised recommendations <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21,</ref><ref type="bibr" target="#b15">16]</ref>. In combining CF and LtR we overcome some of the limitations of CF by reducing the dependence on current navigation patterns, allowing us to take into account the content of the article <ref type="bibr" target="#b6">[7]</ref> and how recently it was published. Unlike purely content-based recommendations, this system adapts to how the articles are being used and which recommendations users engage with.</p><p>Through this research, we ultimately found that to train and evaluate a better model of article relevance, it is necessary to go beyond trying to predict what users will browse next. While the implicit feedback from article browsing is valuable for CF, it is limited by the very same information shortage problem that the recommender system attempts to mitigate. In addition to developing the model, we present some post-hoc analysis that sheds light on some of the biases in the CF results that are reversed by applying LtR. Our contributions can be summarised by the following points:</p><p>1. 
Offline evaluation should be matched to the online challenge: By comparing two offline evaluation scenarios with an online experiment, we show that higher accuracy at predicting browsing behaviour does not correspond to the highest engagement from users. 2. The winning ranking model depends on text, usage, article age, and the citation network: By training a number of LtR models we show that the relevance of a document to a user is a combination of textual similarity, recency, popularity, usage similarity, and proximity in the citation network. Traditional bibliometric features have limited impact. 3. The ranking model increases diversity: On average, the ranking model increases the number of distinct journals in each list of related items. 4. The ranking model promotes recently published items that have more traffic in the past year: Although other collaborative filtering systems have reported a popularity bias <ref type="bibr" target="#b1">[2]</ref>, in this case collaborative filtering has a bias towards unpopular items, which the LtR system reverses.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>A wide variety of techniques have been used to generate recommendations for academic publications. <ref type="bibr" target="#b20">[21]</ref> and <ref type="bibr" target="#b19">[20]</ref> focused on using hierarchical clustering of citation networks to make recommendations that distinguish between core papers in a discipline and important papers in sub-fields. Meanwhile, <ref type="bibr" target="#b9">[10]</ref> and <ref type="bibr" target="#b17">[18]</ref> used author-defined keywords and tags to identify similar documents. Explicit article metadata are not the only signals used to generate recommendations: Mendeley Suggest <ref type="bibr" target="#b12">[13]</ref>, for example, generates personalised recommendations from implicit feedback with collaborative filtering.</p><p>LtR has traditionally been used in information retrieval (IR) systems such as search engines <ref type="bibr" target="#b13">[14]</ref>. For example, Walmart demonstrated that LtR could be used to improve the ranking of grocery search results <ref type="bibr" target="#b14">[15]</ref>. They used features mined from images of the products, and experimented with optimising the models for different actions, such as clicking on the item or buying the item.</p><p>To aid research into LtR, a number of frameworks have been developed which allow for the training and testing of models. LEROT <ref type="bibr" target="#b0">[1]</ref> is an "online learning to rank framework". Additionally, RankLib <ref type="bibr" target="#b7">[8]</ref> is a Java framework that contains a number of standard LtR models, which will be discussed next. These frameworks have made research reproducible and easily applicable across a number of domains, and implement a range of algorithms spanning point-wise, pair-wise and list-wise approaches.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Data sets</head><p>Recommendation generation has two distinct phases: first we generate candidate recommendations using CF, then re-rank them with an LtR model to produce the final recommendation list. Each phase uses a different data set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Article browsing logs</head><p>The main data set for CF contains implicit feedback from users as they browse ScienceDirect. This usage data set is in the format of &lt;sessionID, articleID, accessTime&gt;. We apply IBCF on this data set to find candidate related articles based on co-usage patterns. High-traffic users are removed, as these can represent public-access machines in institutions. Additionally, we remove traffic that was elicited by the recommender system, in order to avoid a positive feedback loop.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Recommender logs</head><p>LtR requires labelled training data that represents user preferences in relation to the recommendation list. We computed relevance labels by aggregating clicks and impressions from the live CF recommender on ScienceDirect. When users visit a ScienceDirect article page, a set of related-article recommendations is displayed (i.e. impressions) and they may click on one or more of the recommendations to view or download the article. We ignored impression data for page loads where none of the recommendations were clicked, or where all of the recommendations were clicked. For each query article, we aggregated the recommended articles across all user sessions. For simplicity we treat relevance as a binary label: 1 if the recommended article was clicked at least once, otherwise 0.</p></div>
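The labelling rule above can be sketched in a few lines. This is an illustrative sketch rather than the production pipeline: the row layout (page id, query article, recommended article, clicked flag) and the function name are assumptions for the example.

```python
from collections import defaultdict

def label_impressions(rows):
    # rows: (page_id, query_article, recommended_article, clicked) tuples,
    # one per displayed recommendation. Group them by page load so that
    # pages where none, or all, of the recommendations were clicked can
    # be discarded as uninformative.
    pages = defaultdict(list)
    for page_id, query, rec, clicked in rows:
        pages[page_id].append((query, rec, clicked))

    labels = {}  # (query, rec) -> binary relevance, aggregated over sessions
    for impressions in pages.values():
        n_clicked = sum(1 for _, _, c in impressions if c)
        if n_clicked == 0 or n_clicked == len(impressions):
            continue  # skip none-clicked and all-clicked page loads
        for query, rec, clicked in impressions:
            if clicked:
                labels[(query, rec)] = 1  # clicked at least once anywhere
            else:
                labels.setdefault((query, rec), 0)
    return labels
```

Note that a recommendation keeps label 1 if it was clicked on any qualifying page load, even if it was shown without a click elsewhere, matching the "clicked at least once" rule.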
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Article metadata</head><p>Each article has a large amount of data and metadata associated with it, including the title, authors, abstract, and references, as well as various metrics covering article usage, journal impact, and citations. These additional data sets are used to generate features for the LtR models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Recommendation Method</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Collaborative Filtering</head><p>At the core of IBCF is the method proposed by <ref type="bibr" target="#b16">[17]</ref>, which finds similar items (documents) based on the similarity between their usage vectors. This means that for a given document (d) the function sim(d, D) returns a set of M similar documents based on their co-usage patterns. Within this nearest-neighbour method, we use cosine similarity to identify documents that have been browsed in the same sessions. Here, S_d is the set of sessions in which document d was browsed.</p><formula xml:id="formula_0">cosine(d, d′) = |S_d ∩ S_d′| / √(|S_d| × |S_d′|)<label>(1)</label></formula><p>Using cosine similarity to score neighbours ignores the statistical confidence in the correlation between usage patterns. For example, many documents were not viewed in very many sessions, so a similar document d′ may have only one or two sessions in common with the focal document d, but nonetheless rank highly in terms of cosine(d, d′) because |S_d′| is also small. Therefore, we scale cosine similarity with a significance weighting computed from the number of sessions in common:</p><formula xml:id="formula_1">score(d, d′) = min(1, |S_d ∩ S_d′| / q) × cosine(d, d′)<label>(2)</label></formula><p>If the documents have fewer than q sessions in common, then their contribution to the CF score is scaled down. This means that preference is given to recommendations generated from high co-occurrence neighbours, which are more likely to be related to the focal document d. An alternative would be to discard pairs with low co-occurrence, but to keep catalogue coverage high we chose to keep them and reduce their scores instead.</p></div>
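Equations 1 and 2 can be sketched directly from the session-set definitions. The threshold q = 5 is an arbitrary illustrative value; the paper does not report the threshold actually used.

```python
import math

def cosine(sessions_d, sessions_e):
    # Equation 1: overlap of the two documents' session sets, normalised
    # by the geometric mean of the set sizes.
    overlap = len(sessions_d.intersection(sessions_e))
    if overlap == 0:
        return 0.0
    return overlap / math.sqrt(len(sessions_d) * len(sessions_e))

def cf_score(sessions_d, sessions_e, q=5):
    # Equation 2: pairs with fewer than q sessions in common have their
    # cosine similarity scaled down rather than being discarded, keeping
    # catalogue coverage high.
    overlap = len(sessions_d.intersection(sessions_e))
    return min(1.0, overlap / q) * cosine(sessions_d, sessions_e)
```

For example, two documents sharing 2 of their 4 sessions each have cosine 0.5, but with q = 5 the significance weight 2/5 reduces the final score to 0.2.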
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Learning to Rank</head><p>Once the set of recommendations (R_d) has been generated for each document (d ∈ D), it is re-scored using a pre-trained LtR model. The premise of LtR is to rank higher those items which users are more likely to engage with, learned by observing their past actions. Training data take the form of n labelled query documents, q_i (i = 1, ..., n), each with associated recommended documents represented as feature vectors with relevance judgements</p><formula xml:id="formula_2">x_i = {x_j^(i)}, j = 1, ..., m^(i)</formula><p>where m^(i) is the number of recommendations for query q_i.</p><p>For this work we focus on methods which have been implemented in the LtR package RankLib <ref type="bibr" target="#b7">[8]</ref>. This is a Java application which can apply popular LtR methods to an SVMRank-formatted file. The algorithms we compared included RankNet, LambdaRank, MART, and LambdaMART. These LtR models represent both pair-wise and list-wise objectives.</p><p>RankNet <ref type="bibr" target="#b4">[5]</ref> is a pair-wise neural network algorithm. The objective function is cross entropy, which aims to minimise the number of inversions in the ranking. However, this pair-wise objective does not optimise for the whole list, unlike list-wise methods such as LambdaRank and LambdaMART. LambdaRank <ref type="bibr" target="#b5">[6]</ref> built on RankNet by modifying the cost function to use gradients, λ, which also take into account the change in a list-wise IR metric such as NDCG. LambdaMART <ref type="bibr" target="#b5">[6]</ref> combines LambdaRank and multiple additive regression trees (MART), using the cost function from LambdaRank rather than from MART, thus optimising for the whole list. Out of the available models, LambdaMART is generally considered state-of-the-art, and has performed well in competitions <ref type="bibr" target="#b5">[6]</ref>.</p></div>
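RankLib reads training data in the SVMRank/LETOR text format, one line per query-recommendation pair, with all lines sharing a qid forming one ranking instance. A minimal writer might look like the following; the function names are ours, and the feature values are whatever the feature extraction step produces.

```python
def to_svmrank_line(label, qid, features):
    # One row of the SVMRank/LETOR format RankLib consumes:
    # "label qid:id 1:value 2:value ...", with features 1-indexed.
    parts = [str(label), f"qid:{qid}"]
    parts += [f"{i}:{v:g}" for i, v in enumerate(features, start=1)]
    return " ".join(parts)

def write_training_file(path, samples):
    # samples: (label, qid, feature_vector) tuples. Rows that share a
    # qid together form one query's candidate ranking.
    with open(path, "w") as fh:
        for label, qid, feats in samples:
            fh.write(to_svmrank_line(label, qid, feats) + "\n")
```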
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Feature Extraction</head><p>The features for the LtR models are taken from the query document and the recommended document, as well as from the interaction between the two. These features can be grouped into seven categories.</p><p>CF score: A measure of co-usage of the query document and the candidate recommendation (Equation <ref type="formula" target="#formula_1">2</ref>).</p><p>Popularity: The popularity (number of views) of a document potentially indicates its quality or its future engagement.</p><p>Citations: If two documents (the query document and the recommendation) share references, this could indicate a quality recommendation, and likewise if both are cited by the same article. To quantify this, we compute two measures, the first being the Jaccard index between the citation neighbourhoods,</p><formula xml:id="formula_3">cite_sim(C_d, C_d′) = |C_d ∩ C_d′| / |C_d ∪ C_d′|<label>(3)</label></formula><p>where the neighbourhood C_d is the set of articles that either cite document d or are cited by document d, plus document d itself. The second measure is the total number of citations a document has received.</p><p>Journal Metrics: Impact factors are potential predictors of the quality of the research, although a weakness is the huge variation in article impact within a journal. We added to the feature set several impact metrics of the journal where the recommended article was published.</p><p>Temporal: We represent age as the number of years since the cover date.</p><p>Topics: All articles published are tagged with a scientific taxonomy, indicating which topics and subjects they cover. 
We calculate a binary similarity between the sets of topics associated with each document.</p><p>Text: We included the similarity of the recommended document's text to the query document's text, where text is represented as an n-gram tf-idf vector of the document's title and abstract.</p><p>Training: We used query-recommendation pairs with relevance labels inferred from recommender logs, as described in Section 3.2. We held out 20% of the query articles from the training data as a validation set, and trained on the remaining 80%. The validation set was used for hyper-parameter tuning and feature selection.</p><p>For hyper-parameter tuning, LambdaMART and MART require choosing the learning rate, the number of trees, maximum leaves per tree, and minimum training examples per leaf. With LambdaMART, we used 20 leaves per tree, with at least 200 examples per leaf and a learning rate of 0.1. The number of trees (∼ 250) was determined by early stopping, again based on the validation set. The final model used was LambdaMART, optimising for NDCG@3.</p><p>We pruned features through backwards elimination: removing the least important feature (the one whose removal gave the highest NDCG@3), and then repeating the process as long as NDCG@3 increased on the validation set. In Section 6.3 we compare the importance of the different types of features.</p></div>
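Two of the interaction features above are simple set computations. The sketch below implements the citation Jaccard index (Equation 3) and one plausible reading of the binary topic similarity; the exact topic comparison used in production is not specified in the text.

```python
def citation_similarity(c_d, c_e):
    # Equation 3: Jaccard index between two citation neighbourhoods,
    # where each neighbourhood contains the articles citing or cited by
    # a document, plus the document itself.
    union = c_d.union(c_e)
    if not union:
        return 0.0
    return len(c_d.intersection(c_e)) / len(union)

def topic_overlap(topics_d, topics_e):
    # Binary topic feature: 1 if the documents share any taxonomy topic,
    # else 0. This is an assumed interpretation of "binary similarity".
    return int(bool(topics_d.intersection(topics_e)))
```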
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Dithering</head><p>The candidate recommendations are ranked using the LtR model or, in the initial version of the system, by their CF scores. Before selecting the top portion of the list to display to the user, we apply dithering <ref type="bibr" target="#b8">[9]</ref>, so that a larger proportion of the list is explored. Dithering is the process of adding Gaussian noise to the items' ranks, thus slightly shuffling the list: new_score = log(rank) + N(0, log ε), where ε governs the typical relative change in rank (∆rank/rank) and typically ε ∈ [1.5, 3]. Over time, items at lower ranks will eventually be shown to users, allowing us to collect feedback on their quality as recommendations. This is important because the LtR model is trained on impressions and clicks of recommendations (Section 3.2). Dithering makes the training data for LtR less constrained by the previous iteration of the recommender system.</p></div>
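The dithering step can be sketched as follows, reading N(0, log ε) as a Gaussian with variance log ε. The default ε = 2.0 is an arbitrary choice within the [1.5, 3] range mentioned above.

```python
import math
import random

def dither(ranked_items, epsilon=2.0):
    # Perturb each item's log-rank with Gaussian noise of variance
    # log(epsilon), then re-sort. Because ranks are compared on a log
    # scale, nearby positions swap often while distant ones rarely do.
    noisy = []
    for rank, item in enumerate(ranked_items, start=1):
        score = math.log(rank) + random.gauss(0.0, math.sqrt(math.log(epsilon)))
        noisy.append((score, item))
    noisy.sort()
    return [item for _, item in noisy]
```

Each call produces a slightly different permutation of the same candidate list, so lower-ranked items are eventually exposed to users.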
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Evaluation Method</head><p>We used two different tasks to evaluate LtR offline. First, we test it on recommendation click prediction, the same task that the model was trained on. Then we test whether the model transfers to a session prediction task.</p><p>In the recommendation click prediction task, each test case consists of a query article and its recommended articles that were displayed to users. Some of the recommendations were clicked (labelled 1) and some were not clicked (labelled 0). The recommender is evaluated on its ability to rank the clicked items higher than the non-clicked, using NDCG@k. This is the same ranking task that the LtR model was trained on, but the test data came from a time interval after the training and validation data. Its limitation is that it only evaluates performance on the recommendations that were displayed by the incumbent recommender system, so it cannot judge articles that a new ranker will introduce into the top k.</p><p>In the session prediction task, each test case consists of a sequence of articles that were browsed in the same user session, excluding browsed articles elicited by the recommender system. The top k recommendations for the first item are compared to what the user actually browsed next. The recommender is evaluated on its ability to rank the browsed items highly (NDCG@k). We have found this evaluation procedure useful for tuning CF candidate selection, where it correctly predicted which CF variant had the higher click-through rate (CTR).</p><p>For both types of evaluation, we split the data set on a time boundary (τ ), train the model on data generated before τ , and then test the model to see if it predicts the actions after τ . This means that before τ we use the clicks in sessions (browsing between articles) to train the CF model, while clicks on the already deployed CF recommender are used to train the LtR model. 
We use actions after τ as ground truth: browsed sessions are used for session prediction, and clicks on the deployed recommender for click prediction. Although they were collected in the same time window, the test sets from the two evaluation tasks use mutually exclusive subsets of the logs, e.g. the first only uses recommender clicks, whereas the second excludes recommender clicks.</p><p>We calculated a random baseline for each evaluation task. For session prediction, the baseline is a random permutation of the candidate recommendation list for each query article. For recommendation click prediction, the baseline is a random permutation of the displayed recommendations for each query article.</p></div>
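Both offline tasks score rankings with NDCG@k over binary relevance labels, which can be sketched as:

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain over the top k positions; position 1
    # is discounted by log2(2) = 1, position 2 by log2(3), and so on.
    return sum(rel / math.log2(pos + 1)
               for pos, rel in enumerate(relevances[:k], start=1))

def ndcg_at_k(ranked_relevances, k):
    # NDCG@k: DCG of the ranking under evaluation, normalised by the
    # DCG of an ideal ordering of the same relevance labels.
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    if ideal_dcg == 0:
        return 0.0
    return dcg_at_k(ranked_relevances, k) / ideal_dcg
```

With binary labels, a ranker that places every clicked (or browsed) item above every unclicked one scores 1.0; mis-ordered lists score lower in proportion to how far down the relevant items sit.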
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1">Offline evaluation</head><p>LambdaMART performed best in terms of NDCG@3 on a time-split test set of recommender clicks and impressions (Table <ref type="table" target="#tab_2">1</ref>).</p><p>Having chosen a modelling approach for LtR, we tested whether LambdaMART's performance gains transferred to the session-prediction task. As shown in Figure <ref type="figure" target="#fig_0">1</ref>, the ranking using LambdaMART was an improvement over CF score on the click prediction task, but not on the session prediction task. The LambdaMART and CF score rankings outperformed the random baseline in both tasks (Figure <ref type="figure" target="#fig_0">1</ref>). Nonetheless, we decided to proceed with online evaluation, because the click prediction task is based on user behaviour in the recommendation context, and we therefore thought it would be a better proxy for online performance. We discuss the discrepancy between the two offline evaluation tasks in more detail in Section 7.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">Online evaluation</head><p>Our best candidate algorithms from offline evaluation are compared against the current production model via A/B testing. Prior to the work on LtR, we had A/B tested a number of iterative improvements on the CF stage, for example by varying the duration of log data ingested to generate the recommendations. These A/B tests included increasing the time period of input data for CF candidate selection, which resulted in a CTR change of +9.6%, and re-ranking with the LambdaMART model, which increased CTR by +9.1%.</p><p>For LtR, we carried out an online A/B experiment comparing LtR (using the LambdaMART model in Figure <ref type="figure" target="#fig_0">1</ref>) to our best CF algorithm. Each user is randomly allocated to one of two treatments, either receiving recommendations from the incumbent CF system or from the LtR variant. The LtR variant resulted in a statistically significant 9.1% increase in CTR. This result agreed with the offline performance of LtR on the click prediction task, but disagreed with its performance on the session prediction task (Figure <ref type="figure" target="#fig_0">1</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3">Feature importance</head><p>Figure <ref type="figure" target="#fig_1">2</ref> shows the effect of removing each category of features, as introduced in Section 3. Using backwards elimination (Section 4.2), we obtained a slight increase in the validation metric (NDCG@3). Using fewer features also reduced training time and simplified the feature extraction pipeline. The journal metrics were all removed, and we retained one or two features from each of the other categories. The evaluation steps described above used this tuned LambdaMART model with a reduced feature set.</p><p>As one can see in Figure <ref type="figure" target="#fig_1">2</ref>, CF score was the most important feature, in that removing it resulted in the greatest drop in NDCG@3 on the held-out validation set. The feature importance for our ranking model should not be interpreted as what makes an article "relevant" in general, because the training data were collected through a specific user interface that displayed some article metadata and not others, and furthermore the order of the articles was determined by CF score. Instead, the impact of features within the model guided us as to which could be safely discarded (e.g. journal impact metrics). </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.4">Post-hoc analysis</head><p>Item Age: One of the main motivations behind research article discovery tools is to help people stay up to date with recent developments in their fields. Therefore we examined the mean age of recommended documents in each rank position, where age is defined as the current year minus the publication year of the recommended article. As shown in Figure <ref type="figure" target="#fig_2">3a</ref>, recommendations ranked by CF scores are biased against newly published articles, with higher-ranking articles tending to be older. When ranked by the LtR model, there is a bias towards younger articles instead.</p><p>The bias towards older articles, when ranked by CF, is especially pronounced where the query article is relatively cold, in other words where it had less usage data available for collaborative filtering (Figure <ref type="figure" target="#fig_2">3b</ref>). Figure <ref type="figure" target="#fig_2">3b</ref> bins the query items based on their popularity within the CF input data, then for each query item computes the Spearman's rank correlation (ρ_s) between the recommendation age and rank (ordered either by LtR or CF score), and finally takes the mean of ρ_s within each bin. A positive value of ρ_s indicates that older recommended articles tend to be ranked higher.</p><p>However, for recommendations ranked by LtR this bias towards older articles is reduced (lower values of ρ_s in every bin), resulting in warm articles having recommendations that favour newer publications.</p><p>Popularity: In addition to favouring older articles, our CF method also tends to give higher scores to less popular articles (Figure <ref type="figure" target="#fig_2">3b</ref>), where popularity is measured as the number of unique users viewing the item in the 12 months of input data for CF (assessed with the same binning method as for item age). 
This is the opposite pattern to the popularity bias often found in UBCF <ref type="bibr" target="#b1">[2]</ref>. Like the age bias, it is most pronounced for colder source articles in the bottom two quartiles of popularity. LtR reverses this trend and instead biases the recommendations towards more popular items. Although this is not necessarily a desirable outcome for all recommender systems, it is not so surprising in this case, where popularity metrics were available as a feature for the relevance model. Despite the bias towards popular items, LtR caused only a 2.1% reduction in item coverage@3, with over 90% of recommendable items appearing in the top three recommendations for at least one query article.</p><p>Diversity: One of the challenges of RS is to highlight content from across a catalogue. Diversity in recommendations has been shown to increase users' interactions and perception of fairness <ref type="bibr" target="#b1">[2]</ref>. However, one of the issues with our system is that traffic can be concentrated within the same journal, which leads to some recommendation lists being drawn entirely from a single journal. Thus we aim to understand how LtR and CF affect diversity within the set of recommendations.</p><p>To quantify diversity within our recommendations we look at the number of recommendations that come from different journals, with journals identified by distinct ISSNs.</p><p>As one can see in Table <ref type="table" target="#tab_3">2</ref>, LtR caused a slight increase in diversity, in terms of the number of different journals in the top three recommendations for each query article. To confirm that this change in diversity is significant, we performed a chi-square test (χ²): the contingency table showed a significant dependency between the ranker and the number of distinct journals (p &lt; 10^−16).</p></div>
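The journal-diversity check is a standard chi-square test of independence on a rankers-by-journal-count contingency table. The counts below are invented purely to show the shape of the test; the real figures are in Table 2.

```python
from scipy.stats import chi2_contingency

# Rows: rankers (CF, LtR); columns: number of query articles whose top-3
# recommendations span 1, 2, or 3 distinct journals. Hypothetical counts.
table = [
    [5400, 3100, 1500],  # CF
    [4900, 3300, 1800],  # LtR
]

# dof = (rows - 1) * (cols - 1) = 2 for a 2x3 table.
stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={stat:.1f}, dof={dof}, p={p_value:.2e}")
```

A small p-value means the distribution of distinct-journal counts depends on which ranker produced the lists, i.e. the difference in diversity is unlikely to be chance.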
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Discussion</head><p>In this paper we set out to show how LtR is applied to item-to-item academic paper recommendations. In doing so we were able to significantly improve user engagement with the RS, using a model trained on user interaction with the recommender system. The model learned how to combine many indicators of article relevance, including co-usage, popularity, age, text, topic tags, and shared citations. As with previous studies, we found that gradient boosted decision trees (MART, LambdaMART) performed well for this task <ref type="bibr" target="#b5">[6]</ref>, and that list-wise optimisation of the LambdaMART model was an additional advantage.</p><p>However, when it came to predicting what users would browse next, in the absence of recommendations, the model did not improve on IBCF. This is surprising, because we had found session prediction to be a good indicator of online performance, when tuning the collaborative filtering system. Recommendation is often framed as a problem of session prediction, for example in session-based recommendations with recurrent neural networks <ref type="bibr" target="#b10">[11]</ref>, but we found it had shortcomings.</p><p>Designing offline evaluation raises the question of what the recommender system is designed to do. The session-prediction task assumes the goal of recommendation is to predict human browsing behaviour. Although it was a good proxy for online performance of the candidate selection step, the session-prediction task did not agree with online results when it came to choosing a ranking method.</p><p>One explanation for why CF outperformed the LtR model on the session prediction task is that users' browsing behaviour outside of the recommender system is at odds with what they actually find useful as recommendations. 
Users might struggle to find the most relevant articles, which is the very problem of information shortage that motivated us to create the recommender. For example, a recently published article may be less well known within its field, and yet when it is presented in a recommendations list it attracts more attention than older articles.</p><p>The post-hoc analysis demonstrated some trends in the types of items recommended by IBCF, which LtR counteracted. Cold start and data sparsity are well-known challenges in collaborative filtering. Even among warm items with usage data, those with fewer users are expected to have less accurate recommendations. We found that the less popular the query article, the more the CF system tended to promote old or unpopular articles as recommendations. Based on which articles users clicked, the LtR system learned to promote articles that were published more recently or had more activity in the past year.</p><p>Based on these results, we should be able to improve the model by including as a feature the popularity of the query article (not just the recommended article). That would allow the tree-ensemble model to learn different relevance functions depending on how warm the source article is, for example giving less importance to CF score for items with sparser usage data. In theory, the model could also counteract the unpopularity-bias for colder articles without adding a popularity bias to the warmer articles' recommendations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8">Conclusion</head><p>In this work we have demonstrated the benefits of LtR for improving upon a production CF system, which is especially important in a domain such as research articles, where there are vast catalogues of recommendable items, each with high-quality structured metadata. We showed how changes to the production system were ultimately decided on the basis of online evaluation, but given the number of models and parameter choices, we also needed an offline evaluation that was a good proxy for online performance.</p><p>The winning LambdaMART model combines co-usage, semantic similarity, shared citations, popularity, and recency. The original IBCF solution tended to promote older articles with lower traffic, but by learning from users' interactions with the recommender system, our relevance model reversed those trends and thereby overcame some of the limitations of CF.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. LambdaMART performance on two offline evaluation tasks, compared to ranking by CF scores and a random baseline.</figDesc><graphic coords="7,169.35,266.34,276.66,138.33" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Impact of removing each category of feature from LambdaMART model</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Post-hoc analysis results</figDesc><graphic coords="10,137.84,126.38,110.66,73.77" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>2. The winning ranking model depends on text, usage, article age, and the citation network: by training a number of LtR models we show that the relevance of a document to a user is a combination of textual similarity, recency, popularity, usage similarity, and proximity in the citation network; traditional bibliometric features have limited impact. 3. The ranking model increases diversity: on average, the ranking model increases the number of distinct journals in each list of related items. 4. The ranking model promotes recently published items that have more traffic in the past year, although other collaborative filtering systems have reported a popularity bias <ref type="bibr" target="#b1">[2]</ref>.</figDesc><table /><note></note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1 .</head><label>1</label><figDesc>Comparison of LtR models using the click prediction task on both the time-split test set and validation set.</figDesc><table><row><cell>Ranker</cell><cell>NDCG@3 Validation</cell><cell>NDCG@3 Test</cell></row><row><cell>Random</cell><cell>0.753</cell><cell>0.768</cell></row><row><cell>CF score</cell><cell>0.802</cell><cell>0.787</cell></row><row><cell>LambdaMART</cell><cell>0.814</cell><cell>0.807</cell></row><row><cell>MART</cell><cell>0.813</cell><cell>0.804</cell></row><row><cell>RankBoost</cell><cell>0.805</cell><cell>0.797</cell></row><row><cell>AdaRank</cell><cell>0.802</cell><cell>0.787</cell></row><row><cell>Random forests</cell><cell>0.808</cell><cell>0.798</cell></row></table></figure>
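The NDCG@3 metric reported in Table 1 can be computed as below. This is a minimal sketch assuming binary click labels and the standard log2 discount; the paper's exact gain and tie-breaking conventions are an assumption.

```python
from math import log2

def dcg_at_k(labels, k=3):
    """Discounted cumulative gain over the top-k labels in ranked order."""
    return sum(rel / log2(pos + 2) for pos, rel in enumerate(labels[:k]))

def ndcg_at_k(labels, k=3):
    """DCG of the ranking, normalised by the DCG of the ideal ordering."""
    ideal = dcg_at_k(sorted(labels, reverse=True), k)
    return dcg_at_k(labels, k) / ideal if ideal > 0 else 0.0

# Labels of recommended articles in ranked order: 1 = clicked, 0 = not clicked.
print(round(ndcg_at_k([0, 1, 1, 0]), 3))  # → 0.693
```

A perfect ranking (all clicked articles first) scores 1.0, so the gap between LambdaMART (0.807) and Random (0.768) on the test set is measured on this normalised scale.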
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2 .</head><label>2</label><figDesc>Journal diversity</figDesc><table><row><cell>Distinct Journals@3</cell><cell cols="2">% of Query Articles</cell></row><row><cell></cell><cell>CF score</cell><cell>LambdaMART</cell></row><row><cell>1</cell><cell>13.78%</cell><cell>12.27%</cell></row><row><cell>2</cell><cell>29.65%</cell><cell>32.16%</cell></row><row><cell>3</cell><cell>56.57%</cell><cell>55.57%</cell></row></table></figure>
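The journal-diversity statistic in Table 2 counts how many distinct journals appear among each query article's top-3 recommendations, then reports the share of query articles at each count. A sketch of that computation follows; the journal assignments here are invented toy data.

```python
from collections import Counter

# Hypothetical top-3 lists: query article -> journals of its recommendations.
top3_journals = {
    "q1": ["J. Neurosci", "J. Neurosci", "J. Neurosci"],
    "q2": ["Cell", "Nature", "Cell"],
    "q3": ["Lancet", "BMJ", "NEJM"],
    "q4": ["Cell", "Nature", "Science"],
}

def diversity_distribution(lists):
    """Share of query articles whose top-3 span 1, 2 or 3 distinct journals."""
    counts = Counter(len(set(journals)) for journals in lists.values())
    total = len(lists)
    return {n: counts.get(n, 0) / total for n in (1, 2, 3)}

print(diversity_distribution(top3_journals))  # → {1: 0.25, 2: 0.25, 3: 0.5}
```

A shift of mass from the "1 distinct journal" bucket toward "2" or "3", as in Table 2, is what the paper describes as the ranking model increasing diversity.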
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.sciencedirect.com</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2020" xml:id="foot_1">BIR 2020 Workshop on Bibliometric-enhanced Information Retrieval</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_2">https://www.mendeley.com/suggest</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_3">https://www.citeulike.com</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Lerot: an online learning to rank framework</title>
		<author>
			<persName><forename type="first">S</forename><surname>Whiteson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>De Rijke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schuth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hofmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Living Labs for Information Retrieval Evaluation workshop at CIKM&apos;13</title>
				<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Controlling popularity bias in learning-to-rank recommendation</title>
		<author>
			<persName><forename type="first">Himan</forename><surname>Abdollahpouri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robin</forename><surname>Burke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bamshad</forename><surname>Mobasher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eleventh ACM Conference on Recommender Systems, RecSys &apos;17</title>
				<meeting>the Eleventh ACM Conference on Recommender Systems, RecSys &apos;17<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="42" to="46" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Research-paper recommender systems: a literature survey</title>
		<author>
			<persName><forename type="first">Joeran</forename><surname>Beel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bela</forename><surname>Gipp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Langer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Corinna</forename><surname>Breitinger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal on Digital Libraries</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="305" to="338" />
			<date type="published" when="2016-11">Nov 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Game-theoretic models of information overload in social networks</title>
		<author>
			<persName><forename type="first">Christian</forename><surname>Borgs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jennifer</forename><surname>Chayes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Brian</forename><surname>Karrer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Brendan</forename><surname>Meeder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ravi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ray</forename><surname>Reagans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amin</forename><surname>Sayedi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Algorithms and Models for the Web-Graph</title>
				<editor>
			<persName><forename type="first">Ravi</forename><surname>Kumar</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Dandapani</forename><surname>Sivakumar</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin, Heidelberg; Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="146" to="161" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Learning to rank using gradient descent</title>
		<author>
			<persName><forename type="first">Chris</forename><surname>Burges</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tal</forename><surname>Shaked</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Erin</forename><surname>Renshaw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ari</forename><surname>Lazier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matt</forename><surname>Deeds</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nicole</forename><surname>Hamilton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Greg</forename><surname>Hullender</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd International Conference on Machine Learning, ICML &apos;05</title>
				<meeting>the 22nd International Conference on Machine Learning, ICML &apos;05<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="89" to="96" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">From RankNet to LambdaRank to LambdaMART: An overview</title>
		<author>
			<persName><forename type="first">Christopher</forename><forename type="middle">J C</forename><surname>Burges</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Comparison of collaborative filtering algorithms: Limitations of current techniques and proposals for scalable, high-performance recommender systems</title>
		<author>
			<persName><forename type="first">Fidel</forename><surname>Cacheda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Víctor</forename><surname>Carneiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Diego</forename><surname>Fernández</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vreixo</forename><surname>Formoso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Web</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">33</biblScope>
			<date type="published" when="2011-02">February 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">RankLib</title>
		<author>
			<persName><forename type="first">Van</forename><surname>Dang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Practical Machine Learning: Innovations in Recommendation</title>
		<author>
			<persName><forename type="first">Ted</forename><surname>Dunning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ellen</forename><surname>Friedman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014-08">August 2014</date>
			<publisher>O&apos;Reilly Media, Inc</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A keyphrase-based paper recommender system</title>
		<author>
			<persName><forename type="first">Felice</forename><surname>Ferrara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nirmala</forename><surname>Pudota</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Carlo</forename><surname>Tasso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Digital Libraries and Archives</title>
				<editor>
			<persName><forename type="first">Maristella</forename><surname>Agosti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Floriana</forename><surname>Esposito</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Carlo</forename><surname>Meghini</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Nicola</forename><surname>Orio</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin, Heidelberg; Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="14" to="25" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Session-based recommendations with recurrent neural networks</title>
		<author>
			<persName><forename type="first">Balázs</forename><surname>Hidasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexandros</forename><surname>Karatzoglou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Linas</forename><surname>Baltrunas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Domonkos</forename><surname>Tikk</surname></persName>
		</author>
		<idno>CoRR, abs/1511.06939</idno>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Learning to rank social update streams</title>
		<author>
			<persName><forename type="first">Liangjie</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ron</forename><surname>Bekkerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joseph</forename><surname>Adler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Brian</forename><forename type="middle">D</forename><surname>Davison</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR &apos;12</title>
				<meeting>the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR &apos;12<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="651" to="660" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Building recommender systems for scholarly information</title>
		<author>
			<persName><forename type="first">Maya</forename><surname>Hristakeva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Kershaw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Rossetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Petr</forename><surname>Knoth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Benjamin</forename><surname>Pettit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Saúl</forename><surname>Vargas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kris</forename><surname>Jack</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st Workshop on Scholarly Web Mining, SWM &apos;17</title>
				<meeting>the 1st Workshop on Scholarly Web Mining, SWM &apos;17<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="25" to="32" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Optimizing search engines using clickthrough data</title>
		<author>
			<persName><forename type="first">Thorsten</forename><surname>Joachims</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;02</title>
				<meeting>the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;02<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="133" to="142" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">On application of learning to rank for e-commerce search</title>
		<author>
			<persName><forename type="first">Shubhra</forename><forename type="middle">Kanti</forename><surname>Karmaker Santu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Parikshit</forename><surname>Sondhi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chengxiang</forename><surname>Zhai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR &apos;17</title>
				<meeting>the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR &apos;17<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="475" to="484" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">PubMed related articles: a probabilistic topic-based model for content similarity</title>
		<author>
			<persName><forename type="first">Jimmy</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">John</forename><surname>Wilbur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">BMC Bioinformatics</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">423</biblScope>
			<date type="published" when="2007-10">Oct 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Amazon.com recommendations: Item-to-item collaborative filtering</title>
		<author>
			<persName><forename type="first">Greg</forename><surname>Linden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Brent</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeremy</forename><surname>York</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Internet Computing</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="76" to="80" />
			<date type="published" when="2003-01">January 2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A source independent framework for research paper recommendation</title>
		<author>
			<persName><forename type="first">Cristiano</forename><surname>Nascimento</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><forename type="middle">H F</forename><surname>Laender</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Altigran</forename><forename type="middle">S</forename><surname>Da Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marcos</forename><forename type="middle">André</forename><surname>Gonçalves</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, JCDL &apos;11</title>
				<meeting>the 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, JCDL &apos;11<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="297" to="306" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Collaborative topic modeling for recommending scientific articles</title>
		<author>
			<persName><forename type="first">Chong</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><forename type="middle">M</forename><surname>Blei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;11</title>
				<meeting>the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;11<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="448" to="456" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Babel: A platform for facilitating research in scholarly article discovery</title>
		<author>
			<persName><forename type="first">Ian</forename><surname>Wesley-Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jevin</forename><forename type="middle">D</forename><surname>West</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th International Conference Companion on World Wide Web, WWW &apos;16 Companion</title>
				<meeting>the 25th International Conference Companion on World Wide Web, WWW &apos;16 Companion<address><addrLine>Republic and Canton of Geneva, Switzerland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="389" to="394" />
		</imprint>
	</monogr>
	<note>International World Wide Web Conferences Steering Committee</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A recommendation system based on hierarchical clustering of an article-level citation network</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>West</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Wesley-Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">T</forename><surname>Bergstrom</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Big Data</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="113" to="123" />
			<date type="published" when="2016-06">June 2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
