<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Unbiasing Collaborative Filtering for Popularity-Aware Recommendation</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Luciano</forename><surname>Caroprese</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ICAR-CNR</orgName>
								<address>
									<addrLine>Via Bucci, 8-9c</addrLine>
									<settlement>Rende</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giuseppe</forename><surname>Manco</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ICAR-CNR</orgName>
								<address>
									<addrLine>Via Bucci, 8-9c</addrLine>
									<settlement>Rende</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marco</forename><surname>Minici</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ICAR-CNR</orgName>
								<address>
									<addrLine>Via Bucci, 8-9c</addrLine>
									<settlement>Rende</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Francesco</forename><forename type="middle">Sergio</forename><surname>Pisani</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ICAR-CNR</orgName>
								<address>
									<addrLine>Via Bucci, 8-9c</addrLine>
									<settlement>Rende</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ettore</forename><surname>Ritacco</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ICAR-CNR</orgName>
								<address>
									<addrLine>Via Bucci, 8-9c</addrLine>
									<settlement>Rende</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Unbiasing Collaborative Filtering for Popularity-Aware Recommendation</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">E6383F0D8CC197311E3BDA74FB1C946A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T02:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Recommender Systems</term>
					<term>Collaborative Filtering</term>
					<term>Deep Learning</term>
					<term>Big Data</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We analyze the behavior of recommender systems relative to the popularity of the items to recommend. Our findings show that most popular ranking-based recommenders are biased towards popular items, thus affecting the quality of recommendation. Based on these observations, we propose a new deep learning architecture with an improved learning strategy that significantly improves the performance of such recommenders on low-popular items. The proposed technique is based on two main aspects: resampling of negatives and ensembling of multiple instances of the algorithm. Experimental results on traditional benchmark datasets show that the proposed approach substantially improves the recommendation ability by balancing accurate contributions almost independently from the popularity of the items to recommend.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The massive amount of information available in the form of digital catalogs and online services allows users to access and consume content at will, but at the same time it poses a paradox of choice. Purchasing items on e-commerce sites, selecting movies on streaming platforms or connecting with peers on social networks implies choosing among a huge number of alternatives. Recommender systems play a crucial role in this context, since they provide users with a better experience by recommending new content that users are likely to appreciate.</p><p>Recommendations can have a disruptive impact: if on one side they assist users in their choices, on the other side they can influence the choices themselves. Indeed, recommender systems can favor particular categories of items or particular brands over others, thus introducing a bias within the catalog <ref type="bibr" target="#b0">[1]</ref>. A typical bias is intrinsic to the nature of recommendation. In systems involving interactions between large numbers of users and items, we observe the presence of very few highly popular items and many less popular ones. This distribution follows the so-called 80-20 rule: 80% of users express preferences for only 20% of the available items. (SEBD 2021: The 29th Italian Symposium on Advanced Database Systems, September 5-9, 2021, Pizzo Calabro (VV), Italy. Contacts: luciano.caroprese@icar.cnr.it (L. Caroprese); giuseppe.manco@icar.cnr.it (G. Manco); marco.minici@icar.cnr.it (M. Minici); francescosergio.pisani@icar.cnr.it (F. S. Pisani); ettore.ritacco@icar.cnr.it (E. Ritacco).) In practice, user preferences follow a long-tail distribution. 
Recommender systems based on collaborative filtering <ref type="bibr" target="#b1">[2]</ref>, which tend to characterize user preferences from interactions, are affected by a relevant problem: they tend to suggest very popular items and to neglect less popular ones. This triggers a reinforcement effect, because the most popular items become more and more popular than the lesser-known ones. This phenomenon is called popularity bias.</p><p>The quality of the recommendations is inevitably affected by the popularity bias, since most recommendations are prone to contain items that the user will consider trivial and well known. By contrast, the capability of recommending items belonging to the long tail (i.e., less popular ones) can disclose new perspectives: niche items, little-known objects and hidden gems that match the user's preferences can greatly improve engagement, provided that the recommended items are consistent with the user's taste.</p><p>The current literature has extensively studied the problems of fairness and bias <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref> within recommender systems, with particular emphasis on popularity bias <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8]</ref>. Notably, in <ref type="bibr" target="#b8">[9]</ref> it is shown that unfair recommendations are concentrated on groups of users interested in long-tail, less popular items. In this paper we focus our attention on recommender systems based on ranking <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>, which are extremely flexible and, as a consequence, particularly interesting to unbias. 
Typical approaches <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref> consider techniques to increase the representation of less popular items, either by post-processing or by constraining the recommendation score. Despite such recent efforts, unbiasing the recommendation from popular items is still an open problem.</p><p>In this paper we devise a ranking-based recommender system for implicit feedback (RVAE), based on a variational autoencoder architecture. The proposed model is a substantial extension of the MVAE model proposed in <ref type="bibr" target="#b13">[14]</ref>, and in fact it inherits its accuracy and computational efficiency. We analyze the behavior of RVAE and show that the model suffers from the popularity bias problem. We then propose an experimental study of specific techniques to overcome it. The proposed approach is based on two main concepts: resampling/reweighting items and ensembling multiple instances of the algorithm. Our experiments show that these simple strategies allow us to unbias the algorithm and hence provide more effective recommendations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Basic Framework</head><p>We start by setting the notation that we shall use throughout the paper. In the context of collaborative filtering, 𝑢 ∈ 𝑈 = {1, … , 𝑀} indexes a user and 𝑖, 𝑗 ∈ 𝐼 = {1, … , 𝑁 } index items for which the user can express a preference. We model implicit feedback, thus assuming a preference matrix X ∈ {0, 1} 𝑀×𝑁 , so that 𝑥 𝑢,𝑖 = 1 whenever user 𝑢 expressed a preference for item 𝑖, and 𝑥 𝑢,𝑖 = 0 otherwise. Also, x 𝑢 is the (binary) row indexed at 𝑢, representing all the item preferences for user 𝑢. Given x 𝑢 , we define 𝐼 𝑢 = {𝑖 ∈ 𝐼 |𝑥 𝑢,𝑖 = 1} (with 𝑁 𝑢 = |𝐼 𝑢 |). The preference matrix induces a natural ordering between items: 𝑖 ≺ 𝑢 𝑗 has the meaning that 𝑢 prefers 𝑖 to 𝑗, i.e. 𝑥 𝑢,𝑖 &gt; 𝑥 𝑢,𝑗 in the rating matrix. Our objective is to devise a model for such an ordering. Preference Modeling. We consider a general framework where preferences are modeled as the effect of latent factors ultimately characterizing users and/or items. We shall consider two basic instantiations of this general idea, and will provide a unified framework.</p><p>The first situation we consider is the Multinomial Variational Autoencoder (MVAE) framework proposed in <ref type="bibr" target="#b13">[14]</ref>. Within this framework, for a given user 𝑢 the related x 𝑢 is modeled as the effect of a multinomial distribution governed by a prior z, i.e.</p><formula xml:id="formula_0">x 𝑢 ∼ Discrete (𝜋(z)) 𝜋(z) ∝ exp {𝑓 𝜙 (z)}</formula><p>Here, 𝑓 𝜙 (⋅) represents a neural network parameterized by 𝜙. The latent variable z is modeled by a prior 𝑃(z) (typically a gaussian distribution). Thus, the probability of preferences for a given user can be expressed as</p><formula xml:id="formula_1">𝑃(x 𝑢 ) = ∫ 𝑃(x 𝑢 |z)𝑃(z) dz</formula><p>Due to the intractability of the above integral, <ref type="bibr" target="#b14">[15]</ref> devise a variational approach based on a proposal 𝑄(z|x 𝑢 ) that approximates the posterior distribution. 
Again, 𝑄 is modeled as a gaussian distribution 𝑄(z|x 𝑢 ) = 𝒩 (z; 𝜇 𝑢 , 𝜎 𝑢 ), where 𝜇 𝑢 , 𝜎 𝑢 = 𝑔 𝜃 (x 𝑢 ) and 𝑔 𝜃 is a neural network parameterized by 𝜃. By exploiting the inequality</p><formula xml:id="formula_2">log 𝑃(x 𝑢 ) ≥ 𝔼 z∼𝑄(⋅|x 𝑢 ) [log 𝑃(x 𝑢 |z)] − 𝕂𝕃[𝑄(z|x 𝑢 )‖𝑃(z)]</formula><p>we can finally learn the 𝜙, 𝜃 parameters by optimizing the loss</p><formula xml:id="formula_3">ℓ 𝑀𝑉 𝐴𝐸 (𝜙, 𝜃) = ∑ 𝑢 {𝔼 z∼𝑄 𝜃 (⋅|x 𝑢 ) [log 𝑃 𝜙 (x 𝑢 |z)] − 𝕂𝕃[𝑄 𝜃 (z|x 𝑢 )‖𝑃(z)]}</formula><p>The overall framework is hence based on a regularized encoder-decoder scheme, where 𝑄 𝜃 (z|x 𝑢 ) represents the encoder, 𝑃 𝜙 (x 𝑢 |z) represents the decoder and the term 𝕂𝕃[𝑄 𝜃 (z|x 𝑢 )‖𝑃(z)] acts as a regularizer. In the training phase, for each 𝑢 a latent variable z ∼ 𝑄 𝜃 (⋅|x 𝑢 ) is sampled. Next, z is exploited to compute the probability 𝑃 𝜙 (x 𝑢 |z). Users with low probability are penalized within the loss and the network parameters are updated accordingly.</p><p>Prediction for new items is accomplished by resorting to the learned functions 𝑃 𝜙 and 𝑄 𝜃 : given a (partial) user history x 𝑢 , we compute z = 𝜇 𝑢 and then compute the probabilities for the whole item set through 𝜋(z). Unseen items can then be ranked according to their associated probabilities.</p><p>The second formulation we consider is inspired by the Bayesian Personalized Ranking (BPR) model introduced in <ref type="bibr" target="#b9">[10]</ref>. The idea underlying this model is that a preference 𝑖 ≺ 𝑢 𝑗 can be directly explained as closeness in a latent space onto which both items and users can be mapped. Mathematically, this can be achieved by computing a factorization rank p 𝑇 𝑢 q 𝑖 for each pair (𝑢, 𝑖), and modeling preferences by means of a Bernoulli process:</p><formula xml:id="formula_4">𝑖 ≺ 𝑢 𝑗 ∼ Bernoulli(𝑝) 𝑝 = 𝜎 (p 𝑇 𝑢 (q 𝑖 − q 𝑗 ))</formula><p>where 𝜎(𝑎) = (1 + 𝑒 −𝑎 ) −1 is the logistic function. 
The optimal embeddings P and Q can hence be obtained by optimizing the loss</p><formula xml:id="formula_5">ℓ 𝐵𝑃𝑅 (P, Q) ≈ ∑ 𝑢 ∑ 𝑖,𝑗 𝑖≺ 𝑢 𝑗 log 𝜎 (p 𝑇 𝑢 (q 𝑖 − q 𝑗 ))</formula><p>We combine the two frameworks by adapting the BPR loss to the MVAE model. In particular, instead of modeling 𝑃(x 𝑢 |z), we directly model 𝑃(𝑖 ≺ 𝑢 𝑗|z) within a similar variational framework. In short, the current preferences are encoded within a latent variable z that is further exploited to decode all ranks:</p><formula xml:id="formula_6">𝑖 ≺ 𝑢 𝑗 ∼ Bernoulli(𝑝 𝑖,𝑗 ) 𝑝 𝑖,𝑗 = 𝜎 (𝜁 𝑖 − 𝜁 𝑗 ) 𝜁 = 𝑓 𝜙 (z)</formula><p>Here, 𝜁 represents the output of a neural network parameterized by 𝜙. For a given item 𝑖, the value 𝜁 𝑖 then represents the associated rank, which can be used to sort all preferences. The model can be obtained by optimizing the loss:</p><formula xml:id="formula_7">ℓ 𝑅𝑉 𝐴𝐸 (𝜙, 𝜃) = ∑ 𝑢 {𝔼 z∼𝑄 𝜃 (⋅|x 𝑢 ) [∑ 𝑖,𝑗 𝑖≺ 𝑢 𝑗 log 𝑃 𝜙 (𝑖 ≺ 𝑢 𝑗|z)] − 𝕂𝕃[𝑄 𝜃 (z|x 𝑢 )‖𝑃(z)]}<label>(1)</label></formula><p>We call this model Ranking Variational Autoencoder (RVAE).</p><p>Learning by Negative Sampling. In the above formulation there are some details worth further discussion. When learning the RVAE model, optimizing the likelihood requires that all pairs of items are considered within Eq. ( <ref type="formula" target="#formula_7">1</ref>). This is unrealistic with large item bases, and it is customary to only consider a subset 𝒟 𝑢 ⊂ {(𝑖, 𝑗)|𝑖, 𝑗 ∈ 𝐼 ; 𝑖 ≺ 𝑢 𝑗}. The sampling of 𝒟 𝑢 is critical for determining the behavior of any predictive model; the most common approach in the literature is to uniformly sample, for each user 𝑢 and each positive item 𝑖, a fixed number of items {𝑗 1 , … , 𝑗 𝑛 } ⊂ 𝐼 − 𝐼 𝑢 , with the underlying assumption that ∀𝑘 ∶ 𝑖 ≺ 𝑢 𝑗 𝑘 . 
Thus, Eq. (1) can be rewritten as:</p><formula xml:id="formula_8">ℓ 𝑅𝑉 𝐴𝐸 (𝜙, 𝜃) = ∑ 𝑢 {𝔼 z∼𝑄 𝜃 (⋅|x 𝑢 ) [ ∑ (𝑖,𝑗)∈𝒟 𝑢 log 𝑃 𝜙 (𝑖 ≺ 𝑢 𝑗|z)] − 𝕂𝕃[𝑄 𝜃 (z|x 𝑢 )‖𝑃(z)]}<label>(2)</label></formula><p>This will be the base loss upon which we develop our study in the following.</p></div>
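The sampled loss of Eq. (2) can be illustrated with a small sketch in plain NumPy. This is not the authors' implementation: the function names are hypothetical, the encoder/decoder networks 𝑓_𝜙, 𝑔_𝜃 and the KL term are omitted, and only the negative-sampling construction of 𝒟_𝑢 and the pairwise reconstruction term are shown.

```python
import numpy as np

def sample_pairs(x_u, n_neg=4, rng=None):
    """Build D_u: for each positive item i of user u, uniformly sample
    n_neg negative items j from the unobserved ones (assuming i <_u j)."""
    rng = rng or np.random.default_rng(0)
    pos = np.flatnonzero(x_u == 1)
    neg_pool = np.flatnonzero(x_u == 0)
    return [(i, j) for i in pos
            for j in rng.choice(neg_pool, size=n_neg, replace=False)]

def pairwise_rank_loss(zeta, pairs):
    """Negated reconstruction term of Eq. (2): sum over (i, j) in D_u of
    -log P(i <_u j | z), with P(i <_u j | z) = sigma(zeta_i - zeta_j)."""
    diffs = np.array([zeta[i] - zeta[j] for i, j in pairs])
    # -log sigma(d) = log(1 + exp(-d)), computed stably via log1p
    return float(np.sum(np.log1p(np.exp(-diffs))))
```

A decoder whose scores rank positives above the sampled negatives makes every difference 𝜁_𝑖 − 𝜁_𝑗 large and the loss small, which is exactly what minimizing Eq. (2) encourages.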
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">The Impact of Popularity</head><p>We start our analysis on the following popular benchmark datasets: i) Movielens, a time series dataset containing user-item rating pairs along with the corresponding timestamps; ii) Pinterest, based on the social media platform that allows users to save or pin an image (item) to their board: the dataset denotes as 1 the images pinned by each user; iii) CiteUlike, a dataset obtained from the homonymous service, which provides a digital catalog to save and share academic papers. Within fig. <ref type="figure" target="#fig_0">1</ref> we plot all the items within the datasets, sorted by increasing popularity.</p><p>For each dataset we identify three classes: low, mid and high popular items.</p><p>We then study the behavior of the RVAE model with respect to the popularity classes defined above. To do so, we adopt the following protocol. For each dataset, 70% of the users are randomly sampled together with all of their items. Each such user is associated with x 𝑢 and a set 𝒟 𝑢 of positive/negative item pairs. In particular, we consider all positive items within x 𝑢 , and for each positive item 𝑖 we sample 𝑛 = 4 negative items. The remaining 30% of the users are uniformly split into validation and test sets. In particular, for each user 𝑢 we consider a random subset 𝑃 𝑢 ⊂ 𝐼 𝑢 containing 30% of the positive items, and a subset 𝑁 𝑢 of 100 negative items sampled from 𝐼 − 𝐼 𝑢 . The vector x 𝑢 is masked to remove all elements in 𝑃 𝑢 . We then feed the masked x 𝑢 to the model to obtain the score vector 𝜁 𝑢 . Now, for a given cutoff value 𝑐, let us consider the 𝑐 − 1 negative items of 𝑢 to which RVAE assigns the highest scores and, among them, the item 𝑗 having the minimum score. There is a hit with cutoff 𝑐, for the user 𝑢 and the item 𝑖 ∈ 𝑃 𝑢 , if 𝜁 𝑢,𝑖 ≥ 𝜁 𝑢,𝑗 . Let 𝐻 𝑐 𝑢 be the number of hits for the user 𝑢 with cutoff 𝑐. 
We define the Hit-Rate at 𝑐 on the test set 𝑇 as HR@𝑐 =</p><formula xml:id="formula_9">∑ 𝑢∈𝑇 𝐻 𝑐 𝑢 / ∑ 𝑢∈𝑇 |𝑃 𝑢 | .</formula><p>We can trivially specialize this definition to items within the low, mid and high popularity classes, by considering only items in 𝑃 𝑢 that belong to the specific class. The results of the evaluation are summarized in table 1a. We can see that the model suffers mainly on low popular items. As a matter of fact, the overexposure of popular items is predominant and the model essentially learns to predict those items. Indeed, popular items are easy to predict. However, it is on the mid and especially the low popular items that the most interesting predictions can take place: niche items are difficult to discover by an end user, and hence their accurate suggestion can greatly improve user engagement. The research question is hence: how can we boost the model to improve the performance on low popular items? Table <ref type="table" target="#tab_0">2</ref> summarizes the results of the evaluation. We see that, in general, all the strategies considerably improve the response of the model on the low popular items. However, RVAE 𝑊 has a low response on the high popular items. By contrast, RVAE 𝑆 succeeds in boosting performance on both the low popular and the mid popular items. Overall, the ensemble RVAE 𝐸 provides the best response, by boosting low popular items without substantially degrading on the other classes.</p></div>
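The hit and Hit-Rate definitions above can be sketched directly. The following is an illustrative NumPy implementation (function names are ours, not the paper's): a positive 𝑖 is a hit at cutoff 𝑐 when 𝜁_𝑖 is at least the minimum score among the 𝑐 − 1 highest-scored sampled negatives; HR@𝑐 divides the summed hits by the summed held-out positives. The 𝑐 = 1 case, not spelled out in the text, is handled here under the natural assumption that the positive must outscore every negative.

```python
import numpy as np

def hits_at_c(zeta, pos_items, neg_items, c):
    """Count hits for one user: positive i is a hit at cutoff c if
    zeta_i >= min score among the c-1 highest-scored negatives."""
    neg_scores = np.sort(zeta[neg_items])[::-1]   # descending order
    if c == 1:                                    # assumed: beat every negative
        return int(np.sum(zeta[pos_items] > neg_scores[0]))
    threshold = neg_scores[c - 2]                 # min of top c-1 negatives
    return int(np.sum(zeta[pos_items] >= threshold))

def hit_rate_at_c(test_users, c):
    """HR@c over the test set T: summed hits over summed held-out positives.
    test_users is a list of (zeta, pos_items, neg_items) triples."""
    hits = sum(hits_at_c(z, p, n, c) for z, p, n in test_users)
    total = sum(len(p) for _, p, _ in test_users)
    return hits / total
```

The per-class variants of the metric follow by restricting `pos_items` to the positives of a single popularity class before calling `hits_at_c`.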
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>The approach proposed in this paper is a preliminary study: we introduce a ranking collaborative filtering algorithm (RVAE) and study how the algorithm is affected by popularity bias. Next, we show how simple techniques based on reweighting/resampling and/or ensembling can recalibrate the recommendations. Several aspects are worth further investigation. First of all, both the weighting and the inverse stratified sampling schemes are based on hyperparameters that need to be carefully tuned. Also, the ensemble strategies adopted here are simple; more complex schemes, which also take into account other model instantiations, can be studied. We reserve these challenges for future work.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Item popularity distributions.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2</head><label>2</label><figDesc>Comparative analysis.</figDesc><table /></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Unbiased Recommendation</head><p>In a simple experiment, we retrain the model by only considering pairs (𝑖, 𝑗) ∈ 𝒟 𝑢 such that 𝑖 belongs to the low popular class. We call this model RVAE 𝐿 . Compared to the results in table 1a, the results for this restricted model (shown in table 1b) show that, when attention is focused on low popular items, prediction accuracy on them can be improved. Similar results can be observed for the mid popular items. Thus, in order to unbias the model we need to rebalance the contribution of the low (and mid) popular items with respect to the high popular ones. To achieve this goal, we study three different strategies.</p><p>The first strategy consists in weighting, for each pair (𝑖, 𝑗) ∈ 𝒟 𝑢 , the contribution to the loss with a factor inversely proportional to the popularity of the item 𝑖:</p><p>Here, 𝑓 𝑖 represents the number of occurrences of 𝑖, and 𝛼, 𝛽, 𝛾 are the parameters representing the steepness, center and scale of the decay for high popular items. We experiment with 𝛼 = 0.01, 𝛽 set to the average frequency of mid popular items and 𝛾 = 100, and call this variant RVAE 𝑊 . The above strategy has the advantage of reweighting the contributions of low and mid popular items with respect to high popular ones: the ratio between the weights of the most popular and the least popular items is approximately 1/𝛾. However, this weighting scheme has a major disadvantage. The weight is relative to a positive item 𝑖, but it is associated with pairs (𝑖, 𝑗) ∈ 𝒟 𝑢 . That is, besides weighting the contribution of 𝑖, this scheme also overexposes (or, dually, underexposes) the contribution of the negative item 𝑗. To avoid this, an alternative strategy consists in changing the sampling scheme that produces 𝒟 𝑢 . 
In practice, rather than uniformly sampling, for each positive item, a fixed number 𝑛 of negative items, we can apply an inverse stratified sampling scheme where 𝑛 𝑖 negatives are sampled for each positive item 𝑖, with 𝑛 𝑖 being inversely proportional to the popularity of the item:</p></div>
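The exact formula for 𝑛 𝑖 did not survive extraction (only a closing ceiling bracket remains below), so the following sketch fills it with a hypothetical instantiation consistent with the surrounding text: 𝑛 𝑖 = ⌈𝑛 ⋅ 𝑓 max /𝑓 𝑖 ⌉, which gives the most popular item exactly 𝑛 pairs and rarer items proportionally more. This is our assumption for illustration, not necessarily the authors' formula.

```python
import math

def negatives_per_item(freqs, n=4):
    """Hypothetical inverse stratified budget: n_i = ceil(n * f_max / f_i),
    so the most popular item (f_i = f_max) keeps exactly n negative samples
    while lower-popularity items are paired with more negatives."""
    f_max = max(freqs.values())
    return {i: math.ceil(n * f_max / f) for i, f in freqs.items()}
```

For instance, with item frequencies 100, 50 and 10 and 𝑛 = 4, the budgets are 4, 8 and 40 pairs respectively, overexposing the long-tail items in the loss as the text describes.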
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The rationale behind the above formula is to provide the same visibility to each positive item in the loss function. Thus, the most popular item will be associated with exactly 𝑛 pairs. By contrast, low and mid popular items will be overexposed in the comparison. We call this variant RVAE 𝑆 .</p><p>The third strategy consists in combining the baseline RVAE model with RVAE 𝐿 . For a user 𝑢, let 𝜁 𝐵 be the score vector produced by RVAE, and 𝜁 𝐿 the score vector produced by RVAE 𝐿 . We define RVAE 𝐸 as the model that produces the score 𝜁 𝐸 = Softmax(𝜁 𝐵 ) + 𝛿m ⋅ Softmax(𝜁 𝐿 ).</p><p>The vector m masks all items but the low popular ones. The scores are normalized (via the Softmax function) to make the two models comparable. Finally, 𝛿 is a weight aimed at tuning the boost for the low popular items, as devised by RVAE 𝐿 . We experimentally found an optimal tuning with 𝛿 = 0.4. </p></div>			</div>
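The ensemble combination 𝜁 𝐸 = Softmax(𝜁 𝐵 ) + 𝛿m ⋅ Softmax(𝜁 𝐿 ) reduces to a few lines. The sketch below (ours, for illustration) normalizes the two score vectors and adds the masked, 𝛿-weighted boost of the restricted model only on the low-popular entries.

```python
import numpy as np

def softmax(z):
    # Normalize scores so the two models become comparable
    e = np.exp(z - np.max(z))
    return e / e.sum()

def ensemble_scores(zeta_b, zeta_l, low_pop_mask, delta=0.4):
    """RVAE_E: zeta_E = Softmax(zeta_B) + delta * m * Softmax(zeta_L),
    where m is 1 on low popular items and 0 elsewhere."""
    return softmax(zeta_b) + delta * low_pop_mask * softmax(zeta_l)
```

Since the softmax outputs are strictly positive, every low-popular item receives a strictly positive boost, while the scores of mid and high popular items are left unchanged; 𝛿 = 0.4 is the experimentally tuned value reported in the text.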
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Bias disparity in recommendation systems</title>
		<author>
			<persName><forename type="first">V</forename><surname>Tsintzou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Pitoura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Tsaparas</surname></persName>
		</author>
		<idno>CoRR abs/1811.01461</idno>
		<ptr target="http://arxiv.org/abs/1811.01461" />
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Aggarwal</surname></persName>
		</author>
		<title level="m">Recommender Systems</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Bias in computer systems</title>
		<author>
			<persName><forename type="first">B</forename><surname>Friedman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Nissenbaum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Inf. Syst</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="330" to="347" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Fairness-aware tensor-based recommendation</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Caverlee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ACM International Conference on Information and Knowledge Management, CIKM &apos;18</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1153" to="1162" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A flexible framework for evaluating user and item fairness in recommender systems</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Deldjoo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">W</forename><surname>Anelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zamani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bellogin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">Di</forename><surname>Noia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">User Modeling and User-Adapted Interaction</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">From hits to niches?: Or how popular artists can bias music recommendation and discovery</title>
		<author>
			<persName><forename type="first">O</forename><surname>Celma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2nd KDD Workshop on Large-Scale Recommender Systems and the Netflix Prize Competition</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Item popularity and recommendation accuracy</title>
		<author>
			<persName><forename type="first">H</forename><surname>Steck</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fifth ACM Conference on Recommender Systems, RecSys &apos;11</title>
				<meeting>the Fifth ACM Conference on Recommender Systems, RecSys &apos;11</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="125" to="132" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">On measuring popularity bias in collaborative filtering data</title>
		<author>
			<persName><forename type="first">R</forename><surname>Borges</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Stefanidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">EDBT Workshop on BigVis 2020: Big Data Visual Exploration and Analytics</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">The unfairness of popularity bias in recommendation</title>
		<author>
			<persName><forename type="first">H</forename><surname>Abdollahpouri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mansoury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Burke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mobasher</surname></persName>
		</author>
		<idno>CoRR abs/1907.13286</idno>
		<ptr target="http://arxiv.org/abs/1907.13286" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Bpr: Bayesian personalized ranking from implicit feedback</title>
		<author>
			<persName><forename type="first">S</forename><surname>Rendle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Freudenthaler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Gantner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Schmidt-Thieme</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conf. on Uncertainty in Artificial Intelligence, UAI &apos;09</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="452" to="461" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Collaborative memory network for recommendation systems</title>
		<author>
			<persName><forename type="first">T</forename><surname>Ebesu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR &apos;18</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Measuring and mitigating item under-recommendation bias in personalized ranking systems</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Caverlee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR &apos;20</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="449" to="458" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Managing popularity bias in recommender systems with personalized re-ranking</title>
		<author>
			<persName><forename type="first">H</forename><surname>Abdollahpouri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Burke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mobasher</surname></persName>
		</author>
		<idno>CoRR abs/1901.07555</idno>
		<ptr target="http://arxiv.org/abs/1901.07555" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Variational autoencoders for collaborative filtering</title>
		<author>
			<persName><forename type="first">D</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">G</forename><surname>Krishnan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hoffman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Jebara</surname></persName>
		</author>
	</analytic>
	<monogr>
<title level="m">ACM Conf. on World Wide Web, WWW &apos;18</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="689" to="698" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Auto-encoding variational bayes</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Welling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2nd International Conference on Learning Representations, ICLR&apos;14</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
