<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards Explanations for Visual Recommender Systems of Artistic Images</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vicente</forename><surname>Dominguez</surname></persName>
							<email>vidominguez@uc.cl</email>
						</author>
						<author>
							<persName><forename type="first">Pablo</forename><surname>Messina</surname></persName>
							<email>pamessina@uc.cl</email>
						</author>
						<author>
							<persName><forename type="first">Christoph</forename><surname>Trattner</surname></persName>
							<email>christoph.trattner@uib.no</email>
						</author>
						<author>
							<persName><forename type="first">Denis</forename><surname>Parra</surname></persName>
							<email>dparra@ing.puc.cl</email>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">IMFD</orgName>
								<orgName type="institution" key="instit2">PUC Chile Santiago</orgName>
								<address>
									<country key="CL">Chile</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="institution" key="instit1">IMFD</orgName>
								<orgName type="institution" key="instit2">PUC Chile Santiago</orgName>
								<address>
									<country key="CL">Chile</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="institution">University of Bergen Bergen</orgName>
								<address>
									<country key="NO">Norway</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff3">
								<orgName type="institution" key="instit1">IMFD</orgName>
								<orgName type="institution" key="instit2">PUC Chile Santiago</orgName>
								<address>
									<country key="CL">Chile</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Towards Explanations for Visual Recommender Systems of Artistic Images</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">9B160665FA6F2616618B475BD145135F</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T04:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Recommender systems</term>
					<term>Artwork Recommendation</term>
					<term>Explainable Interfaces</term>
					<term>Visual Features</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Explaining automatic recommendations is an active area of research since it has shown an important effect on users' acceptance of the items recommended. However, there is a lack of research in explaining content-based recommendations of images based on visual features. In this paper, we aim to fill this gap by testing three different interfaces (one baseline and two novel explanation interfaces) for artistic image recommendation. Our experiments with N=121 users confirm that explanations of recommendations in the image domain are useful and increase user satisfaction, perception of explainability, relevance, and diversity. Furthermore, our experiments show that the results are also dependent on the underlying recommendation algorithm used. We tested the interfaces with two algorithms: Deep Neural Networks (DNN), with high accuracy but with difficult-to-explain features, and the more explainable method based on Attractiveness Visual Features (AVF). The better the accuracy performance (in our case, the DNN method), the stronger the positive effect of the explainable interface. Notably, the explainable features of the AVF method increased the perception of explainability but did not increase the perception of trust, unlike DNN, which improved both dimensions. These results indicate that algorithms in conjunction with interfaces play a significant role in the perception of explainability and trust for image recommendation. We plan to further investigate the relationship between interface explainability and algorithmic performance in recommender systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Online artwork recommendation has received little attention compared to other areas such as movies <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b9">10]</ref>, music <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b15">16]</ref> or points-of-interest <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b27">28,</ref><ref type="bibr" target="#b28">29]</ref>. The first works in the area date from 2006-2007, such as the CHIP <ref type="bibr" target="#b1">[2]</ref> project, which implemented traditional techniques such as content-based and collaborative filtering for artwork recommendation at the Rijksmuseum, and the m4art system by Van den Broek et al. <ref type="bibr" target="#b25">[26]</ref>, which used histograms of color to retrieve similar artworks where the input query was a painting image. More recently, deep neural networks (DNN) have been used for artwork recommendation and are the current state-of-the-art model <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b11">12]</ref>, which is rather expected considering that DNNs are the top performing models for obtaining visual features for several tasks, such as image classification <ref type="bibr" target="#b14">[15]</ref> and scene identification <ref type="bibr" target="#b22">[23]</ref>. However, no user study has been conducted to validate the performance of DNNs versus other visual features. This aspect is important since past works have shown that off-line results might not always replicate when tested with actual users <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b16">17]</ref>. Moreover, we provide evidence of the important value of explanations in artwork recommender systems over several dimensions of user perception. 
Visual features obtained from DNNs are still difficult to explain to users, despite current efforts to understand and explain them <ref type="bibr" target="#b19">[20]</ref>. In contrast, features of visual attractiveness can be easily explained, based on color, brightness or contrast <ref type="bibr" target="#b20">[21]</ref>. Explanations in recommender systems have been shown to have a significant effect on user satisfaction <ref type="bibr" target="#b23">[24]</ref>, and, to the best of our knowledge, no previous work has shown how to explain recommendations of images based on visual features. Hence, there is no study of the effect on users of explaining images recommended by a Visual Content-based Recommender (hereinafter, VCBR).</p><p>Objective. In this paper, we study the effect of explaining artistic image suggestions. In particular, we conduct a user study on Amazon Mechanical Turk under three different interfaces and two different algorithms. The three interfaces are: i) no explanations, ii) explanations based on similar images, and iii) explanations based on visual features. The two algorithms are: Deep Neural Networks (DNN) and Attractiveness Visual Features (AVF). In our study, we used images provided by the online store UGallery (http://www.UGallery.com/).</p><p>Research Questions. To drive our research, the following two questions were defined: • RQ1. Given three different types of interfaces, one baseline interface without explanations and two with them, employing similar image explanations and a feature bar chart, which one is perceived as most useful? • RQ2. Furthermore, based on the visual and content-based recommender algorithm chosen, are there observable differences in how the three interfaces are perceived?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">RELATED WORK</head><p>Relevant related research is collated in two sub-sections: First, we review research on recommending artistic images to people. Second, we summarize studies on explaining recommender systems.</p><p>Both are important to the problem at hand. The final paragraph in this section highlights the differences to previous work and our contributions to the existing literature in the area.</p><p>Recommendations of Artistic Images. The works of Aroyo et al. <ref type="bibr" target="#b1">[2]</ref> with the CHIP project and Semeraro et al. <ref type="bibr" target="#b21">[22]</ref> with FIRSt (Folksonomy-based Item Recommender syStem) made early contributions to this area using traditional techniques. More complex methods were implemented recently by Benouaret et al. <ref type="bibr" target="#b2">[3]</ref>, who use context obtained through a mobile application to make museum tour recommendations. Finally, the work of He et al. addresses digital artwork recommendations based on pre-trained deep neural visual features <ref type="bibr" target="#b11">[12]</ref>, and the works of Dominguez et al. <ref type="bibr" target="#b6">[7]</ref> and Messina et al. <ref type="bibr" target="#b17">[18]</ref> compare neural against traditional visual features. None of the aforementioned works performed a user study with explanation interfaces to generalize their results.</p><p>Explaining Recommender Systems. There are some related works on explanations for recommender systems <ref type="bibr" target="#b23">[24]</ref>. 
Though a good amount of research has been published in the area, to the best of our knowledge, no previous research has conducted a user study to understand the effect of explaining recommendations of artwork images based on different visual features.</p><p>The closest works in this respect are studies oriented to automatically adding captions to images <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b18">19]</ref> or to explaining image classifications <ref type="bibr" target="#b12">[13]</ref>, but they are not directly related to personalized recommender systems.</p><p>Differences to Previous Research &amp; Contributions. Although we focus on artistic images, to the best of our knowledge this is the first work which studies the effect of explaining recommendations of images based on visual features. Our contributions are two-fold: i) we analyze and report the positive effect of explaining artistic recommendations, especially for the VCBR based on neural features, and ii) through a user study we validate off-line results stating the superiority of neural visual features compared to attractiveness visual features over several dimensions, such as users' perception of explainability, relevance, trust and general satisfaction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">METHODS</head><p>In the following section we describe our study methods in detail. First, we introduce the dataset chosen for the purpose of our study. Second, we introduce the three different explainable visual interfaces implemented, which we evaluate. Third, the two algorithms chosen for our study are presented. Finally, the user study procedure is explained.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Materials</head><p>For the purpose of our study we rely on a dataset provided by the online web store UGallery, which has been selling artwork for more than 10 years <ref type="bibr" target="#b26">[27]</ref>. They support emergent artists by helping them sell their artwork online. For our research, UGallery provided us with an anonymized dataset of 1,371 users, 3,490 items and 2,846 purchases (transactions) of artistic artifacts, where all users have made at least one transaction. On average, each user bought 2-3 items over recent years.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">The Explainable Recommender Interfaces</head><p>In our study we explore the effect of explanations in visual content-based artwork recommender systems. As such, our study contains conditions depending on how recommendations are displayed: i) no explanations, as shown in Figure <ref type="figure" target="#fig_0">1</ref>, ii) explanations given by text and based on the top-3 most similar images a user liked in the past, as shown in Figure <ref type="figure" target="#fig_1">2</ref>, and iii) explanations employing a visual attractiveness bar chart and showing the most similar image from the user's item profile, as presented in Figure <ref type="figure" target="#fig_2">3</ref>. In all three cases the interfaces are vertically scrollable. While Interface 1 (baseline) shows 5 images in a row at the same time, Interfaces 2 and 3 show one recommended image per row.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Visual Recommendation Approaches</head><p>As mentioned earlier in this paper, we make use of two different content-based visual recommender approaches in our work.</p><p>The reason for choosing content-based methods over collaborative filtering-based methods is grounded in the fact that once an item is sold via the UGallery store, it is not available anymore (every item is unique), and hence traditional collaborative filtering approaches do not apply. DNN Visual Feature (DNN) Algorithm. The first algorithmic approach we employed was based on image similarity, itself based on features extracted with a deep neural network. The output vector representing the image is usually called an image's visual embedding. The visual embedding in our experiment was a vector of features obtained from an AlexNet, a convolutional deep neural network developed to classify images <ref type="bibr" target="#b14">[15]</ref>. In particular, we use an AlexNet model pre-trained with the ImageNet dataset <ref type="bibr" target="#b5">[6]</ref>. Using the pre-trained weights, for every image a vector of 4,096 dimensions was generated with the Caffe (http://caffe.berkeleyvision.org/) framework. We resized every image to 227x227 pixels, the standard pre-processing needed to use the AlexNet.</p><p>Attractiveness Visual Features (AVF) Algorithm. The second content-based algorithmic recommender approach employed was a method based on visual attractiveness features. San Pedro and Siersdorfer in <ref type="bibr" target="#b20">[21]</ref> proposed several explainable visual features that, to a great extent, can capture the attractiveness of an image posted on Flickr. Following their procedure, for every image in our UGallery dataset we calculated: (a) average brightness, (b) saturation, (c) sharpness, (d) RMS-contrast, (e) colorfulness and (f) naturalness. 
In addition, we added (g) entropy, which is a good way to characterize and measure the texture of an image <ref type="bibr" target="#b10">[11]</ref>. These metrics have also been used in another study <ref type="bibr" target="#b7">[8]</ref>, where we show how to nudge people with attractive images to take up healthier recipe recommendations. To compute these features, we used the original size of the images and did not pre-process them.</p><p>Due to space constraints, the details of how to calculate the features are described in the article by Messina et al. <ref type="bibr" target="#b17">[18]</ref>. Computing Recommendations. Given a user u who has consumed a set of artworks P_u, a constrained profile size K, and an arbitrary artwork i from the inventory, the score of item i as a recommendation for u is:</p><formula xml:id="formula_0">score_X(u, i) = ( ∑_{r=1}^{min{K, |P_u|}} max^{(r)}_{j ∈ P_u} sim(V_i^X, V_j^X) ) / min{K, |P_u|},<label>(1)</label></formula><p>where V_z^X is the feature vector of item z obtained with method X, where X can be either a pre-trained AlexNet (DNN) or attractiveness visual features (AVF). max^{(r)} denotes the r-th maximum value, e.g., if r = 1 it is the overall maximum, if r = 2 it is the second maximum, and so on. We compute the average similarity of the top-K most similar images because, as shown in Messina et al. <ref type="bibr" target="#b17">[18]</ref>, for different users the recommendations match better using smaller subsets of the entire user profile. Users do not always look to buy a painting similar to one they bought before; rather, they look for one that resembles a set of artworks that they liked. sim(V_i, V_j) denotes a similarity function between vectors V_i and V_j. In this particular case, the similarity function used was cosine similarity:</p><formula xml:id="formula_1">sim(V_i, V_j) = cos(V_i, V_j) = (V_i ⋅ V_j) / (∥V_i∥ ∥V_j∥)<label>(2)</label></formula><p>Both methods use the same formula to calculate the recommendations. 
The difference is in the origin of the visual features. For the DNN method, the features were extracted with the AlexNet <ref type="bibr" target="#b14">[15]</ref>, and in the case of AVF, the features were extracted based on San Pedro et al. <ref type="bibr" target="#b20">[21]</ref>.</p></div>
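As an illustrative aside (not code from the paper), a few of the attractiveness features above, brightness, saturation, RMS contrast and histogram entropy among them, can be sketched over raw RGB pixels. The helper name `avf_features` is ours, and the formulas only approximate those of San Pedro and Siersdorfer [21]:

```python
import math

def avf_features(pixels):
    """Sketch of a few attractiveness features over RGB pixels in [0, 255].

    `pixels` is a flat list of (r, g, b) tuples. Formulas approximate
    San Pedro and Siersdorfer [21]; names and details are our assumptions.
    """
    n = len(pixels)
    # (a) average brightness: mean of the luma approximation Y = 0.3R + 0.59G + 0.11B
    luma = [0.3 * r + 0.59 * g + 0.11 * b for r, g, b in pixels]
    brightness = sum(luma) / n
    # (b) average saturation, HSV-style: (max - min) / max per pixel
    def sat(p):
        mx, mn = max(p), min(p)
        return (mx - mn) / mx if mx else 0.0
    saturation = sum(sat(p) for p in pixels) / n
    # (d) RMS contrast: standard deviation of the normalized luma channel
    mean_y = brightness / 255.0
    rms_contrast = math.sqrt(sum((y / 255.0 - mean_y) ** 2 for y in luma) / n)
    # (g) entropy of the 256-bin grayscale histogram, a simple texture measure
    hist = [0] * 256
    for y in luma:
        hist[min(255, int(y))] += 1
    probs = [c / n for c in hist if c]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"brightness": brightness, "saturation": saturation,
            "rms_contrast": rms_contrast, "entropy": entropy}
```

Each feature is a single scalar per image, so the AVF vector is short and directly human-readable, which is what makes the bar-chart explanation of Interface 3 possible.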
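The scoring scheme of Equation (1), averaging the top-min(K, |P_u|) cosine similarities (Equation (2)) between a candidate item and the user's profile, can be sketched as follows; this is a minimal illustration with our own function names, not the authors' implementation:

```python
import math

def cosine(v, w):
    """Cosine similarity between two equal-length feature vectors (Eq. 2)."""
    dot = sum(a * b for a, b in zip(v, w))
    norm = math.sqrt(sum(a * a for a in v)) * math.sqrt(sum(b * b for b in w))
    return dot / norm if norm else 0.0

def score(user_profile, item_vec, k=3):
    """Eq. 1: average of the top-min(K, |P_u|) similarities between the
    candidate item and the feature vectors of the user's consumed artworks."""
    sims = sorted((cosine(item_vec, v) for v in user_profile), reverse=True)
    top = sims[:min(k, len(user_profile))]
    return sum(top) / len(top)
```

The same `score` function serves both methods; only the feature vectors passed in differ (4,096-dimensional AlexNet embeddings for DNN, the short attractiveness vectors for AVF).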
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">User Study Procedure</head><p>To evaluate the performance of our explainable interfaces we conducted a user study on Amazon Mechanical Turk using a 3x2 mixed design: 3 interfaces (between-subjects) and 2 algorithms (within-subjects, DNN and AVF). The interface conditions were: Interface 1: interface without explanations, as in Figure <ref type="figure" target="#fig_0">1</ref>; Interface 2: each item recommendation is explained based on the top-3 most similar images in the user profile, as in Figure <ref type="figure" target="#fig_1">2</ref>; and Interface 3: only for AVF, based on a bar chart of visual features, as in Figure <ref type="figure" target="#fig_2">3</ref>. Notice that in the Interface 3 condition, for DNN we used the explanation based on the top-3 most similar images, because the neural embedding of 4,096 dimensions has no human-interpretable features to show in a bar chart.</p><p>To compute the recommendations for each of the three interface conditions two recommender algorithms were chosen: one based on DNN visual features, and the other based on attractiveness visual features (AVF). The order in which the algorithms were presented was chosen at random to diminish the chance of a learning effect.</p><p>The full study procedure is shown in Figure <ref type="figure" target="#fig_3">4</ref>. Participants accepted the study on Mechanical Turk (https://www.mturk.com) and were redirected to a web application. After accepting a consent form, they were redirected to the pre-study survey, which collects demographic data (age, gender) and a subject's previous knowledge of art, based on the test by Chatterjee et al. <ref type="bibr" target="#b4">[5]</ref>.</p><p>Table <ref type="table">2</ref>: Results of users' perception over several evaluation dimensions, defined in Table <ref type="table">1</ref>. Scale 1-100 (higher is better), except for Average rating (scale 1-5). 
DNN: Deep Neural Network, and AVF: Attractiveness Visual Features. The ↑1 symbol indicates interface-wise significant difference (differences between interfaces using the same algorithm). The * symbol denotes algorithm-wise statistical difference (comparing a dimension between algorithms, using the same interface). Following this, they had to perform a preference elicitation task. In this step, the users had to "like" at least ten paintings, using a Pinterest-like interface. Next, they were randomly assigned to one interface condition. In each condition, they again provided feedback (a rating on a 1-5 scale for each image) on the top ten image recommendations produced by either the DNN or the AVF algorithm (also assigned at random, as discussed before). Finally, the participants were asked to answer a post-algorithm survey.</p><p>The dimensions evaluated in the post-algorithm survey are the same for the DNN and AVF algorithms, and they are shown in Table <ref type="table">1</ref>. This process is repeated for the second algorithm as well. Once the participants finished answering the second post-study survey, they were redirected to the final view, where they received a survey code for later payment in Amazon Mechanical Turk.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">RESULTS</head><p>The study was finished by 200 users in total, out of which 121 answered our validation questions successfully and hence were included in the results. We set two validation questions to check the attention of our study participants. Filtering out users who did not respond properly to these questions left us with 41 users in the Interface 1 condition, 41 users in the Interface 2 condition and 39 users in the Interface 3 condition. Participants were paid 0.40 USD per study, which took them around 10 minutes to complete.</p><p>Our subjects were between 18 and over 60 years old: 36% were between 25 and 32 years old, and 29% between 32 and 40 years old. Females made up 55.4%. 12% had just finished high school, 31% had some college education, and 57% had a bachelor's, master's or Ph.D. degree. Only 8% reported some visual impairment. With respect to their understanding of art, 20% had no experience, 48% had attended 1 or 2 lessons, and 32% reported having attended 3 or more at high school level or above. 20% of our subjects also reported that they had almost never visited a museum or an art gallery; 36% do so once a year; and 44% do so once every 1 to 6 months.</p><p>Differences between Interfaces. Table <ref type="table">2</ref> summarizes the results of the user study. First we compared interface performance and then we looked at algorithmic performance. The explainable interfaces (Interfaces 2 and 3) significantly improved the perception of explainability compared to Interface 1 under both algorithms.</p><p>There is also a significant improvement over Interface 1 in terms of relevance and diversity, but this is only achieved by the DNN method when it is compared against the AVF method using Interface 3. 
Interestingly, this is the condition where the interface is most transparent, since it explains exactly what is used to recommend (brightness, saturation, sharpness, etc.). People report that they understand why the images are recommended (70.4), but since the relevance is rather insufficient (56.2), the perception of trust is reported as low (55.4).</p><p>Differences between Algorithms. With the sole exception of the Diverse dimension, where AVF was significantly better, DNN was perceived more positively than AVF at large. In Interfaces 2 and 3, the DNN method was perceived significantly better in 5 dimensions (explainability, relevance, interface satisfaction, interest in eventual use, and trust), as well as in average rating.</p><p>Overall, the results indicate that the explainable interface based on the top-3 similar images works better than an interface without explanation. Moreover, this effect is enhanced by the accuracy of the algorithm, so even if the algorithm has no explainable features (DNN), it can induce more trust if the user perceives a larger predictive preference accuracy.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">CONCLUSIONS &amp; FUTURE WORK</head><p>In this paper, we have studied the effect of explaining recommendations of images employing three different recommender interfaces, as well as interactions with two different visual content-based recommendation algorithms: one with high predictive accuracy but with unexplainable features (DNN), and another with lower accuracy but with higher potential for explainable features (AVF). The first result, which answers RQ1, shows that explaining the images recommended has a positive effect vs. no explanation. Moreover, the explanation based on the top-3 similar images presents the best results, but we need to consider that the alternative method, explanations based on visual features, was only used with AVF. This result is preliminary and opens a path of research in terms of new interfaces which could help to explain the features learned by a deep neural network from images.</p><p>Regarding RQ2, we see that the algorithm used plays an important role in conjunction with the interface. DNN is perceived better than AVF in most dimensions evaluated, showing that further research should focus on the interaction between algorithm and explainable interfaces. In the future we will expand this work to other datasets, beyond artistic images, to generalize our results.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Interface 1: Baseline recommendation interface without explanations.</figDesc><graphic coords="2,-25.05,76.21,256.02,122.57" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Interface 2: Explainable recommendation interface with textual explanations and top-3 similar images.</figDesc><graphic coords="2,123.29,58.18,313.64,150.32" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Interface 3: Explainable recommendation interface with features' bar chart and top-1 similar image.</figDesc><graphic coords="2,278.37,59.32,316.44,151.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Study procedure. After the pre-study survey and the preference elicitation, users were assigned to one of three possible interfaces. In each interface they evaluated recommendations of two algorithms: DNN and AVF.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Table 1 :</head><label>1</label><figDesc>Evaluation dimensions and statements asked in the post-study survey. Users indicated their agreement with the statement on a scale from 0 to 100 (= totally agree). Interface Satisfaction: I am satisfied with the recommender interface. Use Again: I would use this recommender system again for finding art images in the future. Trust: I trusted the recommendations made.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Stat. significance between interfaces by multiple t-tests, Bonferroni corr. α_bonf = α/n = 0.05/3 ≈ 0.017. Stat. significance between algorithms using pairwise t-test, α = 0.05.</figDesc><table><row><cell></cell><cell cols="2">Explainable</cell><cell></cell><cell cols="2">Relevance</cell><cell cols="2">Diverse</cell><cell>Interface Satisfaction</cell><cell>Use Again</cell><cell>Trust</cell><cell>Average Rating</cell></row><row><cell>Condition</cell><cell cols="2">DNN AVF</cell><cell></cell><cell cols="2">DNN AVF</cell><cell cols="2">DNN AVF</cell><cell>DNN AVF</cell><cell>DNN AVF</cell><cell>DNN AVF</cell><cell>DNN AVF</cell></row><row><cell>Interface 1 (No Explanations)</cell><cell cols="2">66.2* 51.4</cell><cell></cell><cell cols="2">69.0* 53.6</cell><cell cols="2">46.1 69.4*</cell><cell>69.9 62.1</cell><cell>65.8 59.7</cell><cell>69.3 63.7</cell><cell>3.55* 3.23</cell></row><row><cell>Interface 2 (DNN &amp; AVF: Top-3 similar images)</cell><cell>83.5*↑</cell><cell>1 74.0↑</cell><cell>1</cell><cell cols="2">80.0* 61.7</cell><cell cols="2">58.8 69.9*</cell><cell>76.6* 61.7</cell><cell>76.1* 65.9</cell><cell>75.9* 62.7</cell><cell>3.67* 3.00</cell></row><row><cell>Interface 3 (DNN: Top-3 similar, AVF: feature bar chart)</cell><cell>84.2*↑</cell><cell>1 70.4↑</cell><cell>1</cell><cell>82.3*↑</cell><cell>1 56.2</cell><cell>65.3↑</cell><cell>1 71.2</cell><cell>69.9* 63.3</cell><cell>78.2* 58.7</cell><cell>77.7* 55.4</cell><cell>3.90* 2.99</cell></row></table></figure>
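The Bonferroni correction used in the interface-wise tests of Table 2 can be sketched in a few lines; `bonferroni` is our own illustrative helper, not code from the study. With n = 3 interface comparisons, each raw p-value is compared against α/n = 0.05/3 ≈ 0.017:

```python
def bonferroni(p_values, alpha=0.05):
    """Flag which raw p-values survive a Bonferroni correction:
    each p-value is compared against alpha / n, where n is the
    number of comparisons performed (here, pairs of interfaces)."""
    n = len(p_values)
    threshold = alpha / n
    return [(p, p < threshold) for p in p_values]
```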
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">IntRS'18, October 2018, Vancouver, Canada Dominguez et al.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">ACKNOWLEDGEMENTS</head><p>The authors from PUC Chile were funded by Conicyt, Fondecyt grant 11150783, as well as by the Millennium Institute for Foundational Research on Data (IMFD).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Mining large streams of user data for personalized recommendations</title>
		<author>
			<persName><forename type="first">Xavier</forename><surname>Amatriain</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM SIGKDD Explorations Newsletter</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="37" to="48" />
			<date type="published" when="2013">2013. 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Personalized museum experience: The Rijksmuseum use case</title>
		<author>
			<persName><surname>Lm Aroyo</surname></persName>
		</author>
		<author>
			<persName><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Brussee</surname></persName>
		</author>
		<author>
			<persName><surname>Gorgels</surname></persName>
		</author>
		<author>
			<persName><surname>Rutledge</surname></persName>
		</author>
		<author>
			<persName><surname>Stash</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Museums and the Web</title>
				<meeting>Museums and the Web</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Personalizing the Museum Experience through Context-Aware Recommendations</title>
		<author>
			<persName><forename type="first">Idir</forename><surname>Benouaret</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dominique</forename><surname>Lenne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on. IEEE</title>
				<imprint>
			<date type="published" when="2015">2015. 2015</date>
			<biblScope unit="page" from="743" to="748" />
		</imprint>
	</monogr>
	<note>Systems, Man, and Cybernetics (SMC)</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Music recommendation</title>
		<author>
			<persName><forename type="first">Oscar</forename><surname>Celma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Music Recommendation and Discovery</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="43" to="85" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">The Assessment of Art Attributes</title>
		<author>
			<persName><forename type="first">Anjan</forename><surname>Chatterjee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Page</forename><surname>Widick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rebecca</forename><surname>Sternschein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">William</forename><surname>Smith</surname><genName>II</genName></persName>
		</author>
		<author>
			<persName><forename type="first">Bianca</forename><surname>Bromberger</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="207" to="222" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Imagenet: A large-scale hierarchical image database</title>
		<author>
			<persName><forename type="first">Jia</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wei</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Richard</forename><surname>Socher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Li-Jia</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kai</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Li</forename><surname>Fei-Fei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2009 IEEE Conference on Computer Vision and Pattern Recognition</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="248" to="255" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Comparing Neural and Attractiveness-based Visual Features for Artwork Recommendation</title>
		<author>
			<persName><forename type="first">Vicente</forename><surname>Dominguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pablo</forename><surname>Messina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Denis</forename><surname>Parra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Domingo</forename><surname>Mery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christoph</forename><surname>Trattner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alvaro</forename><surname>Soto</surname></persName>
		</author>
		<idno type="DOI">10.1145/3125486.3125495</idno>
		<idno type="arXiv">arXiv:1706.07515</idno>
		<ptr target="http://dx.doi.org/10.1145/3125486.3125495" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Workshop on Deep Learning for Recommender Systems, co-located at RecSys</title>
				<meeting>the Workshop on Deep Learning for Recommender Systems, co-located at RecSys</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Exploiting food choice biases for healthier recipe recommendation</title>
		<author>
			<persName><forename type="first">David</forename><surname>Elsweiler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christoph</forename><surname>Trattner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Morgan</forename><surname>Harvey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 40th international acm sigir conference on research and development in information retrieval</title>
				<meeting>the 40th international acm sigir conference on research and development in information retrieval</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="575" to="584" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">From captions to visual concepts and back</title>
		<author>
			<persName><forename type="first">Hao</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Saurabh</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Forrest</forename><surname>Iandola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rupesh</forename><surname>Srivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Li</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Piotr</forename><surname>Dollár</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianfeng</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaodong</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Margaret</forename><surname>Mitchell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>Platt</surname></persName>
		</author>
		<author>
			<persName><surname>Others</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The Netflix recommender system: Algorithms, business value, and innovation</title>
		<author>
			<persName><forename type="first">Carlos</forename><forename type="middle">A</forename><surname>Gomez-Uribe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Neil</forename><surname>Hunt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Management Information Systems (TMIS)</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">13</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Digital Image Processing Using MATLAB</title>
		<author>
			<persName><forename type="first">Rafael</forename><forename type="middle">C</forename><surname>Gonzalez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Richard</forename><forename type="middle">E</forename><surname>Woods</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steven</forename><forename type="middle">L</forename><surname>Eddins</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Prentice Hall</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Vista: A Visually, Socially, and Temporally-aware Model for Artistic Recommendation</title>
		<author>
			<persName><forename type="first">Ruining</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chen</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhaowen</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Julian</forename><surname>Mcauley</surname></persName>
		</author>
		<idno type="DOI">10.1145/2959100.2959152</idno>
		<ptr target="http://dx.doi.org/10.1145/2959100.2959152" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th ACM Conference on Recommender Systems (RecSys &apos;16)</title>
				<meeting>the 10th ACM Conference on Recommender Systems (RecSys &apos;16)<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="309" to="316" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Generating visual explanations</title>
		<author>
			<persName><forename type="first">Lisa</forename><forename type="middle">Anne</forename><surname>Hendricks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zeynep</forename><surname>Akata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marcus</forename><surname>Rohrbach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeff</forename><surname>Donahue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bernt</forename><surname>Schiele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Trevor</forename><surname>Darrell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">European Conference on Computer Vision</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="3" to="19" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Recommender systems: from algorithms to user experience</title>
		<author>
			<persName><forename type="first">Joseph</forename><forename type="middle">A</forename><surname>Konstan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>Riedl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">User Modeling and User-Adapted Interaction</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="101" to="123" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">ImageNet classification with deep convolutional neural networks</title>
		<author>
			<persName><forename type="first">Alex</forename><surname>Krizhevsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ilya</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Geoffrey</forename><forename type="middle">E</forename><surname>Hinton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1097" to="1105" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Agents that reduce work and information overload</title>
		<author>
			<persName><forename type="first">Pattie</forename><surname>Maes</surname></persName>
		</author>
		<author>
			<persName><surname>Others</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="30" to="40" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Don&apos;t look stupid: avoiding pitfalls when recommending research papers</title>
		<author>
			<persName><forename type="first">Sean</forename><forename type="middle">M</forename><surname>McNee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nishikant</forename><surname>Kapoor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joseph</forename><forename type="middle">A</forename><surname>Konstan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work</title>
				<meeting>the 2006 20th anniversary conference on Computer supported cooperative work</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="171" to="180" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Content-Based Artwork Recommendation: Integrating Painting Metadata with Neural and Manually-Engineered Visual Features</title>
		<author>
			<persName><forename type="first">Pablo</forename><surname>Messina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vicente</forename><surname>Dominguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Denis</forename><surname>Parra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christoph</forename><surname>Trattner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alvaro</forename><surname>Soto</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11257-018-9206-9</idno>
		<ptr target="http://dx.doi.org/10.1007/s11257-018-9206-9" />
	</analytic>
	<monogr>
		<title level="m">User Modeling and User-Adapted Interaction</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Midge: Generating Image Descriptions from Computer Vision Detections</title>
		<author>
			<persName><forename type="first">Margaret</forename><surname>Mitchell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xufeng</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jesse</forename><surname>Dodge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alyssa</forename><surname>Mensch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amit</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alex</forename><surname>Berg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kota</forename><surname>Yamaguchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tamara</forename><surname>Berg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Karl</forename><surname>Stratos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hal</forename><surname>Daumé</surname><genName>III</genName></persName>
		</author>
		<ptr target="http://dl.acm.org/citation.cfm?id=2380816.2380907" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL &apos;12). Association for Computational Linguistics</title>
				<meeting>the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL &apos;12). Association for Computational Linguistics<address><addrLine>Stroudsburg, PA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="747" to="756" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">Chris</forename><surname>Olah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><surname>Mordvintsev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ludwig</forename><surname>Schubert</surname></persName>
		</author>
		<idno type="DOI">10.23915/distill.00007</idno>
		<ptr target="http://dx.doi.org/10.23915/distill.00007" />
		<title level="m">Feature Visualization</title>
				<imprint>
			<publisher>Distill</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Ranking and Classifying Attractiveness of Photos in Folksonomies</title>
		<author>
			<persName><forename type="first">Jose</forename><surname>San Pedro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Siersdorfer</surname></persName>
		</author>
		<idno type="DOI">10.1145/1526709.1526813</idno>
		<ptr target="http://dx.doi.org/10.1145/1526709.1526813" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 18th International Conference on World Wide Web (WWW &apos;09)</title>
				<meeting>the 18th International Conference on World Wide Web (WWW &apos;09)<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="771" to="780" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">A folksonomy-based recommender system for personalized access to digital artworks</title>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Semeraro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pasquale</forename><surname>Lops</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>De Gemmis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cataldo</forename><surname>Musto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fedelucio</forename><surname>Narducci</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal on Computing and Cultural Heritage (JOCCH)</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">11</biblScope>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">CNN features off-the-shelf: an astounding baseline for recognition</title>
		<author>
			<persName><forename type="first">Ali</forename><surname>Sharif Razavian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hossein</forename><surname>Azizpour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Josephine</forename><surname>Sullivan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Carlsson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition Workshops</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="806" to="813" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Explaining recommendations: Design and evaluation</title>
		<author>
			<persName><forename type="first">Nava</forename><surname>Tintarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Judith</forename><surname>Masthoff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Recommender Systems Handbook</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="353" to="382" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Understanding the Impact of Weather for POI Recommendations</title>
		<author>
			<persName><forename type="first">Christoph</forename><surname>Trattner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><surname>Oberegger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lukas</forename><surname>Eberhard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Denis</forename><surname>Parra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Leandro</forename><surname>Marinho</surname></persName>
		</author>
		<author>
			<persName><surname>Others</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of RecTour Workshop</title>
				<meeting>RecTour Workshop</meeting>
		<imprint>
			<publisher>co-located at ACM RecSys</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Multimedia for art retrieval (m4art)</title>
		<author>
			<persName><forename type="first">Egon</forename><forename type="middle">L</forename><surname>Van Den Broek</surname></persName>
		</author>
		<author>
			<persName><surname>Kok</surname></persName>
		</author>
		<author>
			<persName><surname>Schouten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eduard</forename><surname>Hoenkamp</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Multimedia Content Analysis, Management, and Retrieval</title>
				<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">6073</biblScope>
			<biblScope unit="page">60730Z</biblScope>
		</imprint>
	</monogr>
	<note>International Society for Optics and Photonics</note>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">Deborah</forename><surname>Weinswig</surname></persName>
		</author>
		<ptr target="https://www.forbes.com/sites/deborahweinswig/2016/05/13/art-market-cooling-but-online-sales-booming/" />
		<title level="m">Art Market Cooling, But Online Sales Booming</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note>Accessed: 21-March-2017</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Exploiting geographical influence for collaborative point-of-interest recommendation</title>
		<author>
			<persName><forename type="first">Mao</forename><surname>Ye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peifeng</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wang-Chien</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dik-Lun</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval</title>
				<meeting>the 34th international ACM SIGIR conference on Research and development in Information Retrieval</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="325" to="334" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<title/>
		<author>
			<persName><forename type="first">Quan</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gao</forename><surname>Cong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zongyang</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aixin</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nadia</forename><surname>Magnenat Thalmann</surname></persName>
		</author>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Time-aware point-of-interest recommendation</title>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval</title>
				<meeting>the 36th international ACM SIGIR conference on Research and development in information retrieval</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="363" to="372" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
