<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Affective Computing and Bandits: Capturing Context in Cold Start Situations</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sebastian</forename><surname>Oehme</surname></persName>
							<email>sebastian.oehme@tum.de</email>
							<affiliation key="aff0">
								<orgName type="department">School of Engineering</orgName>
								<orgName type="institution">Technical University of Munich</orgName>
								<address>
									<settlement>Garching</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Linus</forename><forename type="middle">W</forename><surname>Dietz</surname></persName>
							<email>linus.dietz@tum.de</email>
							<affiliation key="aff1">
								<orgName type="department">Department of Informatics</orgName>
								<orgName type="institution">Technical University of Munich</orgName>
								<address>
									<settlement>Garching</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Affective Computing and Bandits: Capturing Context in Cold Start Situations</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">D1818C57BED67781A82110143F80FFA2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T04:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Recommender systems</term>
					<term>affective computing</term>
					<term>bandit algorithms</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The cold start problem describes the initial phase of a collaborative recommender in which the quality of recommendations is low due to an insufficient number of ratings. Overcoming this is crucial because low recommendation quality impedes the system's adoption. In this paper, we propose capturing context via computer vision to improve recommender systems in the cold start phase. Computer vision algorithms can derive stereotypes such as gender or age, but also the user's emotions, without explicit interaction. We present an approach based on the statistical framework of bandit algorithms to incorporate stereotypic information and affective reactions into the recommendation. In a preliminary evaluation in a lab study with 21 participants, we already observe an improvement in the number of positive ratings. Furthermore, we report additional findings from experimenting with affective computing for recommender systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Recommender systems (RS) match items to users; the accuracy of recommendations therefore depends heavily on the quality of the information the system has about both. Collaborative filtering (CF) is frequently used when the items' characteristics are unknown or costly to derive. CF systems are, however, not suited for scenarios where the user is anonymous and interacts with the RS only for a short period. For example, a smart display inside a fashion store could provide recommendations, but the interaction will be brief and tentative. In such cold start scenarios, the literature suggests including context and stereotypes in the recommendations <ref type="bibr" target="#b0">[1]</ref>. If the weather is hot, suggest bathing attire; a male customer will need shorts instead of a bikini. Motivated by this kind of scenario, we develop an affective RS <ref type="bibr" target="#b12">[13]</ref> based on stereotypes derived via computer vision with little user collaboration. Our research was guided by the following questions: RQ 1: How can stereotypic information be incorporated into an RS? RQ 2: Can facial classification and affective reactions be a surrogate for explicit feedback? In the following section, we describe the foundations of our RS: bandit strategies and facial classification using computer vision.</p><p>Then, an in-depth description of the proposed approach and a preliminary evaluation in a user study follow in Section 3. Finally, we draw our conclusions and point out future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">FOUNDATIONS</head><p>Ever since Grundy <ref type="bibr" target="#b9">[10]</ref>, it has been known that stereotypic information can be used to model users <ref type="bibr" target="#b1">[2]</ref> and thereby improve recommendation accuracy. Driven by our research questions, we discuss a combination of two concepts applied to recommender systems: contextual bandits and facial classification using computer vision.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Bandit Strategies</head><p>In real-world applications, recommendations are often linked to a reward. For example, the purpose of recommendations in a shop is to increase revenue by suggesting products that customers are more likely to buy. However, calculating the probability of a successful recommendation directly is usually not possible due to a lack of information about the customer's taste and the attractiveness of items.</p><p>Bandit strategies provide a computational framework that trades off profit maximization via items known to sell well against experimentation with items whose potential is yet to be determined. The terminology stems from the probability theory of gambling <ref type="bibr" target="#b11">[12]</ref>. A gambler at a row of one-armed bandits (slot machines) has to decide based on incomplete knowledge: which arm to play, how often to pull, and when to play <ref type="bibr" target="#b5">[6]</ref>. A bandit recommender engine seeks the right balance between experimenting with new recommendations, i.e., exploration, and exploiting items that are already known to have a high chance of reward. A classic algorithm for handling exploration vs. exploitation is the ε-Greedy algorithm <ref type="bibr" target="#b10">[11]</ref>. With probability ε, it explores a randomly chosen arm; otherwise, it exploits the arm with the highest expected reward observed so far. In cold start situations, however, a bandit recommender suffers from limitations similar to those of traditional methods such as collaborative filtering. This can be overcome by adding context information, e.g., demographic information <ref type="bibr" target="#b7">[8]</ref>, to augment the bandit's choice between exploration and exploitation with more data. These types of bandit strategies are referred to as contextual bandits.
In contrast to the ε-Greedy algorithm, they incorporate contextual information and are able to choose their action based on the situation. The classic algorithm is the Contextual-ε-Greedy strategy <ref type="bibr" target="#b2">[3]</ref>. At each turn, it compares the user's situation (e.g., location, time, social activity) to a set of high-level 'critical situations'. If the situation is critical, the algorithm exploits this by showing items that are known to be well suited and similar. Consequently, it explores other items if the situation is not critical. It has been shown that the Contextual-ε-Greedy algorithm generally achieves better click-through rates than ε-Greedy algorithms or pure exploration.</p><p>In our approach, we propose using facial classification through computer vision to infer age, gender and emotions as contextual information within a contextual bandit algorithm.</p></div>
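The basic ε-Greedy mechanics described above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation; the class and method names are ours:

```python
import random

class EpsilonGreedy:
    """Minimal ε-Greedy bandit: explore with probability ε, else exploit."""

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms     # number of pulls per arm
        self.rewards = [0.0] * n_arms  # cumulative reward per arm

    def expected_reward(self, arm):
        # Average reward per pull; 0.0 for arms never tried
        if self.counts[arm] == 0:
            return 0.0
        return self.rewards[arm] / self.counts[arm]

    def select_arm(self):
        if random.random() < self.epsilon:
            # Explore: pick any arm uniformly at random
            return random.randrange(len(self.counts))
        # Exploit: pick the arm with the best empirical average
        return max(range(len(self.counts)), key=self.expected_reward)

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.rewards[arm] += reward
```

A contextual variant, as discussed next, replaces the fixed ε with a value derived from the current situation.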
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Facial Classification</head><p>Computer vision has already been used to improve systems situated in public places. For example, Müller et al. <ref type="bibr" target="#b4">[5]</ref> described a system for digital signage. However, this and similar early approaches were ahead of their time: due to low face-detection accuracy, the outcomes of these experiments were not significant. Computer vision-based approaches analyze users' faces frame by frame via facial recognition software during an experimental task such as watching videos. Zhao et al. <ref type="bibr" target="#b14">[15]</ref> drew affective cues from changes in the users' affective states. They used emotional changes to segment videos, classified each video's category and then presented recommendations. Tkalčič et al. proposed a framework for affective recommender systems, in which they distinguish three phases of user interaction: the entry, consumption, and exit stage <ref type="bibr" target="#b12">[13]</ref>. The affective cues drawn while watching content in the consumption stage are compared to the emotional state in the entry phase. The exit stage can simultaneously be the following entry stage when the next item is recommended, and the looped process continues. Affective labeling of users' faces has been applied, e.g., to RSs <ref type="bibr" target="#b13">[14]</ref> and commercials <ref type="bibr" target="#b3">[4]</ref>, with promising results in terms of accuracy and user satisfaction.</p><p>The accuracy of classification and the runtime performance of computer vision algorithms have improved over the past years, and with YOLO <ref type="bibr" target="#b8">[9]</ref>, the breakthrough to real-time object detection has been achieved. In emotion detection, the state-of-the-art algorithms are closed source and only available via web APIs.
Prominent vendors like Microsoft Face<ref type="foot" target="#foot_0">1</ref>, Kairos<ref type="foot" target="#foot_1">2</ref> and Affectiva<ref type="foot" target="#foot_2">3</ref> offer RESTful client libraries and respective pricing models. The centralization of this technology in a few market players that cloak their algorithms in secrecy should be viewed with concern. Nevertheless, it should also be mentioned that such systems improve with the size of the training set and enable researchers to work with this technology without special hardware requirements. In our recommender system, we use the Microsoft Face service to detect the age, gender and emotions of our test subjects. The Face Emotion Recognition API returns continuous values in [0, 1] for the following emotions: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise, at a small cost of about €1.40 per 1,000 requests.</p></div>
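For downstream similarity computations, the per-emotion scores from such a service can be arranged into a fixed-order vector. The dictionary layout below is illustrative only and does not reflect the actual Microsoft Face wire format:

```python
# Fixed ordering of the eight emotions returned by the service
EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "neutral", "sadness", "surprise"]

def to_emotion_vector(scores):
    """Map a {emotion: score} dict (scores in [0, 1]) to a fixed-order
    vector, defaulting missing emotions to 0.0."""
    return [float(scores.get(name, 0.0)) for name in EMOTIONS]
```

Keeping a fixed ordering ensures that vectors from different frames and users are directly comparable, e.g., via cosine similarity.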
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">CONTEXTUAL RECOMMENDER MODEL</head><p>In our RS, the items are displayed to the user successively. While the user inspects the items, she is observed by a camera whose imagery is continuously analyzed by computer vision. In this section, we first present how we incorporated computer vision into the recommendation task, followed by the experimental setup and our findings.</p><p>Our model extends the approach of Bouneffouf et al. <ref type="bibr" target="#b2">[3]</ref> and likewise proceeds in discrete trials t = 1 . . . T . At each t, the following tasks are performed: Task 1: Let U_t be the current user's profile and P the set of other known user profiles. The system compares U_t with the user profiles in P in order to choose the most similar one, U_P:</p><formula xml:id="formula_0">U_P = \operatorname*{argmax}_{U_c \in P} \, sim(U_t, U_c)<label>(1)</label></formula><p>Our adapted similarity metric is the weighted sum of the similarity metrics for age, gender, and EF, the combination of emotions and feedback. α, β, γ are the weights associated with these metrics, defined in the following subsection:</p><formula xml:id="formula_1">sim(U_t, U_c) = \alpha \cdot sim(a_t, a_c) + \beta \cdot sim(g_t, g_c) + \gamma \cdot EF<label>(2)</label></formula><p>EF, short for emotional feedback, corresponds to the sum of k affective reactions sim_k(e_k^t, e_k^c) ∈ [0, 1], conditioned on equal feedback sim_k(f_k^t, f_k^c) ∈ {0, 1} of the current user with respect to other users' profiles. This feedback, called reward in the bandit terminology, can be any explicit or implicit feedback to the item, e.g., the user's rating or adding the item to the shopping basket. If the feedback differs for an item, this item's affective reaction will not contribute to the sum, hence it will be 0.
EF is normalized to the number of items i which U_t has seen so far.</p><formula xml:id="formula_2">EF = \frac{\sum_k sim_k(f_k^t, f_k^c) \cdot \left(1 + sim_k(e_k^t, e_k^c)\right)}{2i}<label>(3)</label></formula><p>Task 2: Let M be the set of items, M_t the items seen by the current user U_t, and M_P \subseteq M \setminus M_t the items recommended to the user U_P but not to U_t. After retrieving M_P, the system displays the next item m ∈ M_P to U_t while observing the user's affective reactions during presentation. Task 3: After receiving the user's reward, the algorithm refines its item selection strategy with the new observation: user U_P gives item m_P a binary reward. The expected reward for an item is the average reward per total number of ratings n.</p><p>Our adapted Contextual-ε-Greedy recommends items as follows:</p><formula xml:id="formula_3">m = \begin{cases} \operatorname*{argmax}_{m \in M_P} expectedReward(m) &amp; \text{if } q &gt; \varepsilon \\ random(M \setminus M_t) &amp; \text{otherwise} \end{cases}<label>(4)</label></formula><p>In Equation <ref type="formula" target="#formula_3">4</ref>, the random variable q is responsible for the exploration versus exploitation behavior. In our approach it is uniformly distributed over [0, 1]. If q is larger than ε, the item with the highest expected reward from M_P = {m_1, . . . , m_P}, i.e., the items rated by the most similar user, is selected. This requires at least one item that the past user rated positively and the current user has not yet seen. In case all suitable items have been exploited, or the current user is the first user and hence no other user profiles exist, the algorithm falls back to exploration, where random(M \setminus M_t) selects a random unseen item.</p><p>To influence the original ε-Greedy algorithm with contextual information, ε is computed from the maximum of Equation 2, the similarity of the current user's profile U_t to the profile U_P of the most similar other user:</p><formula xml:id="formula_4">\varepsilon = 1 - \max_{U_c \in P} sim(U_t, U_c)<label>(5)</label></formula></div>
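The selection rule of Equations 4 and 5 can be summarized in a short sketch. The function signature and helper names are ours, not taken from the paper's prototype; the similarity and expected-reward functions are passed in as callables:

```python
import random

def contextual_epsilon_greedy(current_user, profiles, similarity,
                              expected_reward, unseen_items, peer_items):
    """Sketch of Equations 4 and 5: derive ε from the best profile
    similarity, then exploit the most promising item of the most similar
    peer, or explore a random unseen item."""
    # Equation 5: high similarity to a known profile -> low ε -> exploit more
    best_sim = max((similarity(current_user, p) for p in profiles), default=0.0)
    epsilon = 1.0 - best_sim

    q = random.random()  # uniform over [0, 1]
    if q > epsilon and peer_items:
        # Exploit: item with the highest expected reward among the peer's items
        return max(peer_items, key=expected_reward)
    # Explore: fall back to any unseen item chosen at random
    return random.choice(unseen_items)
```

With no known profiles (the very first user), `best_sim` is 0, ε becomes 1, and the sketch degenerates to pure exploration, matching the fallback behavior described above.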
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Similarity Measures</head><p>The Contextual-ε-Greedy strategy is driven by the stereotypic similarity of the current user to previously seen users. In this first experiment, we used α = β = 0.25 and γ = 0.5 as weights for Equation <ref type="formula">2</ref>.</p><p>Gender similarity is binary, due to the output of the employed facial classification algorithm. Either it matches, or it does not: sim(g_t, g_c) ∈ {0, 1}.</p><p>Age similarity is fuzzier, and we have not found an established similarity measure in the literature. Therefore, we constructed an ad-hoc similarity measure sim(a_t, a_c) ∈ [0, 1], which considers age differences of up to 15 years as somewhat similar <ref type="bibr" target="#b6">[7]</ref>.</p><p>Emotional similarity measures the affective response to a displayed item in comparison to the emotional reactions of previous users to it. As previously mentioned, today's computer vision algorithms are capable of detecting several emotions at once. Therefore, it is calculated as the cosine similarity of two emotion vectors, as can be seen in Equation <ref type="formula">6</ref>.</p><formula xml:id="formula_5">sim(e^t, e^c) = \frac{\sum_{i=1}^{n} \bar{e}_i^t \cdot \bar{e}_i^c}{\sqrt{\sum_{i=1}^{n} (\bar{e}_i^t)^2} \cdot \sqrt{\sum_{i=1}^{n} (\bar{e}_i^c)^2}}<label>(6)</label></formula></div>
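Equations 2 and 6 translate directly into code. The sketch below uses the experiment's weights as defaults; the function names are ours, and the age and gender similarities are assumed to be computed beforehand:

```python
import math

def cosine_similarity(e_t, e_c):
    """Equation 6: cosine similarity of two emotion vectors."""
    dot = sum(a * b for a, b in zip(e_t, e_c))
    norm = math.sqrt(sum(a * a for a in e_t)) * math.sqrt(sum(b * b for b in e_c))
    return dot / norm if norm else 0.0

def profile_similarity(age_sim, gender_sim, ef, alpha=0.25, beta=0.25, gamma=0.5):
    """Equation 2 with the weights used in the experiment:
    a weighted sum of age, gender, and emotional-feedback similarity."""
    return alpha * age_sim + beta * gender_sim + gamma * ef
```

Since all three component similarities lie in [0, 1] and the weights sum to 1, the combined similarity also stays in [0, 1], which keeps ε = 1 − sim well-defined.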
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Capturing Affective Cues</head><p>Microsoft Face analyzes the user's face for age, gender, and up to eight emotions. Experimenting with the computer vision service before the main experiment showed that users tend to express their emotional reactions shortly before requesting the next item and maintain their facial expression for some time when the next item is already shown. We call this 'overflowing emotions', as the user's emotional reaction to the previous item overflows to the current item and is then adjusted during the consumption and exit stage.</p><p>Since we are interested in the actual response to the item after the content has been processed, we used the following weighted average over all analyzed frames n as the aggregated metric to emphasize the emotions from the exit stage.</p><formula xml:id="formula_6">\bar{e} = \frac{\sum_{i=1}^{n} 2^i \cdot e_i}{\sum_{i=1}^{n} 2^i}<label>(7)</label></formula><p>Figure <ref type="figure" target="#fig_0">1</ref> compares the mean value to our proposed weighted average. Over the course of three items, the level of observed happiness is shown in orange, covering 15 frames in the case of Item A. Since we assume that the important reaction to the content occurs at the end of the item display period, we are quite satisfied with our weighted mean calculation. Note that we used a sampling rate of one analyzed frame per second.</p><p>An alternative would have been to aggregate over the last p% of the frames. While we think that our measure is more robust, an in-depth analysis of different aggregation strategies is left for future work. Another idea for separating successive content is to show a neutral screen for some time before displaying the next item. It is, however, unclear what an adequate duration would be, as users tend to show emotions for an unknown time span and may find such a delay annoying.</p></div>
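Under our reading of Equation 7 (frame indices starting at 1, weights doubling per frame so late frames from the exit stage dominate), the aggregation can be sketched as:

```python
def weighted_emotion_average(frames):
    """Equation 7: weight frame i by 2^i so the last frames (exit stage)
    dominate the aggregate.

    `frames` is a list of per-frame intensities for one emotion,
    sampled at one frame per second.
    """
    weights = [2 ** i for i in range(1, len(frames) + 1)]
    return sum(w * e for w, e in zip(weights, frames)) / sum(weights)
```

For example, with three frames the weights are 2, 4 and 8, so the final frame alone accounts for 8/14 of the aggregate, which is exactly the emphasis on the exit stage the measure is designed for.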
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Prototype and Experiment</head><p>To evaluate our approach, we implemented an image recommender prototype using Python. Figure <ref type="figure" target="#fig_1">2</ref> shows the high-level architecture: the core part is a Flask<ref type="foot" target="#foot_3">4</ref> web server that serves web pages with the recommendations based on context information (age, gender, emotions) from the computer vision service and the history of user interactions retrieved from a PostgreSQL<ref type="foot" target="#foot_4">5</ref> database.</p><p>To answer our second research question, we compare our variant of the Contextual-ε-Greedy with the traditional ε-Greedy in a controlled lab experiment. The experimental procedure was as follows: The participant's task is to rate images. Hoping to evoke a large spectrum of emotions, we used a self-scraped data set of 3000 memes collected from the social web platform 9gag<ref type="foot" target="#foot_5">6</ref> between January 24 and February 9, 2018. The subject is instructed to take a seat in front of the screen with a webcam; it is pointed out that the camera is recording and that information is being stored according to local data privacy protection laws. She is asked to view consecutively displayed images and provide feedback for each one in the form of a 'like' or 'dislike' rating. The recommendation engine attempts to optimize the amount of positive feedback using our Contextual-ε-Greedy or the baseline ε-Greedy. Each subject is shown 60 images per strategy; the strategy is our independent variable. The order of the strategies is selected at random without the subject being aware of this.</p><p>We conducted the experiment in April 2018 in Garching with 21 volunteers (11 f / 10 m) affiliated with the Technical University of Munich. The subjects' ages ranged from 19 to 31 years with a mean of 24.09.
The dependent variables are the users' feedback to the items, the detected affective cues from the computer vision service, and additional information collected with a questionnaire.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Evaluation Results</head><p>In the convergence analysis of the algorithms, we observe an improvement of the accuracy over time, i.e., in the number of positive ratings, for both recommendation strategies. To showcase this, we fit a linear model over the algorithm convergence described in Table <ref type="table" target="#tab_2">1</ref>. Over the course of 21 observations, the Contextual-ε-Greedy starts slightly worse with 46.64% positive rewards; however, it improves faster over time, reaching 60.7% at the end of the experiment. Note that the difference between the strategies is not significant and this model should not be used to predict further observations. Clearly, 21 observations with 60 ratings each are not enough for the bandit algorithms to converge. A closer look into the properties of the Contextual-ε-Greedy algorithm reveals avenues for improvement. Figure <ref type="figure" target="#fig_2">3</ref> depicts the similarity of a participant's stereotypic attributes to the previous subjects. The most similar user pair per column has the lowest ε and was leveraged by the Contextual algorithm for recommending the next item (cf. Equation <ref type="formula" target="#formula_3">4</ref>). A clearly visible pattern is that matching gender plays a dominant role in the distance measure. Depending on the recommended items, this could be adjusted in future studies.</p><p>Further, we notice that the Microsoft Face algorithm mostly detected two emotions. Overall, happiness and neutral make up 93.65% of the observed emotions, with neutral being the dominant emotion.
However, as seen in Table <ref type="table" target="#tab_3">2</ref>, positive feedback is more likely if the affective response was happiness instead of neutral.</p><p>Overall, the subjects rated 53.97% of the items positively, although this varied considerably across users, ranging from only 3 positive ratings up to 47 of 60. Also, the experiment showed that the duration of item consumption varies, underlining the need for a dynamic aggregation of the analyzed frames as in Equation <ref type="formula">7</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">CONCLUSIONS AND FUTURE WORK</head><p>Bandit algorithms provide a robust framework not only for online advertisement, but also for personalized recommendations. The possibility of calibrating the exploration vs. exploitation probabilities using weighted similarity measures is an elegant way to hybridize recommendation and active learning. Although computer vision has not yet reached its full potential, it is sufficiently affordable and accurate to experiment with for RS research.</p><p>In this paper, we have presented an approach for recommending images using bandit algorithms and computer vision, focusing on improving recommendations in the cold start phase. Although our contextual bandit algorithm was not significantly better than the baseline, our work comprises the following contributions: (1) We have developed a practical approach for using information from facial classification within RSs, (2) we presented an adaptation of the Contextual-ε-Greedy suited for incorporating stereotypic information, (3) we developed a strategy with a weighted average to mitigate the overflowing emotions problem, and (4) we have shown in a lab study that, by putting the pieces together, an improvement in recommendation accuracy can be achieved. While this study was conducted with the informed consent of the participants, the unconscious measuring of people's emotions in real-world applications raises serious privacy concerns.</p><p>Having realized this prototype based on many assumptions, we can highlight the path for further research: Our post-mortem analysis has shown the necessity of an evidence-based method for adjusting the weights of the hybrid similarity measure. Having identified the 'overflowing emotions' problem in sequential recommendations, an in-depth analysis thereof would be interesting.
Finally, we plan to analyze the long-term convergence of our bandit recommender algorithm in a larger field experiment against simpler baselines, e.g., random items, and to investigate the accuracy of emotional classification and its potential impact on performance.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Overflowing Emotions. Happiness Example</figDesc><graphic coords="3,53.80,609.25,240.23,74.89" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Prototype System Architecture</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Values of ε Throughout the Contextual Experiment</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>IntRS'18, October 2018, Vancouver, Canada Sebastian Oehme and Linus W. Dietz</figDesc><table><row><cell>Age</cell><cell>20</cell><cell>26</cell><cell>19</cell><cell>25</cell><cell>23</cell><cell>29</cell><cell>31</cell><cell>26</cell><cell>24</cell><cell>21</cell><cell>25</cell><cell>21</cell><cell>28</cell><cell>24</cell><cell>24</cell><cell>21</cell><cell>21</cell><cell>26</cell><cell>23</cell><cell>25</cell><cell>24</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1 :</head><label>1</label><figDesc>Linear Trend Models of Rewards</figDesc><table><row><cell>Strategy</cell><cell>Linear Equation</cell><cell>f(21)</cell></row><row><cell>ε-Greedy</cell><cell>f(x) = 0.47754 + 0.0047835 · x</cell><cell>0.578</cell></row><row><cell>Contextual-ε-Greedy</cell><cell>f(x) = 0.463968 + 0.0068831 · x</cell><cell>0.607</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2 :</head><label>2</label><figDesc>Correlation of Emotions with Rating Feedback</figDesc><table><row><cell cols="4">Feedback happiness neutral other n</cell></row><row><cell>positive</cell><cell>25.06%</cell><cell>68.90%</cell><cell>6.04% 680</cell></row><row><cell>negative</cell><cell>7.24%</cell><cell>86.04%</cell><cell>6.72% 580</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://azure.microsoft.com/en-us/services/cognitive-services/face/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://www.kairos.com/emotion-analysis-api</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://www.affectiva.com/product/emotion-sdk/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">http://flask.pocoo.org</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://www.postgresql.org</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">https://9gag.com</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Context-Aware Recommender Systems</title>
		<author>
			<persName><forename type="first">Gediminas</forename><surname>Adomavicius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><surname>Tuzhilin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Recommender Systems Handbook</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="191" to="226" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">User Profiling Approaches for Demographic Recommender Systems</title>
		<author>
			<persName><forename type="first">Mohammad</forename><surname>Yahya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Al-Shamri</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">100</biblScope>
			<biblScope unit="page" from="175" to="187" />
			<date type="published" when="2016">2016. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A Contextual-Bandit Algorithm for Mobile Context-Aware Recommender System</title>
		<author>
			<persName><forename type="first">Djallel</forename><surname>Bouneffouf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amel</forename><surname>Bouzeghoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alda</forename><forename type="middle">Lopes</forename><surname>Gançarski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Neural Information Processing</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="324" to="331" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Collaborative Filtering with Facial Expressions for Online Video Recommendation</title>
		<author>
			<persName><forename type="first">Young</forename><surname>Il</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Myung</forename><forename type="middle">Geun</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jae</forename><forename type="middle">Kyeong</forename><surname>Oh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Young</forename><forename type="middle">U</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><surname>Ryu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Information Management</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="397" to="402" />
			<date type="published" when="2016">2016. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">eMir: Digital Signs that React to Audience Emotion</title>
		<author>
			<persName><forename type="first">Juliane</forename><surname>Exeler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Markus</forename><surname>Buzeck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jörg</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2nd Workshop on Pervasive Advertising</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="38" to="44" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Bandit Processes and Dynamic Allocation Indices</title>
		<author>
			<persName><forename type="first">John</forename><forename type="middle">C</forename><surname>Gittins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Royal Statistical Society: Series B (Statistical Methodology)</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="148" to="177" />
			<date type="published" when="1979">1979. 1979</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Utilizing Facial Classification for Improving Recommender Systems</title>
		<author>
			<persName><forename type="first">Sebastian</forename><surname>Oehme</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
		<respStmt>
			<orgName>Technical University of Munich</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Bachelor&apos;s thesis</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A Framework for Collaborative, Content-Based and Demographic Filtering</title>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">J</forename><surname>Pazzani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence Review</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="393" to="408" />
			<date type="published" when="1999-12">Dec. 1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">You Only Look Once: Unified, Real-Time Object Detection</title>
		<author>
			<persName><forename type="first">Joseph</forename><surname>Redmon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Santosh</forename><surname>Divvala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ross</forename><surname>Girshick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ali</forename><surname>Farhadi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference on Computer Vision and Pattern Recognition (CVPR &apos;16)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="779" to="788" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">User Modeling via Stereotypes</title>
		<author>
			<persName><forename type="first">Elaine</forename><surname>Rich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cognitive Science</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="329" to="354" />
			<date type="published" when="1979-10">Oct. 1979</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Reinforcement Learning: An Introduction</title>
		<author>
			<persName><forename type="first">Richard</forename><forename type="middle">S</forename><surname>Sutton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><forename type="middle">G</forename><surname>Barto</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Some Aspects of the Sequential Design of Experiments</title>
		<author>
			<persName><forename type="first">Herbert</forename><surname>Robbins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Herbert Robbins Selected Papers</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1985">1985</date>
			<biblScope unit="page" from="169" to="177" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Emotion-Aware Recommender Systems – a Framework and a Case Study</title>
		<author>
			<persName><forename type="first">Marko</forename><surname>Tkalčič</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Urban</forename><surname>Burnik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ante</forename><surname>Odić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrej</forename><surname>Košir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jurij</forename><surname>Tasič</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICT Innovations</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="141" to="150" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Affective Labeling in a Content-Based Recommender System for Images</title>
		<author>
			<persName><forename type="first">Marko</forename><surname>Tkalčič</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ante</forename><surname>Odić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrej</forename><surname>Košir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jurij</forename><surname>Tasič</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Multimedia</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="391" to="400" />
			<date type="published" when="2013-02">Feb. 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Video Classification and Recommendation Based on Affective Analysis of Viewers</title>
		<author>
			<persName><forename type="first">Sicheng</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hongxun</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaoshuai</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">119</biblScope>
			<biblScope unit="page" from="101" to="110" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
