<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Impact of implicit and explicit affective labeling on a recommender system&apos;s performance</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Marko</forename><surname>Tkalčič</surname></persName>
							<email>marko.tkalcic@fe.uni-lj.si</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of electrical engineering</orgName>
								<orgName type="institution">University of Ljubljana</orgName>
								<address>
									<addrLine>Tržaška 25</addrLine>
									<postCode>1000</postCode>
									<settlement>Ljubljana</settlement>
									<country>Slovenia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ante</forename><surname>Odić</surname></persName>
							<email>ante.odic@fe.uni-lj.si</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of electrical engineering</orgName>
								<orgName type="institution">University of Ljubljana</orgName>
								<address>
									<addrLine>Tržaška 25</addrLine>
									<postCode>1000</postCode>
									<settlement>Ljubljana</settlement>
									<country>Slovenia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrej</forename><surname>Košir</surname></persName>
							<email>andrej.kosir@fe.uni-lj.si</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of electrical engineering</orgName>
								<orgName type="institution">University of Ljubljana</orgName>
								<address>
									<addrLine>Tržaška 25</addrLine>
									<postCode>1000</postCode>
									<settlement>Ljubljana</settlement>
									<country>Slovenia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jurij</forename><surname>Tasič</surname></persName>
							<email>jurij.tasic@fe.uni-lj.si</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of electrical engineering</orgName>
								<orgName type="institution">University of Ljubljana</orgName>
								<address>
									<addrLine>Tržaška 25</addrLine>
									<postCode>1000</postCode>
									<settlement>Ljubljana</settlement>
									<country>Slovenia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Impact of implicit and explicit affective labeling on a recommender system&apos;s performance</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">DE6FFDB295B9482CE1CEE83D0C89F462</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T05:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>content-based recommender system</term>
					<term>affective labeling</term>
					<term>emotion detection</term>
					<term>facial expressions</term>
					<term>affective user modeling</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Affective labeling of multimedia content can be useful in recommender systems. In this paper we compare the effect of implicit and explicit affective labeling in an image recommender system. The implicit affective labeling method is based on an emotion detection technique that takes as input the video sequences of the users' facial expressions. It extracts Gabor low level features from the video frames and employs a kNN machine learning technique to generate affective labels in the valence-arousal-dominance space. We performed a comparative study of the performance of a content-based recommender (CBR) system for images that uses three types of metadata to model the users and the items: (i) generic metadata, (ii) explicitly acquired affective labels and (iii) implicitly acquired affective labels with the proposed methodology. The results showed that the CBR performs best when explicit labels are used. However, implicitly acquired labels yield a significantly better performance of the CBR than generic metadata while being an unobtrusive feedback tool.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Recently, investigations, that evaluate the use of affective metadata (AM) in content-based recommender (CBR) systems, were carried out <ref type="bibr" target="#b1">[Arapakis et al., 2009</ref><ref type="bibr">, Tkalčič et al., 2010a]</ref> and showed an increase of the accuracy of recommended items. This improvement of CBR systems that use affective metadata over systems that use generic metadata (GM), like the genre, represents the motivation for the work presented in this paper. Such systems require that the content items are labeled with affective metadata which can be done in two ways: (i) explicitly (i.e. asking the user to give an explicit affective label for the observed item) or (ii) implicitly (i.e. automatically detecting the user's emotive response).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Problem statement and proposed solution</head><p>Each of the two approaches for affective labeling, explicit and implicit, has its pros and cons. The explicit approach provides unambiguous labels but <ref type="bibr" target="#b12">Pantic and Vinciarelli [2009]</ref> argue that the truthfulness of such labels is questionable as users can be driven by different motives (egoistic labeling, reputation-driven labeling and asocial labeling). Another drawback of the explicit labeling approach is the intrusiveness of the process. On the other hand implicit affective labeling is completely unobtrusive and harder to be cheated by the user. Unfortunately the accuracy of the algorithms that detect affective responses might be too low and thus yield ambiguous/inaccurate labels.</p><p>Given the advantages of implicit labeling over explicit there is a need to assess the impact of the low emotion detection accuracy on the performance of recommender systems.</p><p>In this paper we compare the performance of a CBR system using explicit affective labeling vs. the proposed implicit affective labeling. The baseline results of the CBR with explicit affective labeling are those published in <ref type="bibr">Tkalčič et al. [2010a]</ref>. The comparative results of the implicit affective labeling are obtained using the same CBR procedure as in <ref type="bibr">Tkalčič et al. [2010a]</ref>, the same user interaction dataset <ref type="bibr" target="#b18">[Tkalčič et al., 2010c]</ref> but with affective labels acquired implicitly.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Related work</head><p>As anticipated by <ref type="bibr" target="#b12">Pantic and Vinciarelli [2009]</ref>, affective labels are supposed to be useful in content retrieval applications. Work related to this paper is divided in (i) the acquisition of affective labels and (ii) the usage of affective labels.</p><p>The acquisition of explicit affective labels is usually performed through an application with a graphical user interface (GUI) where users consume the multimedia content and provide appropriate labels. An example of such an application is the one developed by <ref type="bibr" target="#b4">Eckhardt and Picard [2009]</ref>.</p><p>On the other hand, the acquisition of implicit affective labels is usually reduced to the problem of non-intrusive emotion detection. Various modalities are used, such as video of users' faces, voice or physiological sensors (heartbeat, galvanic skin response etc.) <ref type="bibr" target="#b13">[Picard and Daily, 2005]</ref>. A good overview of such methods is given in <ref type="bibr" target="#b21">Zeng et al. [2009]</ref>. In our work we use implicit affective labeling from videos of users' faces. Generally, the approach taken in related work in automatic detection of emotions from video clips of users' faces is composed of three stages: (i) pre-processing, (ii) low level features extraction and (iii) classification. Related work differ mostly in the last two stages. <ref type="bibr" target="#b2">Bartlett et al. [2006]</ref>, <ref type="bibr" target="#b20">Wang and Guan [2008]</ref>, <ref type="bibr" target="#b22">Zhi and Ruan [2008]</ref> used Gabor wavelets based features for emotion detection. Beside these, which are mostly used, <ref type="bibr" target="#b22">Zhi and Ruan [2008]</ref> report the usage of other facial features in related work: active appearance models (AAM), action units, various facial points and motion units, Haar based features and textures. Various classification schemes were used successfully in video emotion detection. <ref type="bibr" target="#b2">Bartlett et al. [2006]</ref> employed both the Support Vector Machine (SVM) and AdaBoost classifiers. <ref type="bibr" target="#b22">Zhi and Ruan [2008]</ref> used the knearest neighbours (k-NN) algorithm. Before using the classifier they performed a dimensionality reduction step using the locality preserving projection (LPP) technique. In their work, <ref type="bibr" target="#b20">Wang and Guan [2008]</ref> compared four classifiers: the Gaussian Mixture Model (GMM), the k-NN, neural networks (NN) and Fisher's Linear Discriminant Analysis (FLDA). The latter turned out to yield the best performance. The survey <ref type="bibr" target="#b21">Zeng et al. [2009]</ref> reports the use of other classifiers like the C4.5, Bayes Net and rule based classifiers. <ref type="bibr" target="#b7">Joho et al. [2009]</ref> used an emotion detection techique that uses video sequences of users' face expressions to provide affective labels for video content.</p><p>Another approach is to extract affective labels directly from the content itself, without observing the users. <ref type="bibr" target="#b5">Hanjalic and Xu [2005]</ref> used low level features extracted from the audio track of video clips to identify moments in video sequences that induce high arousal in viewers.</p><p>In contrast to emotion detection techniques the usage of affective labels for information retrieval has only recently started to gain attention. 
<ref type="bibr" target="#b3">Chen et al. [2008]</ref> developed the EmoPlayer which has a similar user interface to the tool developed by <ref type="bibr" target="#b4">Eckhardt and Picard [2009]</ref> but with a reversed functionality: it assists users to find specific scenes in a video sequence. <ref type="bibr" target="#b15">Soleymani et al. [2009]</ref> built a collaborative filtering system that retrieves video clips based on affective queries. Similarly, but for music content, <ref type="bibr" target="#b14">Shan et al. [2009]</ref> have developed a system that performs emotion based queries. <ref type="bibr" target="#b1">Arapakis et al. [2009]</ref> built a complete video recommender system that detects the users' affective state and provides recommended content. <ref type="bibr" target="#b8">Kierkels and Pun [2009]</ref> used physiological sensors (ECG and EEG) to implicitly detect the emotive responses of users. Based on implicit affective labels they observed an increase of content retrieval accuracy compared to explicit affective labels. <ref type="bibr">Tkalčič et al. [2010a]</ref> have shown that the usage of affective labels significantly improves the performance of a recommender system over generic labels.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Affective modeling in CBR systems</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Emotions during multimedia items consumption</head><p>In a multimedia consumption scenario a user is watching multimedia content. During the consumption of multimedia content (images in our case), the emotive state of a user is continuously changing between different emotive states ǫ j ∈ E, as different visual stimuli h i ∈ H induce these emotions (see Fig. <ref type="figure">1</ref>). The facial expressions of the user are being continuously monitored by a video camera for the purpose of the automatic detection of the emotion expressions.</p><p>The detected emotion expressions of the users, along with the ratings given to the content items, can be used in two ways: (i) to model the multimedia content item (e.g. the multimedia item h i is funny -it induces laughter in most of the viewers) and (ii) to model individual users (e.g. the user u likes images that induce fear).</p><formula xml:id="formula_0">ǫ N ǫ 1 ǫ 2 ǫ 3 ǫ 4 t E t(h 1 ) t(h 2 ) t(h 3 ) t(h 4 ) t T</formula><p>Fig. <ref type="figure">1</ref>: The user's emotional state ǫ is continuously changing as the the time sequence of the visual stimuli h i ∈ H induce different emotions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Affective modeling in a CBR system</head><p>Item modeling with affective metadata We use the valence-arousal-dominance (VAD) emotive space for describing the users' emotive reactions to images. In the VAD space each emotive state is described by three parameters, namely valence, arousal and dominance. A single user u ∈ U consumes one or more content items (images) h ∈ H. As a consequence of the image h being a visual stimulus, the user u experiences an emotive response which we denote as er(u, h) = (v, a, d) where v, a and d are scalar values that represent the valence, arousal and dominance dimensions of the emotive response er. The set of users that have watched a single item h are denoted with U h . The emotive responses of all users U h , that have watched the item h form the set ER h = {er(u, h) : u ∈ U h }. We model the image h with the item profile that is composed of the first two statistical moments of the VAD values from the emotive responses ER h which yields the six tuple</p><formula xml:id="formula_1">V = (v, σ v , ā, σ a , d, σ d ) (1)</formula><p>where v, ā and d represent the average VAD values and σ v , σ a and σ d represent the standard deviations of the VAD values for the observed content item h. An example of the affective item profile is shown in Tab. 1.</p><p>User modeling with affective metadata The preferences of the user are modeled based on the explicit ratings that she/he has given to the consumed items. The observed user u rates each viewed item either as relevant or nonrelevant. A machine learning (ML) algorithm is trained to separate relevant from non-relevant items using the affective metadata in the item profiles as features and the binary ratings (relevant/non-relevant) as classes. The user profile up(u) of the observed user u is thus an ML algorithm dependent data structure. Fig. <ref type="figure" target="#fig_0">2</ref> shows an example of a user profile when the tree classifier C4.5 is being used.  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Experiment</head><p>We used our implementation of an emotion detection algorithm (see <ref type="bibr">Tkalčič et al. [2010b]</ref>) for implicit affective labeling and we compared the performance of the CBR system that uses explicit vs. implicit affective labels.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Overview of the emotion detection algorithm for implicit affective labeling</head><p>The emotion detection procedure used to give affective labels to the content images involved three stages: (i) pre-processing, (ii) low level feature extraction and (iii) emotion detection. We formalized the procedure with the mappings</p><formula xml:id="formula_2">I → Ψ → E (2)</formula><p>where I represents the frame from the video stream, Ψ represents the low level features corresponding to the frame I and E represents the emotion corresponding to the frame I.</p><p>In the pre-processing stage we extracted and registered the faces from the video frames to allow precise low level feature extraction. We used the eye tracker developed by <ref type="bibr" target="#b19">Valenti et al. [2009]</ref> to extract the locations of the eyes. The detection of emotions from frames in a video stream was performed by comparing the current video frame I t of the user's face to a neutral face expression. As the LDOS-PerAff-1 database is an ongoing video stream of users consuming different images we averaged all the frames to get the neutral frame. This method is applicable when we have a non supervised video stream of a user with different face expressions.</p><p>The low level features used in the proposed method were drawn from the images filtered by a Gabor filter bank. We used a bank of Gabor filters of 6 different orientation and 4 different spatial sub-bands which yielded a total of 24 Gabor filtered images per frame. The final feature vector had the total length of 240 elements.</p><p>The emotion detection was done by a k-NN algorithm after performing dimensionality reduction using the principal component analysis (PCA).</p><p>Each frame from the LDOS-PerAff-1 dataset was labeled with a six tuple of the induced emotion V. The six tuple was composed of scalar values representing the first two statistical moments in the VAD space. However, for our purposes we opted for a coarser set of emotional classes ǫ ∈ E. We divided the whole VAD space into 8 subspaces by thresholding each of the three first statistical moments v, ā and d. We thus gained 8 rough classes. Among these, only 6 classes actually contained at least one item so we reduced the emotion detection problem to a classification into 6 distinct classes problem as shown in Tab. 2. Table <ref type="table">2</ref>: Division of the continuous VAD space into six distinct classes E = {ǫ 1 . . . ǫ 6 } with the respective centroid values.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Overview of the CBR procedure</head><p>Our scenario consisted in showing end users a set of still color images while observing their facial expressions with a camera. These videos were used for implicit affective labeling. The users were also asked to give explicit binary ratings to the images. They were instructed to select images for their computer wallpapers. The task of the recommender system was to select the relevant items for each user as accurate as possible. This task falls in the category find all good items for the recommender systems' tasks taxonomy proposed by <ref type="bibr" target="#b6">Herlocker et al. [2004]</ref>.</p><p>Figure <ref type="figure" target="#fig_2">3</ref> shows the overview of the CBR experimental setup. After we collected the ratings and calculated the affective labels for the item profiles, we trained the user profiles with four different machine learning algorithms: the SVM, NaiveBayes, AdaBoost and C4.5. We split the dataset in the train and test sets using the ten-fold cross validation technique. We then performed ten training/classifying iterations which yielded the confusion matrices that we used to assess the performance of the CBR system.  The set of images h ∈ H that the users were consuming, had a twofold meaning: (i) they were used as content items and (ii) they were used as emotion induction stimuli for the affective labeling algorithm. We used a subset of 70 images from the IAPS dataset <ref type="bibr" target="#b9">Lang et al. [2005]</ref>. The IAPS dataset of images is annotated with the mean and standard deviations of the emotion responses in the VAD space which was useful as the ground truth in the affective labeling part of the experiment.</p><p>The affective labeling algorithm described in Sec. 3.1 yielded rough classes in the VAD space. In order to build the affective item profiles we used the classes' centroid values (see Tab. 2) in the calculation of the first two statistical moments. We applied the procedure from Sec. 2.2.</p><p>We had 52 users taking part in our experiment (mean = 18.3 years, 15 males).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Affective CBR system evaluation methodology</head><p>The results of the CBR system were the confusion matrices of the classification procedure that mapped the images H into one of the two possible classes: relevant or non-relevant class. From the confusion matrices we calculated the recall, precision and F measure as defined in <ref type="bibr" target="#b6">Herlocker et al. [2004]</ref>.</p><p>We also compared the performances of the CBR system with three types of metadata: (i) generic metadata (genre and watching time as done by <ref type="bibr">Tkalčič et al. [2010a]</ref>), (ii) affective metadata given explicitly and (iii) affective metadata acquired implicitly with the proposed emotion detection algorithm. For that purpose we transferred the statistical testing of the confusion matrices into the testing for the equivalence of two estimated discrete probability distributions <ref type="bibr" target="#b10">[Lehman and Romano, 2005]</ref>. To test the equivalence of the underlying distributions we used the Pearson χ 2 test. In case of significant differences we used the scalar measures precision, recall and F measure to see which approach was significantly better.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Results</head><p>We compared the performance of the classification of items into relevant or non relevant through the confusion matrices in the following way: (i) Explicitly acquired affective metadata vs Implicitly acquired metadata, (ii) explicitly acquired metadata vs. generic metadata and (iii) implicitly acquired metadata vs. generic metadata. In all three cases the p value was p &lt; 0.01. Table <ref type="table" target="#tab_3">3</ref> shows the scalar measures precision, recall and F measures for all three approaches.  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Discussion</head><p>As we already reported in <ref type="bibr">Tkalčič et al. [2010b]</ref>, the application of the emotion detection algorithm on spontaneous face expression videos has a low performance. We identified three main reasons for that: (i) weak supervision in learning, (ii) non-optimal video acquisition and (iii) non-extreme facial expressions. In supervised learning techniques there is ground truth reference data to which we compare our model. In the induced emotion experiment the ground truth data is weak because we did not verify whether the emotive response of the user equals to the predicted induced emotive response.</p><p>Second, the acquisition of video of users' expressions in real applications takes place in less controlled environments. The users change their position during the session. This results in head orientation changes, size of the face changes and changes of camera focus. All these changes require a precise face tracker that allows for fine face registration. Further difficulties are brought by various face occlusions and changing lighting conditions (e.g. a light can be turned on or off, the position of the curtains can be changed etc.) which confuse the face tracker. It is important that the face registration is done in a precisely manner to allow the detection of changes in the same areas of the face.</p><p>The third reason why the accuracy drops is the fact that face expressions in spontaneous videos are less extreme than in posed videos. As a consequence the changes on the faces are less visible and are hidden in the overall noise of the face changes. The dynamics of face expressions depend on the emotion amplitude as well as on the subjects' individual differences.</p><p>The comparison of the performance of the CBR with explicit vs. implicit affective labeling shows significant differences regardless of the ML technique employed to predict the ratings. The explicit labeling yields superior CBR performance than the implicit labeling. However, another comparison, that between the implicitly acquired affective labels and generic metadata (genre and watching time) shows that the CBR with implicit affective labels is significantly better than the CBR with generic metadata only. Although not as good as explicit labeling, the presented implicit labeling technique brings additional value to the CBR system used.</p><p>The usage of affective labels is not present in state-of-the-art commercial recommender systems, to the best of the authors' knowledge. The presented approach allows to upgrade an existing CBR system by adding the unobtrusive video acquisition of users' emotive responses. The results showed that the inclusion of affective metadata, although acquired with a not-so-perfect emotion detection algorithm, significantly improves the quality of the selection of recommended items. In other words, although there is a lot of noise in the affective labels acquired with the proposed method, these labels still describe more variance in users' preferences than the generic metadata used in state-of-the-art recommender systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Pending issues and future work</head><p>The usage of affective labels in recommender systems has not reached a production level yet. There are several open issues that need to be addressed in the future.</p><p>The presented work was verified on a sample of 52 users of a narrow age and social segment and on 70 images as content items. The sample size is not big but it is in line with sample sizes used in related work <ref type="bibr" target="#b1">[Arapakis et al., 2009</ref><ref type="bibr">, Joho et al., 2009</ref><ref type="bibr" target="#b8">, Kierkels and Pun, 2009]</ref>. Although we correctly used the statistical tests and verified the conditions before applying the tests a repetition of the experiment on a larger sample of users and content items would increase the strength of the results reported.</p><p>Another aspect of the sample size issue is the impact of the size on the ML techniques used. The sample size in the emotion detection algorithm (the kNN classifier) is not problematic. It is, however, questionable the sample size used in the CBR. In the ten fold cross validation scheme we used 63 items for training the model and seven for testing. Although it appears that this is small, a comparison with other recommender system reveals that this is a common issue, and is usually referred as the sparsity problem. It occurs when, even if there are lots of users and lots of items, each user usually rated only few items and there are few data to build the models upon <ref type="bibr">[Adomavicius and Tuzhilin, 2005]</ref>.</p><p>The presented work also lacks a further user satisfaction study. Besides just aiming at the prediction of user ratings for unseen items research should also focus on the users' satisfaction with the list of recommended items.</p><p>But the most important thing to do in the future is to improve the emotion detection algorithms used for implicit affective labeling. In the ideal case, the perfect emotion detection algorithm would yield CBR performance that is identical to the CBR performance with explicit labeling.</p><p>The acquisition of video of users raises also privacy issues that need to be addressed before such a system can go in production.</p><p>Last, but not least, we believe that implicit affective labeling should be complemented with context modeling to provide better predictions of users' preferences. In fact, emotional responses of users and their tendencies to seek one kind of emotion over another, is tightly connected with the context where the items are consumed. Several investigations started to explore the influence of various contextual parameters, like being alone or being in company, on the users' preferences <ref type="bibr">[Adomavicius et al., 2005</ref><ref type="bibr">, Odić et al., 2010]</ref>. We will include this information in our future affective user models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion</head><p>We performed a comparative study of a CBR system for images that uses three types of metadata: (i) explicit affective labels, (ii) implicit affective labels and (iii) generic metadata. Although the results showed that the explicit labels yielded better recommendations than implicit labels, the proposed approach significantly improves the CBR performance over generic metadata. Because the approach is unobtrusive it is feasible to upgrade existing CBR systems with the proposed solution. The presented implicit labeling technique takes as input video sequences of users' facial expressions and yields affective labels in the VAD emotive space. We used Gabor filtering based low level features, PCA for dimensionality reduction and the kNN classifier for affective labeling.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 2 :</head><label>2</label><figDesc>Fig.2: Example of a user profile when the C4.5 tree classifier is used for inferring the user's preferences. The labels C 0 and C 1 represent the relevant and nonrelevant classes, respectively.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>0 ā &lt; 0 d &lt; 0 0.5 −0.5 −0.5 ǫ2 v &lt; 0 ā &gt; 0 d &lt; 0 −0.5 0.5 −0.5 ǫ3 v &gt; 0 ā &gt; 0 d &lt; 0 0.5 0.5 −0.5 ǫ4 v &lt; 0 ā &lt; 0 d &gt; 0 −0.5 −0.5 0.5 ǫ5 v &gt; 0 ā &lt; 0 d &gt; 0 0.5 −0.5 0.5 ǫ6 v &gt; 0 ā &gt; 0 d &gt; 0 0.5 0.5 0.5</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 :</head><label>3</label><figDesc>Fig. 3: Overview of the CBR experiment.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Example of an affective item profile V (first two statistical moments of the induced emotion values v, a and d) .</figDesc><table><row><cell cols="2">Metadata</cell><cell>Value</cell></row><row><cell>field</cell><cell></cell></row><row><cell>v</cell><cell></cell><cell>3.12</cell></row><row><cell>σv</cell><cell></cell><cell>1.13</cell></row><row><cell>ā</cell><cell></cell><cell>4.76</cell></row><row><cell>σa d</cell><cell></cell><cell>0.34 6.28</cell></row><row><cell>σ d</cell><cell></cell><cell>1.31</cell></row><row><cell></cell><cell>Valence</cell></row><row><cell></cell><cell>mean</cell></row><row><cell cols="2">&lt;=4.23</cell><cell>&gt;4.23 &gt;4.23</cell></row><row><cell>Class = C 0</cell><cell></cell><cell>Valence mean</cell></row><row><cell></cell><cell></cell><cell>&lt;=6.71</cell><cell>&gt;6.71 &gt;6.71</cell></row><row><cell></cell><cell>Dominance mean</cell><cell>Class = C 1</cell></row><row><cell>&lt;=5.92 &lt;=5.92</cell><cell></cell><cell>&gt;5.92 &gt;5.92</cell></row><row><cell>Valence</cell><cell></cell></row><row><cell>mean</cell><cell></cell><cell>Class = C 0</cell></row><row><cell>&lt;=5.21 &lt;=5.21</cell><cell>&lt;=5.21 &lt;=5.21</cell></row><row><cell>Class = C 1</cell><cell>Class = C 0</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 :</head><label>3</label><figDesc>The scalar measures P , R, F for the CBR system</figDesc><table /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgement</head><p>This work was partially funded by the European Commission within the FP6 IST grant number FP6-27312 and partially by the Slovenian Research Agency ARRS. All statements in this work reflect the personal ideas and opinions of the authors and not necessarily the opinions of the EC or ARRS.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions</title>
		<author>
			<persName><forename type="first">G</forename><surname>Adomavicius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tuzhilin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="734" to="749" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Integrating facial expressions into user profiling for the improvement of a multimodal recommender system</title>
		<author>
			<persName><forename type="first">G</forename><surname>Adomavicius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sankaranarayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tuzhilin ; Arapakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><surname>Joho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hannah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Jose</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Gar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">-</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. IEEE Int&apos;l Conf. Multimedia &amp; Expo</title>
				<meeting>IEEE Int&apos;l Conf. Multimedia &amp; Expo</meeting>
		<imprint>
			<date type="published" when="2005">2005. 2009</date>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="1440" to="1443" />
		</imprint>
	</monogr>
	<note>Incorporating contextual information in recommender systems using a multidimensional approach</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Automatic recognition of facial actions in spontaneous expressions</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Bartlett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">C</forename><surname>Littlewort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">G</forename><surname>Frank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lainscsek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Fasel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Movellan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Multimedia</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="22" to="35" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Emoplayer: A media player for video clips with affective annotations</title>
		<author>
			<persName><forename type="first">Ling</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gen-Cai</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cheng-Zhe</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jack</forename><surname>March</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steve</forename><surname>Benford</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.intcom.2007.06.003</idno>
		<ptr target="http://dx.doi.org/10.1016/j.intcom.2007.06.003" />
	</analytic>
	<monogr>
		<title level="j">Interacting with Computers</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="17" to="28" />
			<date type="published" when="2008-01">January 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A more effective way to label affective expressions</title>
		<author>
			<persName><forename type="first">Micah</forename><surname>Eckhardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rosalind</forename><surname>Picard</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACII.2009.5349528</idno>
		<ptr target="http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5349528" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Affective Computing and Intelligent Interaction and Workshops</title>
				<imprint>
			<date type="published" when="2009-09">2009. September 2009</date>
			<biblScope unit="page" from="1" to="2" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Affective video content representation and modeling</title>
		<author>
			<persName><forename type="first">Alan</forename><surname>Hanjalic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Li-Qun</forename><surname>Xu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Multimedia</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="143" to="154" />
			<date type="published" when="2005-02">February 2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Evaluating collaborative filtering recommender systems</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Herlocker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Konstan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">G</forename><surname>Terveen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Riedl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Information Systems</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">53</biblScope>
			<date type="published" when="2004-01">January 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Exploiting facial expressions for affective video summarisation</title>
		<author>
			<persName><forename type="first">H</forename><surname>Joho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Jose</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Valenti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sebe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceeding of the ACM International Conference on Image and Video Retrieval</title>
				<meeting>eeding of the ACM International Conference on Image and Video Retrieval</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Simultaneous exploitation of explicit and implicit tags in affect-based multimedia retrieval</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J M</forename><surname>Kierkels</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Affective Computing and Intelligent Interaction and Workshops</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2009">2009. 2009</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
	<note>ACII 2009. 3rd International Conference on</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Lang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Bradley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">N</forename><surname>Cuthbert</surname></persName>
		</author>
		<title level="m">International affective picture system (iaps): Affective ratings of pictures and instruction manual</title>
				<meeting><address><addrLine>Gainesville, FL</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
		<respStmt>
			<orgName>University of Florida</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Testing Statistical Hypotheses</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">L</forename><surname>Lehman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Romano</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<publisher>Springer Science+Business Media</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Open issues with contextual information in existing recommender system databases</title>
		<author>
			<persName><forename type="first">Ante</forename><surname>Odić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matevž</forename><surname>Kunaver</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jurij</forename><surname>Tasič</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrej</forename><surname>Košir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE ERK</title>
				<meeting>the IEEE ERK</meeting>
		<imprint>
			<date type="published" when="2010-09">2010. September 2010</date>
			<biblScope unit="page" from="217" to="220" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Implicit Human-Centered Tagging</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pantic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vinciarelli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Signal Processing Magazine</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="173" to="180" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Evaluating affective interactions: Alternatives to asking what users feel</title>
		<author>
			<persName><forename type="first">Rosalind</forename><surname>Picard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shaundra Briant</forename><surname>Daily</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CHI Workshop on Evaluating Affective Interfaces: Innovative Approaches</title>
				<meeting><address><addrLine>Portland, OR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005-04">April 2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Emotionbased music recommendation by affinity discovery from film music</title>
		<author>
			<persName><forename type="first">Man-Kwan</forename><surname>Shan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fang-Fei</forename><surname>Kuo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Meng-Fen</forename><surname>Chiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Suh-Yin</forename><surname>Lee</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.eswa.2008.09.042</idno>
		<ptr target="http://dx.doi.org/10.1016/j.eswa.2008.09.042" />
	</analytic>
	<monogr>
		<title level="j">Expert Syst. Appl</title>
		<idno type="ISSN">0957-4174</idno>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="7666" to="7674" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A collaborative personalized affective video retrieval system</title>
		<author>
			<persName><forename type="first">Mohammad</forename><surname>Soleymani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeremy</forename><surname>Davis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thierry</forename><surname>Pun</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACII.2009.5349526</idno>
		<ptr target="http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5349526" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Affective Computing and Intelligent Interaction and Workshops</title>
				<imprint>
			<date type="published" when="2009-09">2009. September 2009</date>
			<biblScope unit="page" from="1" to="2" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Using affective parameters in a content-based recommender system. User Modeling and User-Adapted Interaction</title>
		<author>
			<persName><forename type="first">Marko</forename><surname>Tkalčič</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Urban</forename><surname>Burnik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrej</forename><surname>Košir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Journal of Personalization Research</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">2010</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Comparison of an emotion detection technique on posed and spontaneous datasets</title>
		<author>
			<persName><forename type="first">Marko</forename><surname>Tkalčič</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ante</forename><surname>Odić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrej</forename><surname>Košir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jurij</forename><surname>Tasič</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE ERK</title>
				<meeting>the IEEE ERK</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page">2010</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">The LDOS-PerAff-1 Corpus of Face Video Clips with Affective and Personality Metadata</title>
		<author>
			<persName><forename type="first">Marko</forename><surname>Tkalčič</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jurij</forename><surname>Tasič</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrej</forename><surname>Košir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the LREC 2010 Workshop on Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality</title>
				<editor>
			<persName><forename type="first">Michael</forename><surname>Kipp</surname></persName>
		</editor>
		<meeting>the LREC 2010 Workshop on Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality</meeting>
		<imprint>
			<date type="published" when="2010">2010c</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Robustifying eye center localization by head pose cues</title>
		<author>
			<persName><forename type="first">R</forename><surname>Valenti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yucel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gevers</surname></persName>
		</author>
		<ptr target="http://www.science.uva.nl/research/publications/2009/ValentiCVPR2009" />
	</analytic>
	<monogr>
		<title level="m">IEEE Conference on Computer Vision and Pattern Recognition</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Recognizing human emotional state from audiovisual signals</title>
		<author>
			<persName><forename type="first">Yongjin</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ling</forename><surname>Guan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactionson multimedia</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="936" to="946" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">A survey of affect recognition methods: Audio, visual, and spontaneous expressions. Pattern Analysis and Machine Intelligence</title>
		<author>
			<persName><forename type="first">Zhihong</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maja</forename><surname>Pantic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Glenn</forename><forename type="middle">I</forename><surname>Roisman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><forename type="middle">S</forename><surname>Huang</surname></persName>
		</author>
		<idno type="DOI">10.1109/TPAMI.2008.52</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on</title>
		<idno type="ISSN">0162-8828</idno>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="39" to="58" />
			<date type="published" when="2009-01">Jan. 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Facial expression recognition based on two-dimensional discriminant locality preserving projections</title>
		<author>
			<persName><forename type="first">R</forename><surname>Zhi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Ruan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">71</biblScope>
			<biblScope unit="issue">7-9</biblScope>
			<biblScope unit="page" from="1730" to="1734" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
