<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Classifeye: Classification of Personal Characteristics Based on Eye Tracking Data in a Recommender System Interface</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Martijn</forename><surname>Millecamp</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">KU Leuven</orgName>
								<address>
									<addrLine>Celestijnenlaan 200A bus 2402</addrLine>
									<settlement>Leuven</settlement>
									<country key="BE">Belgium</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Cristina</forename><surname>Conati</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">ICICS</orgName>
								<address>
									<addrLine>CS 107, 2366 Main Mall</addrLine>
									<settlement>Vancouver</settlement>
									<region>BC</region>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Katrien</forename><surname>Verbert</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">KU Leuven</orgName>
								<address>
									<addrLine>Celestijnenlaan 200A bus 2402</addrLine>
									<settlement>Leuven</settlement>
									<country key="BE">Belgium</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Classifeye: Classification of Personal Characteristics Based on Eye Tracking Data in a Recommender System Interface</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">E7BC2CCD09D617DAA2EFAA262BD8555D</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T08:14+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>eye tracking</term>
					<term>classification</term>
					<term>recommender system</term>
					<term>openness</term>
					<term>need for cognition</term>
					<term>musical sophistication</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Due to the increasing importance of recommender systems in our lives, the call to make these systems more transparent is becoming louder. However, providing explanations is not as easy as it seems, as research has shown that different users react differently to explanations. So not only the recommendations, but also the explanations should be personalised. As a first step towards such personalised explanations, we explore the possibility of classifying users based on their gaze patterns during the interaction with a music recommender system. More specifically, we classify three personal characteristics that have been shown to play a role in the interaction with music recommendations: need for cognition, openness and musical sophistication. Our results show that classification based on eye tracking has potential for need for cognition and openness, as we are able to do better than random, but not for musical sophistication, as no classifier did better than a uniform random baseline.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><note>HUMANIZE: Joint Proceedings of the ACM IUI 2021 Workshops, April 13-17, 2021, College Station, USA. martijn.millecamp@kuleuven.be (M. Millecamp); conati@cs.ubc.ca (C. Conati); katrien.verbert@kuleuven.be (K. Verbert). ORCID: 0000-0002-5542-0067 (M. Millecamp); 0000-0002-8434-9335 (C. Conati); 0000-0001-6699-7710 (K. Verbert)</note><p>In the field of recommender systems (RS), researchers are increasingly aware that optimizing accuracy is not enough to reach the full potential of these systems <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. For example, users will not choose a recommended item unless they trust the system <ref type="bibr" target="#b2">[3]</ref>. One possible way to increase this trust is to provide explanations that reveal (a part of) the internal reasoning of the RS to the user <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>. In particular, the combination of explanations with control can help users not only to understand the RS, but also to steer it with input and feedback <ref type="bibr" target="#b4">[5]</ref>. Despite the increased interest in explanations for RS, it is still not clear how to implement them in practice: users react to explanations in varying ways, which shows the need to personalize explanations to the user <ref type="bibr" target="#b5">[6]</ref>.</p><p>However, before a system can adapt explanations to personal characteristics (PCs), it needs to be aware of the PCs of the user. These characteristics can be obtained by explicitly asking users to fill in questionnaires <ref type="bibr" target="#b6">[7]</ref> or by implicitly inferring them through an analysis of the user's social media <ref type="bibr" target="#b7">[8]</ref>. Nonetheless, asking users to fill in questionnaires or to give access to their social media is often not desirable. 
Moreover, to personalize explanations it is not necessary to obtain a fine-grained result: a classification into two categories suffices <ref type="bibr" target="#b8">[9]</ref>.</p><p>For this reason, we explore in this paper whether it is possible to classify users' personality traits by analyzing their gaze during the interaction with a music RS with explanations. We focus on three PCs: openness, need for cognition (NFC) and musical sophistication (MS) <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>. These PCs are explained in detail in Section 2.</p><p>Openness is one of the Big Five personality traits and measures how open a person is to new experiences. Millecamp et al. <ref type="bibr" target="#b8">[9]</ref> showed a significant difference in gaze patterns between low and high openness users, which is why we hypothesize that classifying openness based on gaze might be possible.</p><p>Similarly, we hypothesize that inferring MS, a measure of domain knowledge in the music domain, from gaze data might be possible, as the study of Millecamp et al. <ref type="bibr" target="#b8">[9]</ref> also found significant differences in gaze patterns between low and high MS users.</p><p>NFC is a cognitive style that influences the way a person prefers to process, and thus looks at, information. Previous studies have shown that NFC moderates the perception of explanations in a music recommender system, which motivated us to explore whether inferring NFC from gaze would be possible.</p><p>In addition to the overall accuracy, we also want to explore how much data we need to infer these PCs.</p><p>The contribution of this paper is twofold. First, to our knowledge, we are the first to explore whether it is possible to infer PCs during the interaction with an RS in the presence of explanations. 
Second, we make the gathered dataset publicly available to support the research in this area. This dataset is unique because it provides both gaze data and data about PCs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related work</head><p>With the increasing role of RS in our daily lives, the call for explainable, transparent RS also becomes louder, so that users can make better-informed decisions about whether or not to follow the recommendations <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b5">6]</ref>. In combination with controls, this transparency also enables users to correct the RS whenever they feel it makes wrong assumptions <ref type="bibr" target="#b4">[5]</ref>. However, research has shown that different users have different reactions to explanations <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref>. In the field of music RS, recent research has shown that there are three PCs that could influence the way users perceive explanations: openness, NFC and MS <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b8">9]</ref>.</p><p>Openness is one of the five factors of the Five Factor Model, also known as the Big 5 model <ref type="bibr" target="#b13">[14]</ref>. This model describes personality in five different traits and has been used in several studies that showed the positive impact of considering personality in RS <ref type="bibr" target="#b14">[15]</ref>. The factor openness describes the breadth, depth and complexity of an individual's mental and experiential life <ref type="bibr" target="#b15">[16]</ref>. 
It has been shown that openness is related to the preferred amount of diversity in RS and to the willingness to use a system with explanations <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b8">9]</ref>.</p><p>Need for cognition has been shown to influence the success of an RS <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b19">20]</ref> and is defined as "a measure of the tendency for an individual to engage in, and enjoy, effortful cognitive activities" <ref type="bibr" target="#b20">[21]</ref>. NFC has been shown to have an impact on the willingness of users to rely on an RS <ref type="bibr" target="#b11">[12]</ref>, on the confidence in a playlist created in a music RS with explanations <ref type="bibr" target="#b9">[10]</ref>, on preference matching <ref type="bibr" target="#b21">[22]</ref>, on the style of explanations users prefer <ref type="bibr" target="#b12">[13]</ref> and on the reason why users need a transparent RS <ref type="bibr" target="#b22">[23]</ref>.</p><p>Musical sophistication is defined by Müllensiefen et al. <ref type="bibr" target="#b23">[24]</ref> as a concept to describe the multi-faceted nature of musical expertise. In the music domain, Millecamp et al. <ref type="bibr" target="#b8">[9]</ref> showed that users with high MS feel more supported to make a decision in an RS interface that provides explanations than in an interface without such explanations, while this made no difference for users with low MS. 
Another study showed that users with high domain experience perceive a higher diversity in a scatter plot than in a simpler bubble chart <ref type="bibr" target="#b24">[25]</ref>.</p><p>The most common way to acquire the PCs of users is to ask them to fill in a validated questionnaire <ref type="bibr" target="#b6">[7]</ref>, but other approaches also exist, such as inferring PCs by analyzing the social media of the user <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b7">8]</ref>, by analyzing a conversation with a chatbot <ref type="bibr" target="#b26">[27]</ref> or by analyzing physiological signals such as brain activity <ref type="bibr" target="#b27">[28]</ref> and gaze data <ref type="bibr" target="#b6">[7]</ref>.</p><p>The previously mentioned works rely on fine-grained personality scores. In contrast, our work focuses on adapting interfaces to users, for which we only need a classification into two groups. We aim to base this classification on the gaze pattern during the interaction with a music RS interface, instead of asking users to watch carefully selected stimuli, to fill in questionnaires or to share their social media profile. Previous studies that classified users based on their gaze pattern during normal activities have almost all focused on cognitive abilities and visualization experience <ref type="bibr" target="#b28">[29,</ref><ref type="bibr" target="#b29">30,</ref><ref type="bibr" target="#b30">31,</ref><ref type="bibr" target="#b31">32]</ref>. One exception is the study of Hoppe et al. <ref type="bibr" target="#b32">[33]</ref>, which inferred the Big Five personality traits from gaze recorded during a walk across a campus. This study differs from our work, as we investigate whether it is possible to infer PCs while interacting with a music RS and we also focus on different PCs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Data</head><p>The gaze data used in this study was generated in a user study by Millecamp et al. <ref type="bibr" target="#b8">[9]</ref>. We provide a brief summary of this experiment; a more elaborate description can be found in <ref type="bibr" target="#b8">[9]</ref>. As mentioned in Section 2, we focus in this study on openness, NFC and MS, as previous research has found that these PCs could affect the perception of explanations in a music RS <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref> and the study of Millecamp et al. <ref type="bibr" target="#b8">[9]</ref> already showed that openness and MS affect how the gaze pattern changes between an interface with and without explanations. To measure these three characteristics, users were asked to fill out three questionnaires before the experiment started. To measure openness, we used the 44-item Big Five Inventory <ref type="bibr" target="#b33">[34]</ref> and afterwards selected the questions related to openness. For NFC, we used the 18-item questionnaire of Cacioppo et al. <ref type="bibr" target="#b20">[21]</ref>, and for MS the Goldsmiths Musical Sophistication Index<ref type="foot" target="#foot_0">1</ref> was used. The dataset used in this study consists of the gaze data of 30 participants (21 male). For each of the three PCs, the participants were divided into a high and a low group based on a median split. This resulted in equally distributed groups for MS and NFC and almost equally distributed groups for openness (16 in the low and 14 in the high openness group). A short overview of the characteristics of the participants can be found in Table <ref type="table" target="#tab_0">1</ref>.</p><p>The gaze data was recorded with a Tobii 4C remote eye tracker at a sampling rate of 90 Hz. 
Each sample contained information about the focus point on the screen, denoted as an x and y coordinate, the distance between the participant and the screen, and the validity of these measures. To calibrate the eye tracker, the experiment started with the standard calibration procedure provided by the Tobii Core Software. After the calibration, users were asked to explore the interface of a music RS with feature-based explanations until they understood all functionalities. A screenshot of the interface is shown in Figure <ref type="figure" target="#fig_0">1</ref>.</p><p>As shown in Part A of this figure, users can first search for an artist they like through a search bar in the top left corner. When they add the artist, this artist is shown in Part B. Based on this artist, the system starts to generate recommendations, which are listed in a two-column format as shown in Part F. When users hover over the cover picture of a recommended song, they can click a play button to listen to a 30s preview of the song. On the right side of each explanation, they can click on the thumbs-up icon to add the song to their playlist. Through the sliders shown in Part D of Figure <ref type="figure" target="#fig_0">1</ref>, users can modify several audio features 2 such as popularity, energy and danceability, which are also taken into account in the recommendation process. To help users steer these sliders, the minimum and maximum of each audio feature are shown for each artist.</p><p>After users had explored all the options of the interface, the recording of the gaze started. As shown in Part E of Figure <ref type="figure" target="#fig_0">1</ref>, users were asked to create a playlist of five songs. To create this playlist, they could use all functionalities without any restriction. When they added the fifth song to their playlist, we stopped the recording of the gaze. On average, users took 4 minutes and 26 seconds to complete their playlist. 
As part of this paper's contribution, this data is publicly available 3 .</p><p>2 https://developer.spotify.com/documentation/ web-api/reference/tracks/get-audio-features/ 3 augment.cs.kuleuven.be/datasets/classifeye</p></div>
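The median split described in the Data section can be sketched as follows. This is a minimal sketch, not the authors' code; in particular, the tie-breaking rule (scores at the median go to the low group) is an assumption, as the paper does not state how ties were handled.

```python
import statistics

def median_split(scores):
    """Divide participants into a low and a high group by a median split
    on a questionnaire score (openness, NFC or MS), as in Section 3.

    scores: dict mapping participant id -> score.
    Assumption: participants scoring at or below the median go to the
    low group; the paper does not specify the tie-breaking rule.
    """
    med = statistics.median(scores.values())
    low = [p for p, s in scores.items() if s <= med]
    high = [p for p, s in scores.items() if s > med]
    return low, high
```

With 30 participants and an even split on the questionnaire scores this yields the balanced MS and NFC groups reported above; duplicated scores around the median explain the slightly unbalanced 16/14 split for openness.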
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Classifiers</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Features</head><p>The Tobii 4C does not come with software to detect fixations and saccades, so we identified fixations and saccades using an implementation of the I-DT algorithm <ref type="bibr" target="#b34">[35]</ref> with a dispersion threshold of one degree and a duration threshold of 100 ms <ref type="bibr" target="#b34">[35]</ref>. This means that in this study a fixation is identified as a circle on the screen on which the user keeps focusing for at least 100 ms without moving their eyes more than one degree. All other movements are then identified as saccades, i.e. quick movements of gaze from one fixation to another <ref type="bibr" target="#b29">[30]</ref>.</p><p>Based on these saccades and fixations, we generated the set of eye-tracking features listed in Table <ref type="table" target="#tab_1">2</ref>. Most of these features were selected because they are widely used in previous eye tracking studies <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b29">30,</ref><ref type="bibr" target="#b35">36]</ref>. In addition, we included Most frequent saccade direction and the fixations in a 4x4 heatmap, as the study of Hoppe et al. <ref type="bibr" target="#b32">[33]</ref> indicated that these features are important in the extraction of personality. We did not include features that contain explicit information about the content of the interface, so-called areas of interest (AOI), even though previous work has shown that these features could have more predictive power <ref type="bibr" target="#b29">[30]</ref>. The reason for this is that this information is already partially captured in a more general way by Most frequent saccade direction and the fixations in the 4x4 heatmap. Thus, at this stage we chose to investigate how far we can get with display-independent features, which also have the advantage of possibly being more generalizable to other interfaces.</p></div>
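The dispersion-threshold identification (I-DT) step above can be sketched as follows. This is a minimal sketch, not the implementation used in the paper: it assumes gaze samples arrive as (timestamp, x, y) tuples, and the default pixel threshold of 40 px is a placeholder for the one-degree visual angle, which in practice depends on screen resolution and viewing distance.

```python
def idt_fixations(samples, disp_thresh=40.0, min_dur=0.100):
    """Identify fixations with the I-DT algorithm.

    samples: list of (t, x, y) tuples, t in seconds, x/y in pixels.
    disp_thresh: max dispersion (max_x - min_x) + (max_y - min_y) in px;
                 40 px is an assumed stand-in for one degree of visual angle.
    min_dur: minimum fixation duration in seconds (100 ms in the paper).
    Returns a list of (start_t, end_t, centroid_x, centroid_y) fixations;
    samples outside fixations belong to saccades.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow a window from sample i until it spans the minimum duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_dur:
            j += 1
        if j >= n:
            break
        xs = [p[1] for p in samples[i:j + 1]]
        ys = [p[2] for p in samples[i:j + 1]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= disp_thresh:
            # Extend the window while dispersion stays under the threshold.
            while j + 1 < n:
                xs.append(samples[j + 1][1])
                ys.append(samples[j + 1][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > disp_thresh:
                    break
                j += 1
            window = samples[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((samples[i][0], samples[j][0], cx, cy))
            i = j + 1
        else:
            i += 1  # Slide the window one sample forward.
    return fixations
```

At the Tobii 4C's 90 Hz sampling rate, the 100 ms duration threshold corresponds to a window of roughly nine samples.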
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Data windows</head><p>To explore whether classification of the three PCs would be possible with only a partial amount of data, we generated three different data windows to simulate partial observations of gaze data during the task, similar to Steichen et al. <ref type="bibr" target="#b29">[30]</ref> and Conati et al. <ref type="bibr" target="#b30">[31]</ref>.</p><p>Each window consists of a partial observation of each participant based on relative duration: the first window consists of the first 30% of the data, the second window of the first 60% and the last window of the first 90%. Although this approach requires a task to be fully completed to determine what 100% of the data constitutes, it still provides valuable insights into trends and patterns in inferring PCs from gaze data <ref type="bibr" target="#b29">[30]</ref>. Each of these windows consists of three different measurements, and for each of these measurements the data was divided into ten segments of equal length. For each of these segments, we generated the mentioned set of eye-tracking features, resulting in a feature vector of 260 features for each measurement.</p><p>The reasoning behind creating these different datasets is to verify whether we would be able to adapt the RS interface to the needs of the user during the task. As such, we did not include a window with 100% of the data, as the adaptation would come too late. Additionally, previous research <ref type="bibr" target="#b29">[30]</ref> already showed that accuracy tends to converge, or even decrease, after a certain amount of data. In this study, we want to explore whether we would notice similar trends for different PCs.</p></div>
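The windowing scheme above can be sketched as follows. This is a sketch under stated assumptions: the `extract` callback stands in for the Table 2 feature computation, and dropping the few trailing samples that do not fill a complete segment is an assumption. One plausible accounting for the 260-feature vector is 10 scalar features plus 16 heatmap cells per segment, i.e. 26 features × 10 segments.

```python
def make_windows(samples, fractions=(0.3, 0.6, 0.9), n_segments=10):
    """Split one participant's gaze samples into partial-observation
    windows (the first 30/60/90% of the recording), each divided into
    n_segments segments of equal length, as described in Section 4.2.
    Trailing samples that do not fill a complete segment are dropped
    (an assumption; the paper does not specify this detail)."""
    windows = {}
    for frac in fractions:
        cut = int(len(samples) * frac)
        window = samples[:cut]
        seg_len = len(window) // n_segments
        windows[frac] = [window[k * seg_len:(k + 1) * seg_len]
                         for k in range(n_segments)]
    return windows

def feature_vector(segments, extract):
    """Concatenate per-segment features into one flat vector.

    extract: maps a segment to a fixed-length feature list, e.g. the
    Table 2 features (10 scalars + 16 heatmap cells = 26 per segment,
    giving 26 x 10 = 260 features per window)."""
    vec = []
    for seg in segments:
        vec.extend(extract(seg))
    return vec
```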
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Classification methods</head><p>To classify users into a low and a high category, we used scikit-learn to train five different classifiers and a baseline <ref type="bibr" target="#b36">[37]</ref>. To evaluate the performance of the classifiers, we applied a leave-one-out methodology. Because of this evaluation methodology and the uniform groups, we could not use the most common majority class baseline, which predicts the most likely class (this would lead to 0% accuracy) <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b30">31,</ref><ref type="bibr" target="#b32">33]</ref>. As a consequence, we chose a uniform random baseline, which has a theoretical accuracy of 50%. To classify the characteristics, we trained Logistic Regression, Random Forest, Gaussian Naive Bayes, Linear Support Vector Machines and Gradient Boosting. The reason for training all of these classifiers is that previous research shows no consensus about which classifier works best. Steichen et al. <ref type="bibr" target="#b29">[30]</ref> found that Logistic Regression performed better than Decision Trees, Support Vector Machines and Neural Networks. Lallé et al. <ref type="bibr" target="#b37">[38]</ref> and Hoppe et al. <ref type="bibr" target="#b32">[33]</ref> found that Random Forests worked best. However, Berkovsky et al. <ref type="bibr" target="#b6">[7]</ref> concluded that Naive Bayes and Support Vector Machines perform best. Additionally, Gradient Boosting performed well in the study of <ref type="bibr">Barral et al. [39]</ref>. Because of the small sample size, we chose not to use deep learning methods. For each of these classifiers, we tried to optimize the accuracy; the resulting parameters can be found in Table <ref type="table" target="#tab_2">3</ref>.</p><p>To strengthen the stability of the results, we ran this evaluation 10 times with different random seeds. 
We calculated the average accuracy over all participants and all runs to measure the performance of each classifier.</p></div>
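The evaluation loop above can be sketched with scikit-learn as follows. This is a sketch, not the authors' code: the hyperparameters are scikit-learn defaults rather than the tuned values in Table 3, and `loo_accuracy` is a hypothetical helper name.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneOut

def loo_accuracy(make_clf, X, y, n_runs=10, seed=0):
    """Leave-one-out accuracy averaged over n_runs random seeds,
    mirroring the evaluation in Section 4.3. Note that under LOO with
    balanced classes, a majority-class baseline scores 0%: the held-out
    participant's class is always the minority class in the training
    fold, hence the uniform random (50%) baseline."""
    rng = np.random.RandomState(seed)
    accs = []
    for _ in range(n_runs):
        run_seed = rng.randint(0, 2**31 - 1)
        correct = 0
        for train_idx, test_idx in LeaveOneOut().split(X):
            clf = make_clf(run_seed)  # fresh classifier per fold
            clf.fit(X[train_idx], y[train_idx])
            correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
        accs.append(correct / len(y))
    return float(np.mean(accs))

# The five classifiers from the paper (default hyperparameters here).
classifiers = {
    "LogReg": lambda s: LogisticRegression(max_iter=1000, random_state=s),
    "RandomForest": lambda s: RandomForestClassifier(random_state=s),
    "GaussianNB": lambda s: GaussianNB(),
    "LinearSVM": lambda s: LinearSVC(random_state=s),
    "GradientBoost": lambda s: GradientBoostingClassifier(random_state=s),
}
```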
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>To examine whether it is possible to classify users into the correct personality group and whether this classification works better on specific windows, we ran a two-way repeated measures ANOVA for each PC, with accuracy as the dependent variable and both classifier and window as independent variables. As we ran multiple ANOVAs and pairwise comparisons, the reported p-values are adjusted using the Benjamini and Hochberg procedure <ref type="bibr" target="#b39">[40]</ref> to control the false discovery rate. The main results of this analysis are shown in Figure <ref type="figure" target="#fig_2">2</ref>; we report the results for each of the PCs in detail in the next paragraphs.</p><p>Need for cognition. The two-way repeated measures ANOVA revealed a significant main effect of classifier on accuracy (F(7,14)=18.8, p&lt;.001). To investigate this main effect, we ran post-hoc pairwise comparisons, which showed that the Logistic Regression classifier (mean accuracy 0.59) performed significantly better than the baseline (p=.0491), as shown in Figure <ref type="figure" target="#fig_2">2a</ref>. This figure also shows the accuracy in the three different windows and that the peak accuracy (0.67) is reached in the last window.</p><p>Musical sophistication. The two-way repeated measures ANOVA revealed that no classifier could outperform the baseline and that most of the classifiers performed even worse.</p><p>Openness. The two-way repeated measures ANOVA revealed a significant interaction effect of classifier with window on accuracy (F(14,28)=4.88, p&lt;.001). An analysis of the effect of classifier showed a significant effect for the first window (F(7,16)=4.512, p=.006), and a post-hoc test revealed that in this window Gradient Boost performed significantly better than the baseline (p=.020). 
The analysis of the effect of window showed a significant effect for the Gradient Boost classifier (F(2,6)=8.12, p=.020), and a post-hoc analysis showed that the Gradient Boost classifier performed significantly better in the first window than in the second (p=.028) and the third window (p=.029). Figure <ref type="figure" target="#fig_2">2b</ref> shows that the highest accuracy of Gradient Boost is reached in the first window (0.66). This accuracy is significantly higher than the accuracy of the baseline and the accuracy of Gradient Boost in the other windows.</p></div>
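The Benjamini-Hochberg adjustment applied to the p-values above can be sketched in a few lines (a minimal pure-Python sketch; libraries such as statsmodels provide an equivalent `fdr_bh` correction):

```python
def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg adjusted p-values, controlling the false
    discovery rate across a family of tests, as used in Section 5.

    Sorts the m p-values ascending, multiplies the p-value at rank k
    by m/k, and enforces monotonicity from the largest rank down."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down so adjusted values never increase.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        adj = min(prev, pvalues[i] * m / rank)
        adjusted[i] = adj
        prev = adj
    return adjusted
```

An adjusted p-value below .05, such as the p=.0491 reported for Logistic Regression on NFC, remains significant after this correction.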
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>Our results show that we achieve a higher accuracy than the random baseline for NFC and for openness in the first window, but that we were not able to beat the random baseline classifier for MS.</p><p>For the classification of openness, it is interesting that we are able to outperform the baseline, as openness was one of the few traits for which Hoppe et al. <ref type="bibr" target="#b32">[33]</ref> could not outperform the baseline. This might be due to a different classification technique: Hoppe et al. only used a Random Forest classifier, while we outperformed the baseline with a Gradient Boost classifier. Another possible reason could be that we trained the classifiers on different data windows, and our results show that the performance in classifying openness is only significantly better than the baseline in the first window. As far as we know, no other studies have formally shown that classifying PCs at early stages of a task can outperform classifiers trained on more data. However, other studies, such as the study of Steichen et al. <ref type="bibr" target="#b29">[30]</ref>, already discussed this trend for perceptual speed, verbal working memory and visual working memory. They argued that these characteristics most strongly affect the gaze pattern of the user during the initial phase of a task and that other factors dilute the gaze pattern as the task continues. This is probably also the reason why we are only able to classify openness at the beginning of the task. However, this is not necessarily a problem, as we want to adapt an interface to the openness of a user as early as possible. Nevertheless, the obtained accuracy is still too low to be used to adapt the explanations. 
Also, more research is needed to verify whether openness always affects gaze at the beginning of a task, or only when users see a new interface.</p><p>To classify NFC, our results show a significant main effect of Logistic Regression on accuracy. The reason that we do not see a significant difference between the windows could be that NFC is correlated with decision-making processes <ref type="bibr" target="#b11">[12]</ref> and creating a playlist in a music RS constantly involves making decisions. Despite the significant main effect, the accuracy for classifying NFC seems not high enough to adapt the interface, especially not in the first two windows. As a consequence, further research needs to focus on reaching a higher accuracy at the beginning of the interaction, to be able to adapt explanations early on in the process, or on adapting the interface when the user re-visits the application. Additionally, further research should investigate why Logistic Regression performed best at classifying NFC: this is in line with previous studies in which Logistic Regression performed well at classifying PCs, but we do not have an explanation for why it outperforms other algorithms <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b40">41]</ref>.</p><p>As a previous study in the field of music RS showed that MS influences the way users look at a music RS interface, and previous studies in the field of information retrieval also showed the potential of predicting domain knowledge based on eye tracking <ref type="bibr" target="#b41">[42,</ref><ref type="bibr" target="#b42">43,</ref><ref type="bibr" target="#b8">9]</ref>, we expected to be able to classify MS based on gaze data. However, our results show that we could not outperform the baseline. A possible reason for this could be that we did not include AOI-related features, which were included in the above-mentioned studies. 
An interesting further line of research is to verify whether including these AOI features can improve accuracy.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusion</head><p>In this paper, we explored whether it would be possible to adapt the explanations in a music RS interface based on personal characteristics. To do so, we investigated whether a classification of personal characteristics could be inferred by studying the gaze pattern during the creation of a playlist in this system. More concretely, we classified musical sophistication, need for cognition and openness, because these characteristics have been shown to impact the user experience of explanations in an RS <ref type="bibr" target="#b8">[9]</ref>. We trained the classifiers on different windows to detect whether the classification would already work with only a partial observation of the creation of a playlist.</p><p>Our results show that, even though our accuracy is not yet high enough for practical use, we are able to outperform a baseline when classifying need for cognition with Logistic Regression. If we only consider the first third of the data, our results show that the classification of openness with Gradient Boost beats the baseline. Despite the limitations in terms of accuracy, this finding is important because it shows the potential to adapt explanations during the interaction with a music RS interface. In a next step, we want to increase the accuracy of the classifiers, particularly at the beginning of the interaction, which we plan to do by gathering more training data and by using different features such as AOI-related features. Additionally, more research is needed to verify whether the results of this study can be generalized to different tasks and interfaces, which we also plan to address in future research.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The interface with the different parts highlighted in orange. 
A: Searchbox, B: Artist, C: Attributes of the artist, D: Preference of the user, E: Task, F: Recommendations, G: Cover of a song, H: Explanations, I: (dis)like buttons, J: Play button, K: List of (dis)liked songs</figDesc><graphic coords="5,116.60,105.23,361.94,186.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Accuracy of Gradient Boost for openness.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Accuracy of classifiers that perform significantly better than the baseline.</figDesc><graphic coords="8,304.09,114.13,129.76,78.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>An overview of the personal characteristics measured, together with their possible score ranges and the participants' median scores</figDesc><table><row><cell>PC</cell><cell>Possible Range</cell><cell>Median Score</cell></row><row><cell>Age</cell><cell>18-65</cell><cell>24</cell></row><row><cell>MS</cell><cell>18-126</cell><cell>64</cell></row><row><cell>NFC</cell><cell>0-100</cell><cell>68.75</cell></row><row><cell>Openness</cell><cell>0-100</cell><cell>55</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Description of eye tracking features</figDesc><table><row><cell>Features</cell><cell>Description</cell></row><row><cell>Saccade rate</cell><cell>Number of saccades divided by segment duration</cell></row><row><cell>Avg. saccade length</cell><cell>Average distance between the two fixations delimiting the saccade</cell></row><row><cell>Avg. saccade amplitude</cell><cell>Average size of saccade in degrees of visual angle</cell></row><row><cell>Avg. saccade velocity</cell><cell>Average velocity (saccade amplitude / saccade duration) of saccades</cell></row><row><cell>Peak saccade velocity</cell><cell>Maximum saccade velocity in segment</cell></row><row><cell cols="2">Most frequent saccade direction Most frequent saccade direction (segments of 45°)</cell></row><row><cell>Fixation rate</cell><cell>Number of fixations divided by segment duration</cell></row><row><cell>Avg. fixation duration</cell><cell>Average duration of fixation in ms</cell></row><row><cell>Ratio Fixations/Saccades</cell><cell>Ratio of total nb of fixations divided by total nb of saccades</cell></row><row><cell>4x4 Heatmap</cell><cell>Percentage of fixations in 16 raster areas</cell></row><row><cell>Avg. pupil size</cell><cell>Average pupil size of both eyes</cell></row></table></figure>
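Several of the segment-level features in Table 2 follow directly from a sequence of detected fixations, since a saccade is the movement between two consecutive fixations. A minimal sketch, assuming fixations are given as (x, y, duration) tuples for one segment (the paper's exact pipeline may differ):

```python
import numpy as np

# Hypothetical fixations in one segment: (x_px, y_px, duration_ms).
fixations = np.array([[100, 200, 180], [340, 210, 220], [350, 400, 150]])
segment_duration_s = 5.0

# Saccades connect consecutive fixations; their length is the
# Euclidean distance between the two delimiting fixation points.
deltas = np.diff(fixations[:, :2], axis=0)
saccade_lengths = np.linalg.norm(deltas, axis=1)  # in pixels

features = {
    "fixation_rate": len(fixations) / segment_duration_s,
    "saccade_rate": len(saccade_lengths) / segment_duration_s,
    "avg_saccade_length": saccade_lengths.mean(),
    "avg_fixation_duration_ms": fixations[:, 2].mean(),
    "fix_sacc_ratio": len(fixations) / len(saccade_lengths),
}
print(features)
```

Angle-based features (saccade amplitude and velocity in degrees of visual angle) would additionally require the screen geometry and viewing distance, which the eye tracker's software typically provides.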
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Description of parameters of the different classifiers</figDesc><table><row><cell>Classifier</cell><cell>Parameter</cell></row><row><cell>Baseline</cell><cell>strategy: uniform</cell></row><row><cell>Logistic Regression</cell><cell>solver: liblinear</cell></row><row><cell>Random Forest</cell><cell>estimators: 100</cell></row><row><cell>Gaussian Naive Bayes</cell><cell>na</cell></row><row><cell>Linear Support Vector Machines</cell><cell>gamma: scale probability: True</cell></row><row><cell>Gradient Boosting</cell><cell>maximum depth: 4</cell></row></table></figure>
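The parameters in Table 3 correspond closely to scikit-learn defaults and options, so the classifier set can be instantiated as below. This is an assumed reconstruction: the use of scikit-learn is inferred from the parameter names (e.g. "strategy: uniform", "solver: liblinear"), unspecified parameters are left at library defaults, and the "Linear Support Vector Machines" row is mapped to `SVC` because a `gamma` parameter is listed.

```python
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Classifiers parameterised as in Table 3 (scikit-learn assumed).
classifiers = {
    "Baseline": DummyClassifier(strategy="uniform"),
    "Logistic Regression": LogisticRegression(solver="liblinear"),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "Gaussian Naive Bayes": GaussianNB(),
    "Support Vector Machine": SVC(gamma="scale", probability=True),
    "Gradient Boosting": GradientBoostingClassifier(max_depth=4),
}
for name, clf in classifiers.items():
    print(name, "->", clf)
```

Each of these would then be trained on the gaze features and compared against the uniform-random baseline.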
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.gold.ac.uk/music-mind-brain/gold-msi/</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Part of this research has been supported by the KU Leuven Research Council (grant agreement C24/16/017) and the Research Foundation Flanders (FWO).</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>users' domain knowledge from search behaviors, in: Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, 2011, pp. 1225-1226.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Comparing recommendations made by online systems and friends</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Sinha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Swearingen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">DELOS</title>
		<imprint>
			<biblScope unit="volume">106</biblScope>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities</title>
		<author>
			<persName><forename type="first">C</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Parra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Verbert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="page" from="9" to="27" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Let me explain: Impact of personal and impersonal explanations on trust in recommender systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kunkel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Donkers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Michael</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-M</forename><surname>Barbu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ziegler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2019 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Making transparency clear</title>
		<author>
			<persName><forename type="first">A</forename><surname>Springer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Whittaker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Algorithmic Transparency for Emerging Technologies Workshop</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page">5</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A survey of explanations in recommender systems</title>
		<author>
			<persName><forename type="first">N</forename><surname>Tintarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Masthoff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 23rd international conference on data engineering workshop</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="801" to="810" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Progressive disclosure: empirically motivated approaches to designing effective transparency</title>
		<author>
			<persName><forename type="first">A</forename><surname>Springer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Whittaker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th International Conference on Intelligent User Interfaces</title>
				<meeting>the 24th International Conference on Intelligent User Interfaces</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="107" to="120" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Detecting personality traits using eye-tracking data</title>
		<author>
			<persName><forename type="first">S</forename><surname>Berkovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Taib</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Koprinska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kleitman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2019 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Automatic personality assessment through social media language</title>
		<author>
			<persName><forename type="first">G</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Schwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Eichstaedt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Kern</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kosinski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Stillwell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">H</forename><surname>Ungar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Seligman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of personality and social psychology</title>
		<imprint>
			<biblScope unit="volume">108</biblScope>
			<biblScope unit="page">934</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">What&apos;s in a user? towards personalising transparency for music recommender interfaces</title>
		<author>
			<persName><forename type="first">M</forename><surname>Millecamp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">N</forename><surname>Htun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Verbert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization</title>
				<meeting>the 28th ACM Conference on User Modeling, Adaptation and Personalization</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="173" to="182" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">To explain or not to explain: the effects of personal characteristics when explaining music recommendations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Millecamp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">N</forename><surname>Htun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Verbert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th International Conference on Intelligent User Interfaces</title>
				<meeting>the 24th International Conference on Intelligent User Interfaces</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="397" to="407" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Explainable recommendations in intelligent systems: delivery methods, modalities and risks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Naiseh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Research Challenges in Information Science</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="212" to="228" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Online daters&apos; willingness to use recommender technology for mate selection decisions</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">T</forename><surname>Tong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">F</forename><surname>Corriero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">G</forename><surname>Matheny</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Hancock</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IntRS@RecSys</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="45" to="52" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Argumentation-based explanations in recommender systems: Conceptual framework and empirical results</title>
		<author>
			<persName><forename type="first">S</forename><surname>Naveed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Donkers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ziegler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="293" to="298" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">The structure of phenotypic personality traits</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">R</forename><surname>Goldberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">American psychologist</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page">26</biblScope>
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Enhancing collaborative filtering systems with personality information</title>
		<author>
			<persName><forename type="first">R</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the fifth ACM conference on Recommender systems</title>
				<meeting>the fifth ACM conference on Recommender systems</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="197" to="204" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Los cinco grandes across cultures and ethnic groups: Multitrait-multimethod analyses of the big five in spanish and english</title>
		<author>
			<persName><forename type="first">V</forename><surname>Benet-Martinez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">P</forename><surname>John</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of personality and social psychology</title>
		<imprint>
			<biblScope unit="volume">75</biblScope>
			<biblScope unit="page">729</biblScope>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Adapting recommendation diversity to openness to experience: A study of human behaviour</title>
		<author>
			<persName><forename type="first">N</forename><surname>Tintarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dennis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Masthoff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on User Modeling, Adaptation, and Personalization</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="190" to="202" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">How personality influences users&apos; needs for recommendation diversity?</title>
		<author>
			<persName><forename type="first">L</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>He</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CHI&apos;13 extended abstracts on human factors in computing systems</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="829" to="834" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Persuasion in recommender systems</title>
		<author>
			<persName><forename type="first">U</forename><surname>Gretzel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Fesenmaier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Electronic Commerce</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="81" to="100" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Constraint-based recommender systems: technologies and research issues</title>
		<author>
			<persName><forename type="first">A</forename><surname>Felfernig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Burke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th international conference on Electronic commerce</title>
				<meeting>the 10th international conference on Electronic commerce</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="1" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">The efficient assessment of need for cognition</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Cacioppo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">E</forename><surname>Petty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Feng</forename><surname>Kao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of personality assessment</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="306" to="307" />
			<date type="published" when="1984">1984</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Web personalization as a persuasion strategy: An elaboration likelihood model perspective</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">Y</forename><surname>Tam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Y</forename><surname>Ho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information systems research</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="271" to="291" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Cogito ergo quid? the effect of cognitive style in a transparent mobile music recommender system</title>
		<author>
			<persName><forename type="first">M</forename><surname>Millecamp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Haveneers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Verbert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization</title>
				<meeting>the 28th ACM Conference on User Modeling, Adaptation and Personalization</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="323" to="327" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">The musicality of nonmusicians: an index for assessing musical sophistication in the general population</title>
		<author>
			<persName><forename type="first">D</forename><surname>Müllensiefen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Gingras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Musil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Stewart</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PloS one</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page">e89642</biblScope>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Effects of individual traits on diversity-aware music recommender user interfaces</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tintarev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Verbert</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization</title>
				<meeting>the 26th Conference on User Modeling, Adaptation and Personalization</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="291" to="299" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Predicting personality from twitter</title>
		<author>
			<persName><forename type="first">J</forename><surname>Golbeck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Robles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Edmondson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Turner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2011 IEEE third international conference on privacy, security, risk and trust and 2011 IEEE third international conference on social computing</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="149" to="156" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Trusting virtual agents: the effect of personality</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">X</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Mark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Interactive Intelligent Systems (TiiS)</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="1" to="36" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Implicit user-centric personality recognition based on physiological responses to emotional videos</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wache</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Subramanian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Abadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R.-L</forename><surname>Vieriu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sebe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Winkler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2015 ACM on International Conference on Multimodal Interaction</title>
				<meeting>the 2015 ACM on International Conference on Multimodal Interaction</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="239" to="246" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Individual user characteristics and information visualization: connecting the dots through eye tracking</title>
		<author>
			<persName><forename type="first">D</forename><surname>Toker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Steichen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Carenini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">proceedings of the SIGCHI Conference on Human Factors in Computing Systems</title>
				<meeting>the SIGCHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="295" to="304" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Inferring visualization task properties, user performance, and user cognitive abilities from eye gaze data</title>
		<author>
			<persName><forename type="first">B</forename><surname>Steichen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Carenini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Interactive Intelligent Systems (TiiS)</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="1" to="29" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Further results on predicting cognitive abilities for adaptive visualizations</title>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lallé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rahman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Toker</surname></persName>
		</author>
		<idno type="DOI">10.24963/ijcai.2017/217</idno>
	</analytic>
	<monogr>
		<title level="m">IJCAI International Joint Conference on Artificial Intelligence</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1568" to="1574" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Constructing models of user and task characteristics from eye gaze data for user-adaptive information highlighting</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gingerich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence</title>
				<meeting>the AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Eye movements during everyday behavior predict personality traits</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hoppe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Loetscher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Morey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bulling</surname></persName>
		</author>
		<idno type="DOI">10.3389/fnhum.2018.00105</idno>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Human Neuroscience</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="1" to="8" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<title level="m" type="main">The Big Five Inventory-Versions 4a and 54</title>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">P</forename><surname>John</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Donahue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">L</forename><surname>Kentle</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Identifying fixations and saccades in eyetracking protocols</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">D</forename><surname>Salvucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Goldberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2000 Symposium on Eye Tracking Research &amp; Applications</title>
				<meeting>the 2000 Symposium on Eye Tracking Research &amp; Applications</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="71" to="78" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Computer interface evaluation using eye movements: methods and constructs</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Goldberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">P</forename><surname>Kotval</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Industrial Ergonomics</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="631" to="645" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Scikit-learn: Machine learning in Python</title>
		<author>
			<persName><forename type="first">F</forename><surname>Pedregosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Varoquaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gramfort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Michel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Thirion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Grisel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Blondel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Prettenhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Weiss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Dubourg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vanderplas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Passos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cournapeau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brucher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Perrot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Duchesnay</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="2825" to="2830" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Prediction of individual learning curves across information visualizations</title>
		<author>
			<persName><forename type="first">S</forename><surname>Lallé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Carenini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">User Modeling and User-Adapted Interaction</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page" from="307" to="345" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Eye-tracking to predict user cognitive abilities and performance for user-adaptive narrative visualizations</title>
		<author>
			<persName><forename type="first">O</forename><surname>Barral</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lallé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Guz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Iranpour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 International Conference on Multimodal Interaction</title>
				<meeting>the 2020 International Conference on Multimodal Interaction</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="163" to="173" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Controlling the false discovery rate: a practical and powerful approach to multiple testing</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Benjamini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hochberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Royal Statistical Society: Series B (Methodological)</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="289" to="300" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Exploring gaze data for determining user learning with an interactive simulation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Kardan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on User Modeling, Adaptation, and Personalization</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="126" to="138" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Inferring user knowledge level from eye movement patterns</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Cole</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gwizdka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">J</forename><surname>Belkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Processing &amp; Management</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="page" from="1075" to="1091" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<monogr>
		<title level="m" type="main">Predicting</title>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cole</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Belkin</surname></persName>
		</author>
		<imprint/>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
