<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshops, April</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Classifeye: Classification of Personal Characteristics Based on Eye Tracking Data in a Recommender System Interface</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Martijn Millecamp</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cristina Conati</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Katrien Verbert</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science</institution>
          ,
          <addr-line>ICICS/CS 107, 2366 Main Mall, Vancouver, BC</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, KU Leuven</institution>
          ,
          <addr-line>Celestijnenlaan 200A bus 2402, Leuven</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>1</volume>
      <fpage>3</fpage>
      <lpage>17</lpage>
      <abstract>
        <p>Due to the increasing importance of recommender systems in our lives, the call to make these systems more transparent is becoming louder. However, providing explanations is not as easy as it seems, as research has shown that different users have varying reactions to explanations. So not only the recommendations, but also the explanations should be personalised. As a first step towards such personalised explanations, we explore the possibility of classifying users based on their gaze pattern during the interaction with a music recommender system. More specifically, we classify three personal characteristics that have been shown to play a role in the interaction with music recommendations: need for cognition, openness and musical sophistication. Our results show that classification based on eye tracking has potential for need for cognition and openness, as we are able to do better than random, but not for musical sophistication, as no classifier did better than a uniform random baseline.</p>
      </abstract>
      <kwd-group>
        <kwd>eye tracking</kwd>
        <kwd>classification</kwd>
        <kwd>recommender system</kwd>
        <kwd>openness</kwd>
        <kwd>need for cognition</kwd>
        <kwd>musical sophistication</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In the field of recommender systems (RS),
researchers are increasingly aware that
optimizing accuracy alone is not enough to
reach the full potential of these systems
[
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. For example, users will not choose a
recommended item unless they have trust in
the system [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. One possible way to increase
this trust is to provide explanations which
reveal (a part of) the internal reasoning of the
RS to the user [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. Especially the
combination of these explanations with control can
help users not only to understand the RS, but
also to steer the RS with input and feedback
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Despite the increased interest in
explanations for RS, it is still not clear how to
implement explanations in practice, as users have
varying reactions to them, which shows the
need to personalize explanations to the user
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        However, before the system could adapt
explanations to personal characteristics (PCs),
it needs to be aware of the PCs of the user. A
possible way to obtain these characteristics is
by explicitly asking the users to fill in
questionnaires [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] or by implicitly inferring PCs
through an analysis of the social media of the
user [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Nonetheless, asking users to fill in
questionnaires or to give access to their
social media is often not desirable. Moreover, to
personalize explanations it is not necessary to
obtain a fine-grained result: a classification
into two categories suffices [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        For this reason, we explore in this paper whether it is possible to classify users' personality traits during the interaction with a music RS with explanations by analyzing their gaze. We will focus on three different PCs: openness, need for cognition (NFC) and musical sophistication (MS) [<xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>]. These PCs will be explained in detail in Section 2.
      </p>
      <p>
        Openness is one of the Big Five personality traits and measures how open a person is to new experiences. Millecamp et al. [<xref ref-type="bibr" rid="ref9">9</xref>] showed that there was a significant difference in the gaze pattern between low and high openness users, which is why we hypothesize that classifying openness based on gaze might be possible.
      </p>
      <p>
        Similarly, we hypothesize that inferring MS, which is a measure of domain knowledge in the music domain, from gaze data might be possible, as the study of Millecamp et al. [<xref ref-type="bibr" rid="ref9">9</xref>] also found significant differences in gaze pattern between low and high MS.
      </p>
      <p>
        NFC is a cognitive style which influences the way a person prefers to process information, and thus how they look at information. Previous studies already showed that NFC moderates the perception of explanations in a music recommender system, which was the motivation to explore whether inferring NFC from gaze would be possible.
      </p>
      <p>
        Next to exploring the general accuracy, we also want to explore how much data we need to infer these PCs.
      </p>
      <p>
        The contribution of this paper is twofold. First, to our knowledge, we are the first to explore whether it is possible to infer PCs during the interaction with a RS in the presence of explanations. Second, we make the gathered dataset publicly available to support research in this area. This dataset is unique because it provides both gaze data and data about PCs.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        With the increasing role of RS in our daily lives, the call for explainable, transparent RS also becomes louder, so that users can make better informed decisions whether or not to follow the recommendations [<xref ref-type="bibr" rid="ref11 ref6">11, 6</xref>]. In combination with controls, this transparency also enables users to correct the RS whenever they feel it makes wrong assumptions [<xref ref-type="bibr" rid="ref5">5</xref>]. However, research has shown that different users have different reactions to explanations [6, 12, 13]. In the field of music RS, recent research has shown that there are three PCs that could influence the way users perceive explanations: openness, NFC and MS [<xref ref-type="bibr" rid="ref10 ref9">10, 9</xref>].
      </p>
      <p>
        Openness is one of the five factors of the Five Factor Model, also known as the Big 5 model [14]. This model describes personality in five different traits, and it has been used in several studies which showed the positive impact of considering personality in RS [15]. The factor openness describes the breadth, depth and complexity of an individual's mental and experiential life [16]. It has been shown that openness is related to the preferred amount of diversity in RS and to the willingness to use a system with explanations [<xref ref-type="bibr" rid="ref9">17, 18, 9</xref>].
      </p>
      <p>
        Need for cognition has been shown to influence the success of a RS [13, 12, 19, 20] and is defined as “a measure of the tendency for an individual to engage in, and enjoy, effortful cognitive activities” [21]. NFC has been shown to have an impact on the willingness of users to rely on a RS [12], on the confidence in a playlist created in a music RS with explanations [<xref ref-type="bibr" rid="ref10">10</xref>], on preference matching [22], on the style of explanations users prefer [13] and on the reason why users need a transparent RS [23].
      </p>
      <p>
        Musical sophistication is defined by Müllensiefen et al. [24] as a concept to describe the multi-faceted nature of musical expertise. In the music domain, Millecamp et al. [<xref ref-type="bibr" rid="ref9">9</xref>] showed that users with high MS feel more
      </p>
      <p>supported to make a decision in a RS interface that provided explanations than in an interface without such explanations, while this made no difference for users with low MS.</p>
      <p>[Table 1: characteristics of the participants, listing the possible range and the median score for age, MS, NFC and openness.]</p>
      <sec id="sec-2-3">
        <p>Another study showed that users with high
domain experience perceive a higher
diversity in a scatter plot than in a simpler bubble
chart [25].</p>
        <p>
          To acquire the PCs of users, the most
common way is to ask users to fill in a validated
questionnaire [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], but there also exist other
approaches, such as inferring PCs by
analyzing the social media of the user [
          <xref ref-type="bibr" rid="ref8">26, 8</xref>
          ], by
analyzing a conversation with a chatbot [27]
or by analyzing the physical signals such as
brain activity [28] and gaze data [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
        <p>The previously mentioned works rely on
fine-grained personality scores. In contrast,
in our work we focus on adapting interfaces
to users, for which we only need a
classification into two groups. We aim to base this
classification on the gaze pattern during the
interaction with a music RS interface, instead of
asking users to watch carefully selected
stimuli, to fill in questionnaires or to share their
social media profile. Previous studies which
classified users based on their gaze pattern
during normal activities are almost all
focused only on cognitive abilities and
visualizations.</p>
        <sec id="sec-3-data">
          <title>3. Data</title>
          <p>
            The gaze data that is used in this study was generated in a user study by Millecamp et al. [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ], of which we provide a brief summary here. The dataset consists of the gaze data of 30 participants (21 male). For the three PCs, the participants were divided into a high and a low group based on a median split. This resulted in equally distributed groups for MS and NFC, and almost equally distributed groups for openness (16 in the low and 14 in the high openness group). A short overview of the characteristics of the participants can be found in Table 1.
          </p>
          <p>
            The gaze data was recorded with a Tobii 4C remote eye tracker at a sampling rate of 90Hz. Each sample contained information about the focus point on the screen, denoted as an x and y coordinate, the distance between the participant and the screen, and the validity of these measures. To calibrate the eye tracker, the experiment started with a standard calibration procedure provided by Tobii Core Software. After the calibration, users were asked to explore the interface of a music RS in the presence of feature-based explanations until they understood all functionalities. A screenshot of the interface is shown in Figure 1.
          </p>
          <p>
            As shown in Part A of this figure, users first can search for an artist they like through a search bar in the top left corner. When they add the artist, this artist is shown in Part B. Based on this artist, the system starts to generate recommendations, which are listed in a two-column format as shown in Part F. When users hover over the cover picture of a recommended song, they can click a play button to listen to a 30s preview of the song. On the right side of each explanation, they can click on the thumb-up icon to add the song to their playlist. Through the sliders shown in Part D of Figure 1, users can modify several audio features 2 such as popularity, energy and danceability, which are also taken into account in the recommendation process. To help users steer these sliders, the minimum and the maximum for each audio feature is shown for each artist.
          </p>
          <p>
            After the user explored all the options of the interface, the recording of the gaze started. As shown in Part E of Figure 1, users were asked to create a playlist of five songs. To create this playlist, they could use all functionalities without any restriction. When they added the fifth song to their playlist, we stopped the recording of the gaze. On average, users took 4 minutes and 26 seconds to complete their playlist. As part of this paper's contribution, this data is publicly available 3.
          </p>
        </sec>
        <sec id="sec-4-classifiers">
          <title>4. Classifiers</title>
          <sec id="sec-4-1-features">
            <title>4.1. Features</title>
            <p>
              The Tobii 4C does not come with software to detect fixations and saccades, so we identified fixations and saccades using an implementation of the I-DT algorithm [35] with a dispersion threshold of one degree and a duration threshold of 100ms [35]. This means that in this study a fixation is identified as a circle on the screen in which the user keeps focusing for at least 100ms without moving their eyes more than one degree. All other movements are then identified as saccades, i.e. quick movements of gaze from one fixation to another [30].
            </p>
            <p>
              Based on these saccades and fixations, we generated a set of eye-tracking features as listed in Table 2. Most of these features are selected because they are widely used in previous eye tracking studies [
              <xref ref-type="bibr" rid="ref7">7, 30, 36</xref>
              ]. In addition to these features, we included most frequent saccade direction and fixations in a 4x4 heatmap, as the study of Hoppe et al. [33] indicated that these features are important in the extraction of personality. We did not include features that contain explicit information about the content of the interface, so-called areas of interest (AOI), even though previous work has shown that these features could have more predictive power [30]. The reason for this is that this information is already partially captured in a more general way by most frequent saccade direction and fixations in a 4x4 heatmap. Thus, at this stage we chose to investigate how far we can go with display-independent features, which also have the advantage of possibly being more generalizable to other interfaces.
            </p>
          </sec>
        </sec>
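        <sec id="sketch-idt">
          <title>Sketch: I-DT fixation detection</title>
          <p>The fixation detection described in Section 4.1 can be sketched as follows. This is a minimal illustration of the I-DT idea, not the implementation used in the study; in particular, the pixels-per-degree constant is a hypothetical value that in practice depends on screen size, resolution and viewing distance, and real gaze logs would also need the validity flags handled.</p>
          <preformat>
```python
# Minimal sketch of dispersion-threshold (I-DT) fixation detection on
# (x, y) gaze samples in pixels. PX_PER_DEG is a hypothetical conversion
# from degrees of visual angle to pixels; it depends on screen geometry.

PX_PER_DEG = 40.0        # assumption, not from the paper
DISPERSION_DEG = 1.0     # dispersion threshold: one degree
MIN_DURATION_S = 0.100   # duration threshold: 100 ms
HZ = 90                  # Tobii 4C sampling rate

def dispersion(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples):
    """Return fixations as (first_sample, last_sample, centroid); the gaps
    between consecutive fixations are treated as saccades."""
    win = max(1, int(MIN_DURATION_S * HZ))       # samples per 100 ms
    limit = DISPERSION_DEG * PX_PER_DEG
    fixations = []
    i, n = 0, len(samples)
    while n >= i + win:
        j = i + win
        if dispersion(samples[i:j]) > limit:
            i += 1                               # no fixation starts here
            continue
        while n > j and limit >= dispersion(samples[i:j + 1]):
            j += 1                               # grow while points stay close
        pts = samples[i:j]
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        fixations.append((i, j - 1, (cx, cy)))
        i = j
    return fixations
```
          </preformat>
          <p>On a synthetic trace that holds two stable gaze positions, this returns two fixations, with the jump between them treated as a saccade.</p>
        </sec>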
        <sec id="sec-2-3-1">
          <p>2: https://developer.spotify.com/documentation/web-api/reference/tracks/get-audio-features/. 3: augment.cs.kuleuven.be/datasets/classifeye.</p>
        </sec>
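        <sec id="sketch-median-split">
          <title>Sketch: median split into low/high groups</title>
          <p>The median split used in Section 3 to form the low and high groups per personal characteristic can be sketched as below. This is a generic illustration, not the authors' code, and the tie-handling convention (scores equal to the median go to the low group) is an assumption.</p>
          <preformat>
```python
# Sketch of a median split into low/high groups for a personal
# characteristic. Tie handling is a convention: here scores equal to the
# median go to the low group.
from statistics import median

def median_split(scores):
    """scores: dict of participant id to questionnaire score.
    Returns (low_ids, high_ids), each sorted."""
    med = median(scores.values())
    low = sorted(pid for pid, s in scores.items() if med >= s)
    high = sorted(pid for pid, s in scores.items() if s > med)
    return low, high
```
          </preformat>
        </sec>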
        <sec id="sec-2-3-2">
          <p>[Figure 1: screenshot of the music RS interface; its components are labelled with the letters A to J.]</p>
          <p>To explore whether classification of the three PCs would be possible with only a partial</p>
        </sec>
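        <sec id="sketch-windows">
          <title>Sketch: data windows and per-segment features</title>
          <p>The partial-observation windows of Section 4.2 (the first 30%, 60% and 90% of a recording, each cut into ten equal segments whose features are concatenated) might look as follows. The function segment_features is a hypothetical placeholder for the Table 2 features, which in the paper yield a 260-value vector per measurement.</p>
          <preformat>
```python
# Sketch of the three data windows (first 30%, 60%, 90% of a recording)
# and the per-window feature vector: each window is cut into ten equal
# segments and per-segment features are concatenated. segment_features()
# is a stand-in for the Table 2 features (the paper uses 26 per segment).

WINDOWS = (0.3, 0.6, 0.9)
N_SEGMENTS = 10

def segment_features(segment):
    # placeholder features: mean x and mean y of the gaze samples
    n = len(segment)
    if n == 0:
        return [0.0, 0.0]
    mean_x = sum(p[0] for p in segment) / n
    mean_y = sum(p[1] for p in segment) / n
    return [mean_x, mean_y]

def window_vectors(samples):
    """Return {window_fraction: feature_vector} for one participant."""
    vectors = {}
    for frac in WINDOWS:
        cut = samples[:int(len(samples) * frac)]
        size = max(1, len(cut) // N_SEGMENTS)
        vec = []
        for k in range(N_SEGMENTS):
            vec.extend(segment_features(cut[k * size:(k + 1) * size]))
        vectors[frac] = vec
    return vectors
```
          </preformat>
        </sec>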
      </sec>
      <sec id="sec-2-4">
        <title>Table 2</title>
        <p>[Table 2: eye-tracking features computed per segment: number of saccades divided by segment duration; average distance between the two fixations delimiting a saccade; average saccade size in degrees of visual angle; average saccade velocity (saccade amplitude / saccade duration); maximum saccade velocity in the segment; most frequent saccade direction (bins of 45°); number of fixations divided by segment duration; average fixation duration in ms; ratio of the total number of fixations to the total number of saccades; percentage of fixations in each of 16 raster areas (4x4 heatmap); and average pupil size of both eyes.]</p>
      </sec>
      <sec id="sec-2-14">
        <title>4.2. Data windows</title>
        <p>
          amount of data, we generated three different data windows to simulate partial observations of gaze data during the task, similar to Steichen et al. [30] and Conati et al. [31]. Each window consists of a partial observation of each participant based on relative duration: the first window consisted of the first 30% of the data, the second window of the first 60% and the last window of the first 90% of the data. Despite the fact that this approach requires a task to be fully completed to determine what 100% of the data constitutes, it still allows to provide valuable insights into trends and patterns about inferring PCs from gaze data [30]. Each of these windows consists of three different measurements, and for each of these measurements the data was divided into ten different segments of equal length. For each of these segments, we generated the mentioned set of eye-tracking features, resulting in a feature vector of 260 features for each measurement.
        </p>
        <p>
          The reasoning behind creating these different datasets is to verify whether we would be able to adapt the RS interface to the needs of the user during the task. As such, we did not include a window with 100% of the data, as the adaptation would come too late.
        </p>
        <p>[Table 3: parameters of the different classifiers. Baseline (strategy: uniform); Logistic Regression (solver: liblinear); Random Forest (estimators: 100); Gaussian Naive Bayes (no parameters); Linear Support Vector Machines (gamma: scale, probability: True); Gradient Boosting (maximum depth: 4).]</p>
        <sec id="sec-4-3">
          <title>4.3. Classification methods</title>
          <p>
            To classify users into a low and a high category, we used scikit-learn to train five different classifiers and a baseline [37]. To evaluate the performance of the classifiers, we applied a leave-one-out methodology. Because of this evaluation methodology and the uniform groups, we could not use the most common majority class baseline, which predicts the most likely class (this would lead to 0% accuracy) [30, 31, 33]. As a consequence, we chose a random uniform baseline, which has a theoretical accuracy of 50%. To classify the characteristics, we trained Logistic Regression, Random Forest, Gaussian Naive Bayes, Linear Support Vector Machines and Gradient Boosting. The reasoning behind implementing all these classifiers is that in previous research there is no consensus about which classifiers work best: Steichen et al. [30] found that Logistic Regression performed better than Decision Trees, Support Vector Machines and Neural Networks, while Lallé et al. [38] and Hoppe et al. [33] found that Random Forest performed best, and Barral et al. [39] also compared several of these classifiers on gaze data. For each of these classifiers we tried to optimize the accuracy; the resulting parameters can be found in Table 3.
          </p>
          <p>
            To strengthen the stability of the results, we ran this evaluation 10 times with different random seeds. We calculated the average accuracy over all participants and all runs to measure the performance of each classifier.
          </p>
        </sec>
        <sec id="sec-5-results">
          <title>5. Results</title>
          <p>
            To examine whether it is possible to classify users in the correct personality group, and whether this classification works better on specific windows, we ran for each PC a two-way repeated measures ANOVA with accuracy as the dependent variable and both classifier and window as independent variables. As we run multiple ANOVAs and pairwise comparisons, the reported p-values are adjusted using the Benjamini and Hochberg procedure [40] to control the false discovery rate. The main results of this analysis are shown in Figure 2 and we will report the results for
          </p>
        </sec>
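        <sec id="sketch-bh">
          <title>Sketch: Benjamini-Hochberg adjustment</title>
          <p>The Benjamini-Hochberg adjustment mentioned above can be sketched as follows. This is the textbook step-up procedure; the study may well have used a statistics package rather than hand-rolled code.</p>
          <preformat>
```python
# Sketch of Benjamini-Hochberg adjusted p-values (step-up procedure):
# sort the m p-values, scale the i-th smallest by m/i, then enforce
# monotone adjusted values from the largest p downwards.

def benjamini_hochberg(pvalues):
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running = 1.0
    for rank in range(m, 0, -1):        # from largest p to smallest
        idx = order[rank - 1]
        running = min(running, pvalues[idx] * m / rank)
        adjusted[idx] = running
    return adjusted
```
          </preformat>
        </sec>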
      </sec>
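    <sec id="sketch-loo">
      <title>Sketch: leave-one-out evaluation with a uniform baseline</title>
      <p>The evaluation described in Section 4.3 might be sketched with scikit-learn as below. The data is synthetic and only the baseline and Logistic Regression (with the Table 3 parameters) are shown, so this is an illustration rather than the authors' pipeline.</p>
      <preformat>
```python
# Sketch of the Section 4.3 evaluation: leave-one-out over participants,
# a uniform random baseline run with ten different seeds, and one of the
# five classifiers (Logistic Regression, liblinear solver as in Table 3).
# The data here is synthetic, standing in for the 260-value gaze vectors.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

def loo_accuracy(model, X, y):
    hits = []
    for train, test in LeaveOneOut().split(X):
        model.fit(X[train], y[train])
        hits.append(model.predict(X[test])[0] == y[test][0])
    return float(np.mean(hits))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 26))              # 30 participants, toy features
y = np.array([0, 1] * 15)                  # balanced low/high groups

baseline_accs = [
    loo_accuracy(DummyClassifier(strategy="uniform", random_state=s), X, y)
    for s in range(10)                     # 10 runs, different seeds
]
clf_acc = loo_accuracy(LogisticRegression(solver="liblinear"), X, y)
```
      </preformat>
      <p>Averaged over the ten seeds, the uniform baseline should hover around its theoretical 50% accuracy, which is the bar the real classifiers have to clear.</p>
    </sec>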
    </sec>
    <sec id="sec-3">
      <title>6. Discussion</title>
      <p>each of the PCs in detail in the next paragraphs.</p>
      <p>Need for cognition. The results of the two-way repeated measures ANOVA revealed a significant main effect of classifier on accuracy (F(7,14)=18.8, p&lt;.001). To investigate this main effect, we ran post-hoc pairwise comparisons, which showed that the mean accuracy of the logistic regression classifier (0.59) was statistically better than that of the baseline (p=.0491), as shown in Figure 2a. This figure also shows the accuracy in the three different windows and that the peak accuracy (0.67) is reached in the last window.</p>
      <p>Musical sophistication. The results of the two-way repeated measures ANOVA revealed that no classifier could outperform the baseline and that most of the classifiers performed even worse.</p>
      <p>Openness. The results of the two-way repeated measures ANOVA revealed a significant interaction effect of classifier with window on accuracy (F(14,28)=4.88, p&lt;.001). An analysis of the effect of classifier showed a significant effect for the classifiers trained on the first window (F(7,16)=4.512, p=.006), and a post-hoc test revealed that in this window Gradient Boost performed significantly better than the baseline (p=.020). The analysis of the effect of window showed a significant effect for the Gradient Boost classifier (F(2,6)=8.12, p=.020), and a post-hoc analysis showed that the Gradient Boost classifier performed significantly better in the first window than in the second (p=.028) and the third window (p=.029). Figure 2b shows that the highest accuracy of Gradient Boost is reached in the first window (0.66). This accuracy is significantly higher than the accuracy of the baseline and the accuracy of Gradient Boost in the other windows.</p>
      <p>[Figure 2: (a) accuracy of Logistic Regression for NFC per window; (b) accuracy of Gradient Boost for openness per window.]</p>
      <p>Our results show that we have a higher accuracy than the random baseline for NFC, and for openness in the first window, but that we were not able to beat the random baseline classifier for MS.</p>
      <p>For the classification of openness, it is interesting that we are able to outperform the baseline, while openness was one of the few traits for which Hoppe et al. [33] could not outperform the baseline. This might be due to a different classification technique, as Hoppe et al. only used a Random Forest classifier while we outperformed the baseline with a Gradient Boost classifier. Another possible reason could be that we trained the classifiers on different data windows: our results show that the performance to classify openness is only significantly better than the baseline in the first window. As far as we know, no other studies formally showed that classifying PCs in early stages of the task can outperform classification on more data. However, other studies, such as the study of Steichen et al. [30], already discussed this trend for perceptual speed, verbal working memory and visual working memory. They argued that these characteristics most strongly affect the gaze pattern of the user during the initial phase of a task and that other factors dilute the gaze pattern as the task continues. This is probably also the reason why we are only able to classify openness in the beginning of the task. However, this is not necessarily a problem, as we want to adapt an interface to the openness of a user as early as possible. Nevertheless, the obtained accuracy is still too low to be used to adapt the explanations. Also, more research is needed to verify whether openness will always affect the gaze during the beginning of a task, or only when users see a new interface.</p>
      <p>To classify NFC, our results show a significant main effect of classifier. A possible explanation could be that NFC is correlated with decision-making processes [12] and creating a playlist in a music RS constantly involves making decisions. Despite the significant main effect, the accuracy to classify NFC seems not high enough to adapt the interface, especially not in the first two windows. As a consequence, further research needs to focus on reaching a higher accuracy in the beginning of the interaction, to be able to adapt explanations early on in the process, or on adapting the interface when the user re-visits the application. Additionally, further research should investigate why Logistic Regression performed best to classify NFC; this is similar to previous studies in which Logistic Regression performed well to classify PCs, but we do not have an explanation why logistic regression outperforms other algorithms.</p>
      <p>For MS, we could not outperform the baseline. An interesting further line of research is to verify whether including AOI features can improve accuracy.</p>
    </sec>
    <sec id="sec-4">
      <title>7. Conclusion</title>
      <p>In this paper, we explored whether it would be possible to adapt the explanations in a music RS interface based on personal characteristics. To do so, we investigated whether a classification of personal characteristics could be inferred by studying the gaze pattern during the creation of a playlist in this system. More concretely, we classified musical sophistication, need for cognition and openness, because these characteristics have been shown to impact the user experience of explanations in a RS [9]. We trained the classifiers on different windows to detect whether the classification would already work with only a partial observation of the creation of a playlist.</p>
      <p>Our results show that, even though our accuracy is not yet high enough for practical use, we are able to outperform a baseline to classify need for cognition with Logistic Regression. If we only consider the first third of the data, our results show that the classification of openness with Gradient Boost beats the baseline. Despite the limitations in terms of accuracy, this finding is important because it shows the potential to adapt explanations during the interaction with a music RS interface. In a next step, we want to increase the accuracy of the classifiers, particularly in the beginning of the interaction, which we plan to do by gathering more training data and by using different features such as AOI-related features. Additionally, more research is needed to verify whether the results of this study can be generalized to different tasks and interfaces, which we also plan to address in future research.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <sec id="sec-5-1">
        <title>Part of this research has been supported by the KU Leuven Research Council (grant agreement C24/16/017) and the Research Foundation Flanders (FWO).</title>
        <sec id="sec-6">
          <title>References</title>
          <p>mation Science, Springer, 2020, pp. 212–228.</p>
          <p>[12] S. T. Tong, E. F. Corriero, R. G. Matheny, J. T. Hancock, Online daters' willingness to use recommender technology for mate selection decisions, in: IntRS@RecSys, 2018, pp. 45–52.</p>
          <p>[13] S. Naveed, T. Donkers, J. Ziegler, Argumentation-based explanations in recommender systems: Conceptual framework and empirical results, in: Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, 2018, pp. 293–298.</p>
          <p>[14] L. R. Goldberg, The structure of phenotypic personality traits, American psychologist 48 (1993) 26.</p>
          <p>[15] R. Hu, P. Pu, Enhancing collaborative filtering systems with personality information, in: Proceedings of the fifth ACM conference on Recommender systems, 2011, pp. 197–204.</p>
          <p>[16] V. Benet-Martinez, O. P. John, Los cinco grandes across cultures and ethnic groups: Multitrait-multimethod analyses of the big five in spanish and english, Journal of personality and social psychology 75 (1998) 729.</p>
          <p>[17] N. Tintarev, M. Dennis, J. Masthoff, Adapting recommendation diversity to openness to experience: A study of human behaviour, in: International Conference on User Modeling, Adaptation, and Personalization, Springer, 2013, pp. 190–202.</p>
          <p>[18] L. Chen, W. Wu, L. He, How personality influences users' needs for recommendation diversity?, in: CHI'13 extended abstracts on human factors in computing systems, 2013, pp. 829–834.</p>
          <p>[19] U. Gretzel, D. R. Fesenmaier, Persuasion in recommender systems, International Journal of Electronic Commerce 11 (2006) 81–100.</p>
          <p>[20] A. Felfernig, R. Burke, Constraint-based recommender systems: technologies and research issues, in: Proceedings of the 10th international conference on Electronic commerce, 2008, pp. 1–10.</p>
          <p>[21] J. T. Cacioppo, R. E. Petty, C. Feng Kao, The efficient assessment of need for cognition, Journal of personality assessment 48 (1984) 306–307.</p>
          <p>[22] K. Y. Tam, S. Y. Ho, Web personalization as a persuasion strategy: An elaboration likelihood model perspective, Information systems research 16 (2005) 271–291.</p>
          <p>[23] M. Millecamp, R. Haveneers, K. Verbert, Cogito ergo quid? The effect of cognitive style in a transparent mobile music recommender system, in: Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, 2020, pp. 323–327.</p>
          <p>[24] D. Müllensiefen, B. Gingras, J. Musil, L. Stewart, The musicality of non-musicians: an index for assessing musical sophistication in the general population, PloS one 9 (2014) e89642.</p>
          <p>[25] Y. Jin, N. Tintarev, K. Verbert, Effects of individual traits on diversity-aware music recommender user interfaces, in: Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization, 2018, pp. 291–299.</p>
          <p>[26] J. Golbeck, C. Robles, M. Edmondson, K. Turner, Predicting personality from twitter, in: 2011 IEEE third international conference on privacy, security, risk and trust and 2011 IEEE third international conference on social computing, IEEE, 2011, pp. 149–156.</p>
          <p>[27] M. X. Zhou, G. Mark, J. Li, H. Yang, Trusting virtual agents: the effect of personality, ACM Transactions on Interactive Intelligent Systems (TiiS) 9 (2019) 1–36.</p>
          <p>[28] J. Wache, R. Subramanian, M. K. Abadi, R.-L. Vieriu, N. Sebe, S. Winkler, Implicit user-centric personality recognition based on physiological responses to emotional videos, in: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, 2015, pp. 239–246.</p>
          <p>[29] D. Toker, C. Conati, B. Steichen, G. Carenini, Individual user characteristics and information visualization: connecting the dots through eye tracking, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013, pp. 295–304.</p>
          <p>[30] B. Steichen, C. Conati, G. Carenini, Inferring visualization task properties, user performance, and user cognitive abilities from eye gaze data, ACM Transactions on Interactive Intelligent Systems (TiiS) 4 (2014) 1–29.</p>
          <p>[31] C. Conati, S. Lallé, A. Rahman, D. Toker, Further results on predicting cognitive abilities for adaptive visualizations, IJCAI International Joint Conference on Artificial Intelligence (2017) 1568–1574. doi:10.24963/ijcai.2017/217.</p>
          <p>[32] M. Gingerich, C. Conati, Constructing models of user and task characteristics from eye gaze data for user-adaptive information highlighting, in: Proceedings of the AAAI Conference on Artificial Intelligence, 1, 2015.</p>
          <p>[35] D. D. Salvucci, J. H. Goldberg, Identifying fixations and saccades in eye-tracking protocols, in: Proceedings of the 2000 symposium on Eye tracking research &amp; applications, ACM, 2000, pp. 71–78.</p>
          <p>[36] J. H. Goldberg, X. P. Kotval, Computer interface evaluation using eye movements: methods and constructs, International journal of industrial ergonomics 24 (1999) 631–645.</p>
          <p>[37] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research 12 (2011) 2825–2830.</p>
          <p>[38] S. Lallé, C. Conati, G. Carenini, Prediction of individual learning curves across information visualizations, User Modeling and User-Adapted Interaction 26 (2016) 307–345.</p>
          <p>[39] O. Barral, S. Lallé, G. Guz, A. Iranpour, C. Conati, Eye-tracking to predict user cognitive abilities and performance for user-adaptive narrative visualizations, in: Proceedings of the 2020 International Conference on Multimodal Interaction, 2020, pp. 163–173.</p>
          <p>[40] Y. Benjamini, Y. Hochberg, Controlling the false discovery rate: a practical and powerful approach to multiple testing, Journal of the Royal statistical society: series B (Methodological) 57 (1995) 289–300.</p>
          <p>[33] S. Hoppe, T. Loetscher, S. A. Morey,</p>
        </sec>
        <p>A. Bulling, Eye movements during [41] S. Kardan, C. Conati, Exploring gaze
everyday behavior predict personality data for determining user learning with
traits, Frontiers in Human Neuro- an interactive simulation, in:
Interscience 12 (2018) 1–8. doi:10.3389/ national Conference on User
Modelfnhum.2018.00105. ing, Adaptation, and Personalization,
[34] O. P. John, E. M. Donahue, R. L. Kentle, Springer, 2012, pp. 126–138.</p>
        <p>The big five inventory—versions 4a and [42] M. J. Cole, J. Gwizdka, C. Liu, N. J.
54, 1991. Belkin, X. Zhang, Inferring user
knowl[35] D. D. Salvucci, J. H. Goldberg, Iden- edge level from eye movement patterns,
tifying fixations and saccades in eye- Information Processing &amp; Management
tracking protocols, in: Proceedings of 49 (2013) 1075–1091.
the 2000 symposium on Eye tracking re- [43] X. Zhang, M. Cole, N. Belkin, Predicting
users’ domain knowledge from search
behaviors, in: Proceedings of the 34th
international ACM SIGIR conference on
Research and development in
Information Retrieval, 2011, pp. 1225–1226.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Swearingen</surname>
          </string-name>
          , et al.,
          <article-title>Comparing recommendations made by online systems and friends</article-title>
          .,
          <source>DELOS</source>
          <volume>106</volume>
          (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Parra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          ,
          <article-title>Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>56</volume>
          (
          <year>2016</year>
          )
          <fpage>9</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kunkel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Donkers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Michael</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-M.</given-names>
            <surname>Barbu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ziegler</surname>
          </string-name>
          ,
          <article-title>Let me explain: Impact of personal and impersonal explanations on trust in recommender systems</article-title>
          ,
          <source>in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Springer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Whittaker</surname>
          </string-name>
          ,
          <article-title>Making transparency clear</article-title>
          ,
          <source>in: Algorithmic Transparency for Emerging Technologies Workshop</source>
          ,
          <year>2019</year>
          , p.
          <fpage>5</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N.</given-names>
            <surname>Tintarev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Masthoff</surname>
          </string-name>
          ,
          <article-title>A survey of explanations in recommender systems</article-title>
          ,
          <source>in: 2007 IEEE 23rd international conference on data engineering workshop</source>
          , IEEE,
          <year>2007</year>
          , pp.
          <fpage>801</fpage>
          -
          <lpage>810</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Springer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Whittaker</surname>
          </string-name>
          ,
          <article-title>Progressive disclosure: empirically motivated approaches to designing effective transparency</article-title>
          ,
          <source>in: Proceedings of the 24th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>107</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Berkovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Taib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Koprinska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kleitman</surname>
          </string-name>
          ,
          <article-title>Detecting personality traits using eyetracking data</article-title>
          ,
          <source>in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Schwartz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Eichstaedt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Kern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kosinski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Stillwell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Ungar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Seligman</surname>
          </string-name>
          ,
          <article-title>Automatic personality assessment through social media language</article-title>
          .,
          <source>Journal of personality and social psychology 108</source>
          (
          <year>2015</year>
          )
          <fpage>934</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Millecamp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. N.</given-names>
            <surname>Htun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Conati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          ,
          <article-title>What's in a user? towards personalising transparency for music recommender interfaces</article-title>
          ,
          <source>in: Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>173</fpage>
          -
          <lpage>182</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Millecamp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. N.</given-names>
            <surname>Htun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Conati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          ,
          <article-title>To explain or not to explain: the effects of personal characteristics when explaining music recommendations</article-title>
          ,
          <source>in: Proceedings of the 24th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>397</fpage>
          -
          <lpage>407</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Naiseh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <article-title>Explainable recommendations in intelligent systems: delivery methods, modalities and risks</article-title>
          , in: International Conference on Research Challenges in Information Science, Springer, 2020.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>