<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Rocking around the clock eight days a week: an exploration of temporal patterns of music listening</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Perfecto Herrera</string-name>
          <email>perfecto.herrera@upf.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed Sordo</string-name>
          <email>mohamed.sordo@upf.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zuriñe Resa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Music Technology Group, Department of Technology, Universitat Pompeu Fabra</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Music listening patterns can be influenced by contextual factors such as the activity a listener is involved in, the place where one is located, or physiological constants. As a consequence, music listening choices might show some recurrent temporal patterns. Here we address the hypothesis that, for some listeners, the selection of artists and genres could show a preference for certain moments of the day or for certain days of the week. With the help of circular statistics we analyze playcounts from Last.fm and detect the existence of that kind of pattern. Once temporal preference is modeled for each listener, we test its robustness using the listener's playcounts from a later temporal period. We show that for certain users, artists and genres, temporal patterns of listening can be used to predict music listening selections with above-chance accuracy. This finding could be exploited in music recommendation and playlist generation in order to provide user-specific music suggestions at the “right” moment.</p>
      </abstract>
      <kwd-group>
        <kwd>Music context analysis</kwd>
        <kwd>Playlist generation</kwd>
        <kwd>User modeling</kwd>
        <kwd>Music metadata</kwd>
        <kwd>Temporal patterns</kwd>
        <kwd>Music preference</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>Among the requirements of good music recommenders we can
point not only to delivering the right music, but to delivering it at
the right moment. This amounts to considering the context of
listening as a relevant variable in any user model for music
recommendation. As existing technologies also make it possible
to track listening activity whenever and wherever it
happens, it seems pertinent to ask ourselves how this tracking
can be converted into usable knowledge for our recommendation
systems.</p>
      <p>WOMRAD 2010 Workshop on Music Recommendation and Discovery,
colocated with ACM RecSys 2010 (Barcelona, Spain).
Copyright ©. This is an open-access article distributed under the terms of
the Creative Commons Attribution 3.0 Unported License, which permits
unrestricted use, distribution, and reproduction in any medium, provided
the original author and source are credited.</p>
      <p>
        Music listening decisions might seem expressions of free
will, but they are in fact influenced by interlinked social,
environmental, cognitive and biological factors [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ][
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
Chronobiology is the discipline that deals with time and rhythm in
living organisms. The influence of circadian rhythms (those
showing a repetition pattern every 24 hours approximately,
usually linked to the day-night alternation), but also of infradian
rhythms (those recurring with a period longer than one day, like
the alternation of work and leisure, or the seasons), has been
demonstrated on different levels of organization of many living
creatures, and preserving some biological cycles is critical to keep
an optimum health [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. The observation that human behavior is
modulated by rhythms of hormonal releases, exposure to light,
weather conditions, moods, and also by the activity we are
engaged in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] paves the way to our main hypothesis: there
are music listening decisions that reflect the influence of those
rhythms and therefore show temporal patterns of occurrence. The
connection would be possible because of the existing links
between music and mood on one side, and between music and
activity on the other side. In both cases, music has functional
values either as mood regulator [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] or as an activity regulator
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Therefore, as mood and activity are subject to rhythmic
patterns and cycles, music selection expressed in playlists could
somehow reflect that kind of patterning [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ][
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. More
specifically, in this paper we inquire into the possibility of
detecting that, for a specific user, certain artists or musical genres
are preferentially listened to at certain periods of the day or on
specific days of the week. The practical side of any finding on this
track would be the exploitation of this knowledge for a better
contextualized music recommendation. Our research is aligned
with a generic trend on detecting hidden patterns of human
behavior at the individual level thanks, mainly, to the spread of
portable communication and geolocation technologies [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. RELATED RESEARCH</title>
      <p>
        While recommendations based on content analysis or on
collaborative filtering may achieve a certain degree of
personalization, they do miss the fact that the users interact with
the systems in a particular context [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Furthermore, several
studies have shown that a change in contextual variables induces
changes in users’ behaviors and, in fact, when applying contextual
modeling of the users (i.e., considering the time of the day, the
performed activity, or the lighting conditions), the performance of
recommendation systems improves both in terms of predictive
accuracy and true positive ratings [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ][
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Although
context-based music recommenders have been available since 2003 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], time
information is a recently-added contextual feature [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ][
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
A generic approach to the characterization of temporal trends in
everyday behavior has been presented in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], where the concept
of “eigenbehavior” is introduced. Eigenbehaviors are
characteristic behaviors (such as leaving home early, going to
work, breaking for lunch and returning home in the evening)
computed from the principal components of an individual’s
behavioral data. It is an open research issue whether eigenbehaviors
could provide a suitable framework for analyzing music listening
patterns. A model tracking the time-changing behavior of users
and also of recommendable items throughout the life span of the
data was developed for the Netflix movie collection [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. This
allowed the author to detect concept drifts and the temporal
evolution of preferences, and to improve the recommendation
over a long time span.
      </p>
      <p>
        Although research on behavioral rhythms has a long and solid
tradition, we are not aware of many studies about their influence
on music listening activities. The exception is a recent paper [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
where users’ micro-profiles were built according to predefined
non-overlapping temporal partitions of the day (e.g., a “morning
time slot”). The goal of the authors was to build a time-aware
music recommender, and their evaluation of the computed
micro-profiles showed their potential to increase the quality of
recommendations based on collaborative filtering. Most of that
reported work was, though, on finding optimal temporal
partitions. As we will see, there are other feasible, maybe
complementary, options that keep the temporal dimension as a
continuous and circular one by taking advantage of circular
statistics. Developed forty years ago and largely used in biological
and physical sciences, circular statistics has also been exploited in
personality research for studying temporal patterns of mood
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ][
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. To our knowledge, this is the first time it has been used in the
analysis of music-related behavior, though other applications to music
have been previously reported [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. METHODOLOGY</title>
    </sec>
    <sec id="sec-4">
      <title>3.1 Data Collection</title>
      <p>Getting access to yearly logs of the musical choices made by a
large number of listeners is not an easy task. Many music playing
programs store individual users’ listening records, but these are not
publicly accessible. As a workable solution, we have taken
advantage of the Last.fm API, which makes it possible to get the
playcounts and related metadata of their users. As raw data we
have started with the full listening history of 992 unique users,
expressed as 19,150,868 text lines and spanning variable length
listening histories from 2005 to 2009. The data contained a user
identifier, a timestamp, Musicbrainz identifiers for the artist and
track, and a text name for the listened track.</p>
      <p>The artist genre information was gathered from Last.fm using the
Last.fm API method track.getTopTags(), which returns a list of
tags and their corresponding weights (the relevance weight of a tag
for an artist, ranging from 0 to 100). This list of tags, however,
may relate to different aspects of music (e.g., genre, mood,
instrumentation, decades...). Since in our case we need a single
genre per track, we first clean the tags in order to remove special
or otherwise undesirable characters, such as spaces,
hyphens, underscores, etc. Then irrelevant tags (i.e., those having
a low weight) are removed and the remaining ones are matched
against a predefined list of 272 unique musical genres/styles
gathered from Wikipedia and WordNet. From the genre tags
obtained for each song, we select the one with the highest weight.
If several tags share the highest weight, we select the one
with the least popularity (popularity is computed as the number of
occurrences of a specific genre in our dataset).</p>
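      <p>The tag-to-genre mapping just described can be sketched as follows (the tag list, weight threshold, genre whitelist and popularity counts below are illustrative assumptions, not the actual data or parameters):</p>

```python
import re

# Hypothetical (tag, weight) pairs as returned by track.getTopTags(),
# with weights in the 0..100 range.
tags = [("Hip-Hop", 95), ("rock", 95), ("chill", 60), ("90s", 40)]

# Assumed whitelist standing in for the 272 Wikipedia/WordNet genres.
GENRES = {"hiphop", "rock", "jazz", "pop"}

# Assumed corpus popularity: occurrences of each genre in the dataset.
popularity = {"hiphop": 120, "rock": 5400, "jazz": 300, "pop": 6100}

WEIGHT_THRESHOLD = 50  # assumed cutoff for "irrelevant" low-weight tags

def normalize(tag):
    """Strip spaces, hyphens, underscores and other special characters."""
    return re.sub(r"[^a-z0-9]", "", tag.lower())

def pick_genre(tags):
    # Keep sufficiently weighted tags that match the genre whitelist.
    candidates = [(normalize(t), w) for t, w in tags
                  if w >= WEIGHT_THRESHOLD and normalize(t) in GENRES]
    if not candidates:
        return None
    top = max(w for _, w in candidates)
    tied = [g for g, w in candidates if w == top]
    # Ties on weight are broken by choosing the least popular genre.
    return min(tied, key=lambda g: popularity[g])

print(pick_genre(tags))  # "hiphop": tied with "rock" at 95, but less popular
```

The tie-breaking rule favors the rarer genre, which avoids collapsing every ambiguous track onto a handful of very popular labels.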
    </sec>
    <sec id="sec-5">
      <title>3.2 Data cleaning</title>
      <p>Data coming from Last.fm contain playcounts that cannot be
attributed to specific listening decisions on the side of the users. If
they select radio stations based on other users, on tags or on
similar artists, there is a chance that songs, artists and genres will
not recur in a specific user’s profile. In general, even in the case
of data coming from personal players obeying solely the
user’s will, we should discard (i) users that do not provide enough
data to be processed, and (ii) artists and genres that only appear
occasionally. We prefer to sacrifice a large amount of raw data
provided that those we keep help to identify a few clearly recurring
patterns, even if it is only for a few users, artists or genres.</p>
      <p>In order to achieve the above-mentioned cleaning goals we first
compute, for each user, the average frequency of each artist/genre
in his/her playlist. Then, for each user’s dataset, we filter out all
those artists/genres whose frequency is below the
user’s overall average. Finally, in order to get rid of
low-frequency playing users, we compute the median value of the
number of artists/genres left after the last filtering step, which we
name “valid” artists/genres. Those users whose number of
“valid” artists/genres is below that median value are
discarded.</p>
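      <p>The two filtering steps above can be sketched as follows (the playcounts are made up for illustration, and the exact filtering criteria used in the paper may differ in detail):</p>

```python
import statistics
from collections import Counter

# Hypothetical per-user playcounts: user -> artist -> number of plays.
playcounts = {
    "u1": Counter(a=50, b=40, c=2, d=1),
    "u2": Counter(e=3, f=2),
    "u3": Counter(g=30, h=25, i=20, j=1),
}

def clean(playcounts):
    # Step 1: per user, keep only artists played at least as often as
    # that user's average artist playcount.
    valid = {}
    for user, counts in playcounts.items():
        avg = sum(counts.values()) / len(counts)
        valid[user] = {artist for artist, n in counts.items() if n >= avg}
    # Step 2: discard users whose number of "valid" artists falls below
    # the median across users.
    med = statistics.median(len(v) for v in valid.values())
    return {u: v for u, v in valid.items() if len(v) >= med}

result = clean(playcounts)
print(sorted(result))  # users kept after both filtering steps
```

In this toy example the low-frequency user "u2" retains only one valid artist and falls below the median, so it is dropped.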
    </sec>
    <sec id="sec-6">
      <title>3.3 Prediction and Validation Data Sets</title>
      <p>Once we get rid of all the suspected noise, we split our dataset
into two groups. One will be used to generate the temporal predictions
while the other one will be used to test them. The test set contains
all the data in the last year of listening for a given subject. The
prediction-generation set contains the data coming from two years
of listening previous to the year used in the test set.</p>
    </sec>
    <sec id="sec-7">
      <title>3.4 Circular Statistics</title>
      <p>
        Circular statistics is aimed at analyzing data on circles where
angles have a meaning, which is the case when dealing with daily
or weekly cycles. In fact, circular statistics is an alternative to
common methods or procedures for identifying cyclic variations
or patterns, which include spectral analysis of time-series data or
time-domain based strategies [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Although these approaches are
frequently used, their prerequisites (e.g., interval scaling, regularly
spaced data, Gaussianity) are seldom met and, as we mentioned
above, circular statistics has rarely been used to analyze
music-related data; we therefore wanted to explore its potential.
Under the circular statistics framework, variables or data
considered to be cyclic in nature are assumed to have a period of
measurement that is rotationally invariant. In our case this period
refers to the hours of the day and the days of the week. Hence,
the daily hours range from 0 to 24, where 24 is considered to be
the same as 0; regarding the weekly rhythm, Monday at 0h is
considered to be the same as Sunday at 24h.
      </p>
      <p>
        The first step in circular analysis is converting raw data to a
common angular scale. We chose the angular scale in radians, and
thus we apply the following conversion to our dataset:
α = 2πx / k
where x represents raw data in the original scale, α is its angular
direction (in radians) and k is the total number of steps on the
scale where x is measured. In fact, we denote α as a vector of N
directional observations αi (i ranging from 1 to N). For the daily
hour case, x takes values between 0 and 24, and k = 24.
Alternatively, for the weekday analysis, x ranges from
0 (Monday) to 6 (Sunday) and thus k = 7. As noted, the effect of
this conversion can easily be transformed back to the original
scale. Once we have converted our data to the angular scale, we
compute the mean direction (a central tendency measure) by
transforming the angular data into unit vectors in the two-dimensional
plane,
ri = (cos αi, sin αi),
which are then vector-averaged:
r = (1/N) Σi ri
The quantity r is the mean resultant vector associated with the
mean direction, and its length R describes the concentration of the data
around the circle. For events occurring uniformly in time, R
values approach 0 (uniform circular distribution), whereas events
concentrated around the mean direction yield values close to 1
(see figure 1 for an example). A null hypothesis (e.g., uniformity)
about the distribution of data can be assessed using Rayleigh’s
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] or Omnibus (Hodges-Ajne) tests [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], the latter working well
for many distribution shapes. Once we have detected significantly
modally distributed data by means of both tests, we verify that the
distribution is not completely concentrated on a single day or hour. All the
circular statistics analyses presented here have been performed
with the CircStat toolbox for Matlab [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
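      <p>The angular conversion and vector-averaging above can be sketched in a few lines (a pure-Python illustration; the analyses in the paper were performed with the CircStat toolbox for Matlab):</p>

```python
import math

def circular_mean_stats(values, k=24):
    """Mean direction and resultant length R of events on a k-step circle."""
    # Convert raw values to angles in radians: alpha = 2*pi*x / k.
    angles = [2 * math.pi * x / k for x in values]
    # Average the unit vectors (cos a, sin a).
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    R = math.hypot(c, s)                       # 0 = uniform, 1 = concentrated
    mean_dir = math.atan2(s, c) % (2 * math.pi)
    mean_value = mean_dir * k / (2 * math.pi)  # back to the original scale
    return mean_value, R

# Playcounts clustered around 22h-23h give a high R with a mean near 22.5h,
print(circular_mean_stats([22, 23, 22.5, 21.5, 23.5]))
# while playcounts spread evenly over the day give R close to 0.
print(circular_mean_stats([0, 6, 12, 18]))
```

Note that the arithmetic mean of, say, 23h and 1h would wrongly give noon; the vector average correctly yields midnight, which is why the circular formulation matters here.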
    </sec>
    <sec id="sec-8">
      <title>4. RESULTS</title>
    </sec>
    <sec id="sec-9">
      <title>4.1 Data cleaning</title>
      <p>As a consequence of the cleaning process, our working dataset
now contains data from 466 valid users. The cleaning process has
kept 62% of their total playcounts, which corresponds to 4.5% of
the initial amount of artists. This dramatic reduction of the artists
should not be surprising, as many listening records show a
“long-tail” distribution, with just a few frequently played artists and
many others seldom played. On the other hand, when focusing
on musical genre listening, the working dataset includes 515
users, for whom 78% of the playcounts have been kept. These
playcounts comprise 8.6% of the total number of genres. Again, a
long-tail distribution of the amount of listened genres is observed.</p>
    </sec>
    <sec id="sec-10">
      <title>4.2 Temporal Patterns of Artist Selection</title>
      <p>Once we have cleaned our dataset, we compute the mean circular
direction and the mean resultant vector length for each artist and
user. Therefore, these values can be considered as a description of
the listening tendencies for each artist by each user. Both
parameters were calculated for the daily and for the weekly data.</p>
      <p>In order to assess the relevance of these listening trends, we tested
that the distribution of playcounts was different from uniform, and
that it was modally distributed (i.e, showing a tendency around an
hour or around a day of the week) and discarded those that were
not fulfilling these requirements (a null hypothesis rejection
probability p&lt;0.05 was set for the tests).</p>
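      <p>The Rayleigh uniformity test used in this step can be sketched as follows (using the standard closed-form approximation to the p-value; a sketch, not the CircStat implementation):</p>

```python
import math

def rayleigh_p(angles):
    """Approximate p-value of the Rayleigh test for circular uniformity."""
    n = len(angles)
    c = sum(math.cos(a) for a in angles)
    s = sum(math.sin(a) for a in angles)
    Rn = math.hypot(c, s)  # length of the (unnormalized) resultant vector
    # Closed-form approximation (Zar, "Biostatistical Analysis").
    return math.exp(math.sqrt(1 + 4 * n + 4 * (n * n - Rn * Rn)) - (1 + 2 * n))

# Hours tightly clustered around 22h: uniformity is rejected (p < 0.05)...
clustered = [2 * math.pi * h / 24 for h in [22, 22.5, 23, 22, 21.5, 23, 22, 22.5]]
print(rayleigh_p(clustered) < 0.05)  # True
# ...whereas evenly spaced hours are consistent with uniformity.
uniform = [2 * math.pi * h / 8 for h in range(8)]
print(rayleigh_p(uniform))
```

A significant Rayleigh result indicates a unimodal concentration around the mean direction, which is exactly the kind of hourly or weekly preference being sought.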
      <p>In the hour prediction problem, for each listener’s clean dataset
almost 93% (σ=13) of the artists on average passed the uniformity
test (i.e., listening to them is concentrated around a
specific hour). However, considering the raw dataset, only a
per-user average of 7% (σ=3.2) of the artists show a listening-hour
tendency. For the weekly approach, the per-user average in the
clean dataset is 99.8% (σ=0.8), indicating that there are some
artists showing a clear tendency towards a preferred listening day.
Considering the original raw dataset, they correspond to a 7.5%
(σ= 3.2) of all the played artists.</p>
      <p>Data from 466 users, including 7820 different songs and a grand
total of 23669 playcounts were used in the validation of the
temporal listening patterns of artists. For each user and artist we
computed a “hit” if the absolute difference between the playing
day in the prediction and test conditions, expressed as a circular
mean value in radians, was less than 0.45 (the equivalent of a
half-day error). For the time of the day a half-hour error was
accepted, corresponding to a difference between the predicted and
the observed time of less than 0.13 radians.</p>
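      <p>This hit criterion can be sketched as follows (the thresholds are those stated above; the circular-difference helper is our own illustration):</p>

```python
import math

def circular_diff(a, b):
    """Smallest absolute angular difference between two directions (radians)."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def is_hit(predicted, observed, threshold):
    return circular_diff(predicted, observed) <= threshold

HALF_DAY = 0.45    # day-of-week prediction tolerance, in radians
HALF_HOUR = 0.13   # hour-of-day prediction tolerance, in radians

# Sunday-evening and Monday-morning mean directions wrap around the circle,
# so the circular difference is small even though the raw values are far apart.
sunday = 2 * math.pi * 6.9 / 7
monday = 2 * math.pi * 0.1 / 7
print(is_hit(sunday, monday, HALF_DAY))  # True: only 0.2 "days" apart
```

Taking the difference on the circle rather than on the raw scale is what makes a late-Sunday prediction count as a hit for an early-Monday observation.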
      <p>When predicting the day of listening, an overall 32.4% of hits was
found for the songs in the test collection, which exceeds by far the
chance expectations (1/7=14.28%). As the final goal of the model
is providing user-specific contextual recommendation, an
additional per-user analysis yielded 34.5% of hits (σ=17.8).
Identical data treatment was done with the time of the day
yielding an overall 17.1% of hits (chance expectation baseline:
1/24=4.1%) and a per-user hit rate of 20.5% (σ=16.4).</p>
    </sec>
    <sec id="sec-11">
      <title>4.3 Temporal Patterns of Genre Selection</title>
      <p>Data from 456 users, including more than 5100 songs and 117
genres, were used for the validation of the genre-related patterns.
In order to consider a “hit” in the prediction of listening time and
day for a given genre, we set the same thresholds as for
evaluating the artist prediction. For the time of the day an overall
22.6% (and per-user 23.2%) of accurate predictions was found. It
is interesting to note that by relaxing the required accuracy of the
prediction to a plus/minus one-hour error we reached 39.9%
average hits and a per-user average of 41% (σ=28.4). For the day of
the week, the overall hit percent was 40.9%, while the per-genre
average and the per-user average were, respectively, 40.7%
(σ=24.1) and 41.7% (σ=26.3). It is interesting to note that among
the best-predicted genres we find many infrequent ones but
also many of the most frequent ones.</p>
    </sec>
    <sec id="sec-12">
      <title>5. CONCLUSIONS</title>
      <p>The present study is, as far as we know, the first one inquiring into the
possibility that our music listening behavior may follow some
detectable circadian and infradian patterns, at least under certain
circumstances. We have discovered that a non-negligible amount
of listeners tend to prefer to listen to certain artists and genres at
specific moments of the day and/or at certain days of the week.
We have also observed that time-contextualized music
recommendations can be successful in roughly 20% of the cases for
artists and 40% for genres. In our future work agenda, more
sophisticated prediction models will be tested, along with ways to
implement them in existing music recommenders.</p>
    </sec>
    <sec id="sec-13">
      <title>6. ACKNOWLEDGMENTS</title>
      <p>Our thanks to Òscar Celma who kindly shared the Last.fm data
file, accessible from this URL:
http://www.dtic.upf.edu/~ocelma/MusicRecommendationDataset/lastfm-1K.html</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ball</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boley</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greene</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Howse</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lemire</surname>
            ,
            <given-names>D</given-names>
          </string-name>
          , and
          <string-name>
            <surname>McGrath</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>Racofi: A rule-applying collaborative filtering system</article-title>
          .
          <source>In Proc. of COLA'03.</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Baltrunas</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Amatriain</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Towards TimeDependant recommendation based on implicit feedback</article-title>
          .
          <source>RecSys09 Workshop on Context-aware Recommender Systems (CARS-2009).</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Balzer</surname>
            ,
            <given-names>H.U.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Chronobiology as a foundation for and an approach to a new understanding of the influence of music</article-title>
          . In R. Haas and
          <string-name>
            <given-names>V.</given-names>
            <surname>Brandes</surname>
          </string-name>
          (Eds.),
          <source>Music that Works</source>
          . Wien/New York: Springer Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Barabasi</surname>
            ,
            <given-names>A.L.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Bursts: The Hidden Pattern Behind Everything We Do</article-title>
          . New York: Dutton Books.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Beran</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2004</year>
          . Statistics in Musicology, Boca Raton: CRC.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Berens</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <year>2009</year>
          ,
          <article-title>CircStat, a Matlab Toolbox for Circular Statistics</article-title>
          ,
          <source>Journal of Statistical Software</source>
          ,
          <volume>31</volume>
          ,
          <fpage>10</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Boström</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>AndroMedia - Towards a Context-aware Mobile Music Recommender</article-title>
          .
          <source>Master's thesis</source>
          , University of Helsinki, Faculty of Science, Department of Computer Science. https://oa.doria.fi/handle/10024/39142.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Coppola</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Della Mea</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Di Gaspero</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Menegon</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mischis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mizzaro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scagnetto</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Vassena</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>The context-aware browser</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          ,
          <volume>25</volume>
          ,
          <issue>1</issue>
          ,
          <fpage>38</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Dressler</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Streich</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2007</year>
          .
          <source>Tuning Frequency Estimation Using Circular Statistics. 8th Int. Conf. on Music Information Retrieval (ISMIR-2007)</source>
          ,
          <fpage>357</fpage>
          -
          <lpage>360</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Eagle</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Pentland</surname>
            ,
            <given-names>A.S.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Eigenbehaviors: Identifying structure in routine</article-title>
          .
          <source>Behavioral Ecology and Sociobiology</source>
          ,
          <volume>63</volume>
          ,
          <issue>7</issue>
          ,
          <fpage>1057</fpage>
          -
          <lpage>1066</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Fisher</surname>
            <given-names>N.I.</given-names>
          </string-name>
          ,
          <year>1993</year>
          ,
          <article-title>Statistical Analysis of circular data</article-title>
          , Cambridge: Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Foster</surname>
            ,
            <given-names>R.G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kreitzman</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Rhythms of Life: The Biological Clocks that Control the Daily Lives of Every Living Thing</article-title>
          . Yale: Yale University Press.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Hargreaves</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>North</surname>
            ,
            <given-names>A. C.</given-names>
          </string-name>
          <year>1999</year>
          .
          <article-title>The functions of music in everyday life: Redefining the social in music psychology</article-title>
          .
          <source>Psychology of Music</source>
          <volume>27</volume>
          ,
          <fpage>71</fpage>
          -
          <lpage>83</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Koren</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Collaborative filtering with temporal dynamics</article-title>
          .
          <source>Proc. 15th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining</source>
          , New York, NY, USA,
          <fpage>447</fpage>
          -
          <lpage>456</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Kubiak</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Jonas</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>Applying circular statistics to the analysis of monitoring data: Patterns of social interactions and mood</article-title>
          .
          <source>European Journal of Psychological Assessment</source>
          ,
          <volume>23</volume>
          ,
          <fpage>227</fpage>
          -
          <lpage>237</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Larsen</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Augustine</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Prizmic</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>A process approach to emotion and personality: Using time as a facet of data</article-title>
          .
          <source>Cognition and Emotion</source>
          ,
          <volume>23</volume>
          ,
          <issue>7</issue>
          ,
          <fpage>1407</fpage>
          -
          <lpage>1426</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Context awareness by case-based reasoning in a music recommendation system</article-title>
          .
          <source>4th Int. Conf. on Ubiquitous Computing Systems</source>
          ,
          <fpage>45</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Lloyd</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Ultradian Rhythms from Molecules to Mind: a new vision of life</article-title>
          . New York: Springer.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Lombardi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anand</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Gorgoglione</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Context and Customer Behavior in Recommendation</article-title>
          .
          <source>RecSys09 Workshop on Context-aware Recommender Systems.</source>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Neuhaus</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Cycles in Urban Environments: Investigating Temporal Rhythms</article-title>
          . Saarbrücken: LAP.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Radocy</surname>
            ,
            <given-names>R.E.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Boyle</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          <year>1988</year>
          .
          <article-title>Psychological Foundations of Musical Behavior (2nd ed.)</article-title>
          . Springfield, IL: Charles C. Thomas.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Rentfrow</surname>
            ,
            <given-names>P.J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Gosling</surname>
            ,
            <given-names>S.D.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>The do re mi's of everyday life: The structure and personality correlates of music preferences</article-title>
          .
          <source>Journal of Personality and Social Psychology</source>
          ,
          <volume>84</volume>
          ,
          <issue>6</issue>
          ,
          <fpage>1236</fpage>
          -
          <lpage>1256</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Reynolds</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barry</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burke</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Coyle</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Interacting with large music collections: towards the use of environmental metadata</article-title>
          .
          <source>IEEE International Conference on Multimedia and Expo</source>
          ,
          <fpage>989</fpage>
          -
          <lpage>992</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Saarikallio</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Erkkilä</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>The role of music in adolescents' mood regulation</article-title>
          .
          <source>Psych. of Music</source>
          ,
          <volume>35</volume>
          ,
          <issue>1</issue>
          ,
          <fpage>88</fpage>
          -
          <lpage>109</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Su</surname>
            ,
            <given-names>J.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yeh</surname>
            ,
            <given-names>H.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>P.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tseng</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Music recommendation using content and context information mining</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          ,
          <volume>25</volume>
          ,
          <fpage>16</fpage>
          -
          <lpage>26</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Valcheva</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>Playlistism: a means of identity expression and self-representation</article-title>
          .
          <source>Technical Report</source>
          , Intermedia, University of Oslo. http://www.intermedia.uio.no/download/attachments/43516460/vit-ass-mariya_valcheva.pdf?version=1
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Zar</surname>
            <given-names>J.H.</given-names>
          </string-name>
          <year>1999</year>
          .
          <article-title>Biostatistical Analysis (4th ed.)</article-title>
          . Upper Saddle River, NJ: Prentice Hall.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>