<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Spotivibes: Tagging Playlist Vibes With Colors</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hiba Abderrazik∗</string-name>
          <email>h.abderrazik@student.tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Henky Janse∗</string-name>
          <email>h.a.b.janse@student.tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovan Angela∗</string-name>
          <email>g.j.a.angela@student.tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sterre Lutz∗</string-name>
          <email>s.lutz@student.tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hans Brouwer∗</string-name>
          <email>j.c.brouwer@student.tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gwennan Smitskamp∗</string-name>
          <email>g.m.smitskamp@student.tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sandy Manolios</string-name>
          <email>s.manolios@tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cynthia C. S. Liem</string-name>
          <email>c.c.s.liem@tudelft.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Delft University of Technology</institution>
          ,
          <addr-line>Delft</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
        <p>Music is often both personally and affectively meaningful to human listeners. However, little work has been done to create music recommender systems that take this into account. In this demo proposal, we present Spotivibes: a first prototype for a new color-based tagging and music recommender system. This innovative tagging system is designed to take the users' personal experience of music into account and allows them to tag their favorite songs in a non-intrusive way, which can be generalized to their entire library. The goal of Spotivibes is twofold: to help users tag their music in order to obtain better playlists, and to provide research data on implicit grouping mechanisms in personal music collections. The system was tested with a user study on 34 Spotify users.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>Many people love to listen to music and share their music tastes
with others. With music consumption largely having moved to the
digital realm, music organization and discovery have moved to the
digital space accordingly, opening up great opportunities for digital
music services to support these experiences.</p>
      <p>
        However, many popular present-day music services are very
much framed as catalogues, in which users have to perform directed,
linguistic searches on existing song metadata to find what they are
looking for. (∗All marked authors contributed equally to this research.)
In the Music Information Retrieval research domain,
considerable work has been performed to automatically describe
music objects beyond catalogued metadata. However, much of the
research in this area has still focused on fairly “objective” descriptors
of aspects of the music object (e.g. chords, tempo), but did not
explicitly consider corresponding end user experiences [
        <xref ref-type="bibr" rid="ref11 ref3 ref6">3, 6, 11</xref>
        ].
      </p>
      <p>
        Frequently, music is seen as a moderator of mood and emotion. A
considerable body of work on automatic music emotion recognition
from audio content exists [
        <xref ref-type="bibr" rid="ref17 ref2 ref9">2, 9, 17</xref>
        ]. However, generally, it is hard
to get good labeled data (for which humans need to give the initial
input) at scale. In order to make labeling engaging, several
proposals have been made for crowdsourced tagging games [
        <xref ref-type="bibr" rid="ref1 ref10 ref8">1, 8, 10</xref>
        ].
While these are more engaging to users than traditional tagging
interfaces, they explicitly ask for users to concentrate on the
annotation within imposed linguistic frames (e.g. describing songs
with a tag, or mapping songs in valence-arousal space), which may
take away the “natural” affective experience of music consumption.
Furthermore, these tagging interfaces generally reward consensus
across human annotators. While this allows for labels that are more
stable and generalizable across a music population, this takes away
any notion of very personal and subjective perception.
      </p>
      <p>
        Also with regard to automatic music recommendation, in which
user consumption patterns are taken into account to foster
automatic music discovery, it was pointed out that true user feedback
is not yet optimally integrated [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. While many algorithms are
evaluated with user studies or trained on hand-labeled genre tags,
not many approaches holistically incorporate user responses.
      </p>
      <p>
        While algorithms have focused on describing musical objects,
when humans listen to music in everyday life, they actually may
not have their full focus on the musical object for active listening.
Instead, multiple studies have shown that music is often consumed
passively, e.g. in the background while performing another
activity [
        <xref ref-type="bibr" rid="ref12 ref16 ref5 ref7">5, 7, 12, 16</xref>
        ]. This again points to useful dimensions of (personal)
music categorization that presently remain understudied.
      </p>
      <p>To study these open challenges in the literature and music
consumption in general, we propose the Spotivibes system. This system
is designed to capture user reactions and associations to music in
both a personal and an abstract way, in an integrated way with the
user’s existing listening preferences in the Spotify music service.
Taking a user’s existing playlists as the basis, users are asked to tag
the “vibe” of songs (with “vibe” intentionally chosen to be more
abstract than “mood” or “purpose”) with one or more colors. This
restricts the tag vocabulary in the backend while, at the same
time, allowing more abstract associations on the user side than
would be possible when imposing a fixed vocabulary.</p>
      <p>In the backend, the system will learn associations from colors to
content features in the user’s consumption history. Consequently,
the system can generate tailored playlists for the users based on
colors. In this way, Spotivibes serves a twofold goal: on one hand,
it can serve as an annotation system that is both more abstracted
than existing tagging tools, while at the same time being more
integrated with actual everyday listening behavior of a user. On the
other hand, it also directly can serve users in gaining insight into
their music preferences and associations, and setting more personal
recommendations. This makes Spotivibes an interesting research
tool to study the impact and usefulness of abstract color tagging
of personal perception of music in recommender systems. In the
current paper, we present a first functional prototype of Spotivibes,
that is intended to provide a framework for conducting deeper
research on tagging mechanisms in the future.
</p>
    </sec>
    <sec id="sec-2">
      <title>OVERVIEW OF THE APPLICATION</title>
      <p>Spotivibes is a web application and as such only requires a device
with Internet access and a browser, as well as a Spotify account.
Upon their first login, users have to tag a certain number of songs
(10 or 30) from their Spotify saved tracks, using as many colors
as they want. The available colors are depicted in the upper part
of Figure 1. Then, they can get personalized playlists based on a
single color, a progression between two colors or a mosaic of colors.
Those playlists can then be exported to their Spotify account.</p>
      <p>Spotivibes relies on user feedback to further improve its
recommendations: users can always modify existing tags or tag more
songs. Users also have access to various statistics regarding their
tags to give them more information about their tagging behavior
and motivate them to tag more.</p>
      <p>A detailed overview of the application can be found at
https://youtu.be/x2KZ2z0s4Uk.</p>
    </sec>
    <sec id="sec-3">
      <title>Initialization</title>
      <p>The initial set a user is asked to label is based on k-means clustering
of Spotify’s audio features. The user is asked to label either 10 or
30 of their own tracks, so k is set to 10 or 30. This theoretically
gives tracks which represent the major clusters of songs in Spotify’s
audio feature space and so should cover the majority of associations
a user can have to songs in their library. A “reset labels” button
on the home page also allows the user to clear all the label data
they have provided. This way, the initialization process can be
repeated for a fresh start.</p>
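<p>The clustering-based selection described above can be sketched as follows. This is a minimal illustration with synthetic data standing in for Spotify’s audio features; the helper name pick_initial_tracks is ours for illustration, not part of the actual system.</p>

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

rng = np.random.default_rng(1)
# Stand-in for the audio features of a user's saved tracks
# (e.g. danceability, energy, valence, tempo), already normalized.
features = rng.normal(size=(500, 5))

def pick_initial_tracks(features, k=10):
    """Cluster the library into k groups and return the index of the
    track closest to each centroid, one representative per cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, features)
    return idx

seed_tracks = pick_initial_tracks(features, k=10)
```

<p>Each seed track then becomes one item in the user’s initial labeling set of 10 or 30 songs.</p>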
    </sec>
    <sec id="sec-4">
      <title>Bulk labeling</title>
      <p>Once the initialization process has been completed, users who want
to label more songs can select multiple songs at once and tag the
selected group with a color in one go.</p>
      <p>Spotivibes allows users to create their vibe-based playlists in three
different ways: a gradient playlist, a single color playlist and a
mosaic playlist.</p>
      <p>One color. The single color playlist is self-explanatory: the
user selects a single color and receives a playlist containing
songs with a high label value for the selected color.</p>
      <p>Gradient. The gradient playlist generation works by selecting two
different colors. A new playlist will be generated with a gradual
change in song vibe from start to finish. For example, suppose the user
selects yellow and blue as the first and second colors
respectively. The first songs in the playlist will have a higher
“yellow” label and will gradually give way to songs that
contain more “blue”.</p>
      <p>Mosaic. The mosaic playlist works by selecting multiple colors;
the user can also select the same color multiple times. As shown in
Figure 1, if a user selects blue twice and yellow once, a playlist will be
generated containing more blue songs than yellow ones, while
still containing yellow.</p>
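<p>The actual generation logic is internal to Spotivibes; the following sketch only illustrates, under assumed per-song color scores from the classifier, how a gradient and a mosaic ordering could be derived. All names and data are illustrative.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical per-song scores for two colors (columns: yellow, blue),
# normalized so each row sums to 1 -- a stand-in for classifier output.
scores = rng.random((50, 2))
scores /= scores.sum(axis=1, keepdims=True)

def gradient_playlist(scores, length=10):
    """Pick the most confidently colored songs, then order them so the
    yellow share decreases (and the blue share increases) start to finish."""
    chosen = np.argsort(scores.max(axis=1))[-length:]
    return chosen[np.argsort(-scores[chosen, 0])]

def mosaic_playlist(scores, counts, length=12):
    """counts[i] = times color i was selected; fill the playlist with
    per-color quotas in that proportion (two blue + one yellow -> 2:1)."""
    weights = np.asarray(counts, dtype=float)
    quota = np.round(length * weights / weights.sum()).astype(int)
    picks = []
    for color, q in enumerate(quota):
        picks.extend(int(i) for i in np.argsort(-scores[:, color])[:q])
    return picks  # a real implementation would also deduplicate picks

playlist = gradient_playlist(scores)
```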
      <p>Editing and Exporting Playlists. Once a playlist has been
generated, the user can give feedback on each song by updating its color
labels. Songs can also be removed, which gives negative feedback
for future playlist generation. After creating and editing a playlist, a
user can choose to export the playlist to their Spotify library. They
can give it any custom name and later listen to it on their own
Spotify account.
</p>
    </sec>
    <sec id="sec-5">
      <title>Statistics</title>
      <p>As a part of their Spotivibes experience, users can get insight into
their labeling behavior on a “Statistics” page. This page provides
some basic information, including the number of songs the user
has labeled and the number of tracks in their library. More detailed
statistics listed in the subsections below can be viewed by selecting
a color from the color picker pop-up window on the left side. For
displaying the different statistics plots shown in Figure 2, the data
is calculated by the classifiers or retrieved from the Spotify API.</p>
      <p>Landscape. The “Landscape” statistic is a detailed 3D plot
providing information about the songs labeled with the selected color.
The x-, y-, and z-axes of the plot indicate tempo, valence, and
loudness respectively. Each song labeled with the selected color is
displayed as a dot on this plot, its size corresponding to the
certainty with which we have classified it to be that color, as shown
in the upper left part of Figure 2. For example, if a user associates
yellow songs with high-tempo numbers, a cluster of larger dots will
appear on the higher end of the tempo axis. The plot is interactive:
it can be dragged to be viewed from different angles and when the
user hovers over a dot, they can see the title and artist of the song
it represents.</p>
      <p>Influencers. The “Influencers” section, displayed in the upper
right part of Figure 2, is a bar plot showing the three most influential
artists within a color. The metric used to measure “influence” is
simply the sum of the likelihood of all the songs of the artist within
that color. In this way, influence indicates the likelihood of an artist
being associated with the currently selected color, depending on
how many of this artist’s songs are classified as that color.</p>
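<p>As a small illustration of this metric, with made-up artists and likelihoods rather than data from the system:</p>

```python
from collections import defaultdict

# Hypothetical (artist, likelihood) pairs for songs classified as one color.
songs = [("Artist A", 0.9), ("Artist B", 0.7), ("Artist A", 0.8),
         ("Artist C", 0.4), ("Artist B", 0.6), ("Artist A", 0.3)]

# Influence of an artist within a color = sum of the classification
# likelihoods of that artist's songs for the color.
influence = defaultdict(float)
for artist, likelihood in songs:
    influence[artist] += likelihood

# Artist A ranks highest here: three songs with a summed likelihood of 2.0.
top3 = sorted(influence.items(), key=lambda kv: -kv[1])[:3]
```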
      <p>Decades. The “Decades” tile, displayed in the lower left part of
Figure 2, shows a histogram of the number of tracks per decade that
belong to the selected color, weighted by their likelihood of being
correctly classified.</p>
      <p>Genres. The “Genres” tile, displayed in the lower right part of Figure
2, shows a radial histogram of the genres classified within the selected
color.</p>
    </sec>
    <sec id="sec-6">
      <title>Associating songs with colors</title>
      <p>The algorithm that learns correspondences between songs and color
tags is the heart of Spotivibes’ functionality, yet is almost invisible to
users. Since color labels are so personal, we do not make use of any
inter-user information. This means that classifiers need to be trained
for each individual user, yielding user-dependent correspondences
between audio features and (categorically modeled) color tags.</p>
      <p>
        Our color label predictor consists of an ensemble of classifiers
and regressors from the scikit-(multi)learn [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and XGBoost [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
python packages.
      </p>
      <p>The label predictor must find underlying audio features that
are strongly correlated with the labels that users give to songs.
This, of course, means that the predictor is strongly influenced by
how a user labels tracks. If a user chooses to use a color as a label
for something completely uncorrelated with the audio features,
no meaningful connections will be found, but this will also show
accordingly in the “Statistics” overviews.</p>
      <p>
        [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] found that, in the context of multi-label classification of
music by emotion, the random k-label sets (RAKEL) ensembling
method worked best on datasets of a similar size to most user
Spotify libraries (less than 15,000 songs). The RAKEL algorithm
works by training multiple naive predictors on subsets of the total
label set and then deciding on the final output by voting among the
predictors. Here we used scikit-multilearn’s RakelO module. Based
on a set of training features and training labels, the algorithm
outputs the list of features that is most descriptive for a color. To
allow for some tolerance in the predictions, RAKEL’s binary
classification was combined with a regression of the label values, for
which we made use of scikit-learn’s MultiOutputRegressor
module. This means a song can e.g. be 30% green and 70% blue.
Depending on need, the fractional scores can be thresholded in
different post-processing steps. For example, labels with a score
higher than 0.5 are currently shown to users in the front-end
(this strikes a balance between showing many labels and not
just returning all the colors), and the calculation of “influencer” artists for
the statistics page only incorporates the most certain predictions.
      </p>
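<p>To make the RAKEL idea concrete, here is a minimal self-contained sketch of the label-powerset-plus-voting scheme, using a plain scikit-learn decision tree on synthetic data rather than the RakelO module itself; all names and data are illustrative.</p>

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Toy stand-ins: rows = songs with audio features, columns of Y =
# binary color tags (e.g. red, yellow, green, blue).
X = rng.normal(size=(200, 5))
Y = (X[:, :4] + 0.1 * rng.normal(size=(200, 4)) > 0).astype(int)

n_labels, k = Y.shape[1], 2  # k = size of each random label subset

# RAKEL: one multi-class classifier per random k-label subset, where
# each combination of those k binary labels is encoded as one class.
models = []
for _ in range(6):
    subset = rng.choice(n_labels, size=k, replace=False)
    y_ps = Y[:, subset] @ (2 ** np.arange(k))  # label-powerset encoding
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y_ps)
    models.append((subset, clf))

def color_scores(x):
    """Fraction of votes each label receives across all subset models."""
    votes = np.zeros(n_labels)
    counts = np.zeros(n_labels)
    for subset, clf in models:
        code = int(clf.predict(x.reshape(1, -1))[0])
        bits = (code >> np.arange(k)) & 1   # decode class id back to k bits
        votes[subset] += bits
        counts[subset] += 1
    return votes / np.maximum(counts, 1)

# Fractional scores thresholded at 0.5, as in the front-end display.
pred = (color_scores(X[0]) > 0.5).astype(int)
```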
    </sec>
    <sec id="sec-7">
      <title>EVALUATION: USER STUDY</title>
      <p>A user study was conducted to assess the usability of the system and
quality of recommendations (measured by user satisfaction). The
study was conducted with 34 participants recruited via personal
connections and among computer science students of our university.
They all had to freely explore the application on their own, and
fill in a questionnaire afterwards. All users had to go through the
longer setup which made them tag 30 of their Spotify favorite songs.
The experiment lasted around 20 minutes per participant.</p>
      <p>The questionnaire was composed of 17 questions. The answers
to the main questions are shown in Figure 3. They were designed
to measure the tediousness of the initial set-up process, user
satisfaction with the recommendations, the perceived usefulness of the
color-based tagging system, and the usability of the interface. Other
notable questions concerned overall user satisfaction with Spotivibes.</p>
      <p>The user study concludes that, overall, the participants had a
satisfactory experience with the application (3.74 average on
a 5-point scale), but were less satisfied with the services provided
by Spotivibes (3.41 average on a 5-point scale), as shown in Figures
3a and 3b.</p>
      <p>Initialization Process. One thing that emerged during this study
is that the song labeling process was on the edge of tediousness
as shown in Figure 3c. The results show an even split: one third of
users agreed the process was tedious, one third were neutral, and
one third disagreed. We might consider
reverting to the shorter labeling process in further user
experiments, but this could result in a decrease in playlist satisfaction.
Perhaps a quick initialization process with better bulk labeling
features could improve
user-friendliness as well as the data available to the classifier.</p>
      <p>Playlist Generation. The user study was sadly inconclusive on
the value of Spotivibes’ color-based playlist generation, as shown
by Figure 3e. Users were asked to rate their satisfaction with Spotify
and Spotivibes on a 1 to 10 scale in terms of keeping to a given
vibe or emotion. Disregarding low scores for Spotivibes related to
a couple of users for which the initialization process failed due to
bugs in the data management model, the minimum, lower quartile,
median, upper quartile, and maximum were identical and the mean
score for Spotivibes was 0.2 lower (not statistically significant). This
might be affected by our choice of splitting the rating of Spotify and
Spotivibes into different parts of the user study. In-person feedback
from a couple of users indicated that they did not realize they were
rating the two services against each other. Perhaps placing those
two questions next to each other in the survey would have given a
better view of how users actually felt about recommendations.</p>
      <p>Colour Associations. Users were, however, generally satisfied with
the use of colors as labels for emotions, as shown by Figure 3d. Half
agreed that colors made it easy to represent emotions, a quarter
were neutral, and a quarter disagreed. When asked whether using
multiple colors helped express complex or multi-faceted feelings
65% agreed and only 10% disagreed. This does point towards the
usefulness of colors as abstract labels for emotions in music. An
interesting point to note is that users who gave negative feedback
on the intuitiveness of the color labeling process (regarding their
difficulty relating colors to songs or not knowing multiple labels
could be used) also had lower satisfaction with the quality of playlist
generation. This suggests that our classifier does actually pick up
on patterns in the user’s color labels and functions better when
users label meaningfully.</p>
    </sec>
    <sec id="sec-8">
      <title>4 CONCLUSION</title>
      <p>Spotivibes is an innovative color-based tagging system that allows
users to tag songs in a personal, intuitive and abstract way in order
to get personalized playlists that support their unique experience
of and needs for music. The current version of Spotivibes is still an
early but functional prototype, on which initial user studies have
been performed. In future work, we plan to perform deeper research
into the merit of the color-based tagging, including
larger-scale user studies.</p>
    </sec>
    <sec id="sec-9">
      <title>ACKNOWLEDGMENTS</title>
      <p>We would like to thank Bernd Kreynen for his valuable feedback
throughout the project.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Anna</given-names>
            <surname>Aljanaki</surname>
          </string-name>
          , Frans Wiering, and Remco C Veltkamp.
          <year>2016</year>
          .
          <article-title>Studying emotion induced by music through a crowdsourcing game</article-title>
          .
          <source>Information Processing &amp; Management</source>
          <volume>52</volume>
          ,
          <issue>1</issue>
          (
          <year>2016</year>
          ),
          <fpage>115</fpage>
          -
          <lpage>128</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Anna</given-names>
            <surname>Aljanaki</surname>
          </string-name>
          ,
          <string-name>
            <surname>Yi-Hsuan Yang</surname>
            , and
            <given-names>Mohammad</given-names>
          </string-name>
          <string-name>
            <surname>Soleymani</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Developing a benchmark for emotional analysis of music</article-title>
          .
          <source>PloS one 12</source>
          ,
          <issue>3</issue>
          (
          <year>2017</year>
          ),
          <year>e0173392</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Michael</surname>
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Casey</surname>
            , Remco C. Veltkamp, Masataka Goto, Marc Leman, Christophe Rhodes, and
            <given-names>Malcolm</given-names>
          </string-name>
          <string-name>
            <surname>Slaney</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>Content-Based Music Information Retrieval: Current Directions and Future Challenges</article-title>
          .
          <source>Proc. IEEE 96, 4 (April</source>
          <year>2008</year>
          ),
          <fpage>668</fpage>
          -
          <lpage>696</lpage>
          . https://doi.org/10.1109/JPROC.
          <year>2008</year>
          .916370
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Tianqi</given-names>
            <surname>Chen</surname>
          </string-name>
          and
          <string-name>
            <given-names>Carlos</given-names>
            <surname>Guestrin</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Xgboost: A scalable tree boosting system</article-title>
          .
          <source>In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining. ACM</source>
          ,
          <volume>785</volume>
          -
          <fpage>794</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Andrew</given-names>
            <surname>Demetriou</surname>
          </string-name>
          , Martha Larson, and
          <string-name>
            <surname>Cynthia</surname>
            <given-names>C. S.</given-names>
          </string-name>
          <string-name>
            <surname>Liem</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Go with the flow: When listeners use music as technology</article-title>
          . (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Stephen</surname>
          </string-name>
          <string-name>
            <surname>Downie</surname>
          </string-name>
          , Donald Byrd, and
          <string-name>
            <given-names>Tim</given-names>
            <surname>Crawford</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Ten years of ISMIR: Reflections on challenges and opportunities</article-title>
          .
          <source>In Proceedings of the 10th International Society for Music Information Retrieval Conference (ISMIR</source>
          <year>2009</year>
          ).
          <fpage>13</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Mohsen</given-names>
            <surname>Kamalzadeh</surname>
          </string-name>
          , Dominikus Baur, and
          <string-name>
            <given-names>Torsten</given-names>
            <surname>Möller</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>A survey on music listening and management behaviours</article-title>
          . (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Youngmoo</surname>
            <given-names>E Kim</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Erik M Schmidt</surname>
            ,
            <given-names>and Lloyd</given-names>
          </string-name>
          <string-name>
            <surname>Emelle</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>Moodswings: A collaborative game for music mood label collection.</article-title>
          .
          <source>In Ismir</source>
          , Vol.
          <year>2008</year>
          .
          <volume>231</volume>
          -
          <fpage>236</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Youngmoo</surname>
            <given-names>E Kim</given-names>
          </string-name>
          , Erik M Schmidt,
          <string-name>
            <given-names>Raymond</given-names>
            <surname>Migneco</surname>
          </string-name>
          , Brandon G Morton,
          Patrick Richardson, Jeffrey Scott, Jacquelin A Speck,
          and
          <string-name>
            <given-names>Douglas</given-names>
            <surname>Turnbull</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Music emotion recognition: A state of the art review</article-title>
          .
          <source>In Proc. ISMIR</source>
          , Vol.
          <volume>86</volume>
          .
          <fpage>937</fpage>
          -
          <lpage>952</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Edith</given-names>
            <surname>Law</surname>
          </string-name>
          and Luis von Ahn.
          <year>2009</year>
          .
          <article-title>Input-agreement: A New Mechanism for Collecting Data Using Human Computation Games</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09)</source>
          . ACM, New York, NY, USA,
          <fpage>1197</fpage>
          -
          <lpage>1206</lpage>
          . https://doi.org/10.1145/1518701.1518881
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Cynthia</surname>
            <given-names>C. S.</given-names>
          </string-name>
          <string-name>
            <surname>Liem</surname>
          </string-name>
          , Andreas Rauber, Thomas Lidy, Richard Lewis, Christopher Raphael,
          <string-name>
            <surname>Joshua D Reiss</surname>
            ,
            <given-names>Tim</given-names>
          </string-name>
          <string-name>
            <surname>Crawford</surname>
            , and
            <given-names>Alan</given-names>
          </string-name>
          <string-name>
            <surname>Hanjalic</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Music information technology and professional stakeholder audiences: Mind the adoption gap</article-title>
          .
          <source>In Dagstuhl Follow-Ups</source>
          , Vol.
          <volume>3</volume>
          .
          <string-name>
            <surname>Schloss</surname>
          </string-name>
          Dagstuhl-Leibniz-Zentrum fuer Informatik.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Adrian</surname>
            <given-names>C North</given-names>
          </string-name>
          ,
          <string-name>
            <surname>David J Hargreaves</surname>
          </string-name>
          , and
          <string-name>
            <surname>Jon</surname>
          </string-name>
          J Hargreaves.
          <year>2004</year>
          .
          <article-title>Uses of music in everyday life</article-title>
          .
          <source>Music Perception: An Interdisciplinary Journal</source>
          <volume>22</volume>
          ,
          <issue>1</issue>
          (
          <year>2004</year>
          ),
          <fpage>41</fpage>
          -
          <lpage>77</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Fabian</surname>
            <given-names>Pedregosa</given-names>
          </string-name>
          , Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          , Ron Weiss,
          <string-name>
            <surname>Vincent Dubourg</surname>
          </string-name>
          , et al.
          <year>2011</year>
          .
          <article-title>Scikit-learn: Machine learning in Python</article-title>
          .
          <source>Journal of machine learning research 12</source>
          ,
          <string-name>
            <surname>Oct</surname>
          </string-name>
          (
          <year>2011</year>
          ),
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Markus</surname>
            <given-names>Schedl</given-names>
          </string-name>
          , Arthur Flexer, and
          <string-name>
            <given-names>Julián</given-names>
            <surname>Urbano</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>The Neglected User in Music Information Retrieval Research</article-title>
          .
          <source>Journal of Intelligent Information Systems</source>
          <volume>41</volume>
          ,
          <issue>3</issue>
          (
          <year>2013</year>
          ),
          <fpage>523</fpage>
          -
          <lpage>539</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Konstantinos</surname>
            <given-names>Trohidis</given-names>
          </string-name>
          , Grigorios Tsoumakas, George Kalliris, and Ioannis P Vlahavas.
          <year>2008</year>
          .
          <article-title>Multi-label classification of music into emotions.</article-title>
          .
          <source>In ISMIR</source>
          , Vol.
          <volume>8</volume>
          .
          <fpage>325</fpage>
          -
          <lpage>330</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Karthik</surname>
            <given-names>Yadati</given-names>
          </string-name>
          , Cynthia C. S. Liem, Martha Larson, and
          <string-name>
            <given-names>Alan</given-names>
            <surname>Hanjalic</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>On the Automatic Identification of Music for Common Activities</article-title>
          .
          <source>In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval. ACM</source>
          ,
          <volume>192</volume>
          -
          <fpage>200</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Yi-Hsuan Yang</surname>
          </string-name>
          and
          <string-name>
            <surname>Homer H. Chen</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Machine Recognition of Music Emotion: A Review</article-title>
          .
          <source>ACM Trans. Intell. Syst. Technol. 3</source>
          ,
          <issue>3</issue>
          , Article 40 (May
          <year>2012</year>
          ),
          <volume>30</volume>
          pages. https://doi.org/10.1145/2168752.2168754
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>