<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1145/2631775.2631811</article-id>
      <title-group>
        <article-title>Recommending for the Audience: Tailoring a Live Concert Program to the Music Preferences of the Listeners</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Martijn C. Willemsen</string-name>
          <email>m.c.willemsen@tue.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yu Liang</string-name>
          <email>y.liang1@tue.nl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Eindhoven University of Technology,Human-Technology Interaction</institution>
          ,
          <addr-line>Eindhoven</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Jheronimus Academy of Data Science</institution>
          ,
          <addr-line>'s-Hertogenbosch</addr-line>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>10</volume>
      <issue>2021</issue>
      <fpage>107</fpage>
      <lpage>115</lpage>
      <abstract>
        <p>How can we use recommender technologies to personalize a concert program to the music preferences of the audience, even if that audience is not very familiar with the music genre of the performance? We present the results of two use cases in which we used a group recommendation approach to tailor a live concert program (one opera singer and one choir concert) to the user profiles of the audience. Using Gaussian mixture modeling, we matched musical attributes of the songs from the performance list of the artist to the (Spotify) user profiles of concert visitors. This allowed us to generate a matching concert program for the audience as a whole, as well as a personalized ranking of the songs in the program for each user. During the concert, we tested how much the audience enjoyed the songs and if the predicted ranking of the songs matched their actual preferences, using an app that would show live predictions and ask for their (user) experience. The results show that our algorithm was able to predict user preferences and rankings for songs performed during the concert. Gaussian mixture modeling on audio features seems to be a feasible tool to tailor a concert program to the musical preferences of the audience, even for music outside of listeners' normal music genres and preferences.</p>
      </abstract>
      <kwd-group>
        <kwd>Music exploration</kwd>
        <kwd>real-world user study</kwd>
        <kwd>group recommendation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Classical recommender system approaches use collaborative or content-based filtering to recommend
items to a user, based on their historical preferences. But the purpose of a recommender system can go
beyond just finding items that fit users’ long-term preferences [1].</p>
      <sec id="sec-1-1">
        <p>Systems have been proposed to help users discover novel items, such as novel music tracks [2, 3], or to support them in exploring new
tastes [4, 5]. Supporting such exploration of new preferences could also keep users away [6] from the
so-called “filter bubble” issue [7], and encourage them to explore their blind spots [8].</p>
        <p>One domain in which exploration occurs naturally is the domain of music recommendation [9].
Compared to other domains like movies or books, listening to a song has lower “costs” for the user
and people like to explore more. One situation in which people can encounter new music (often
intentionally) is when they visit a live concert, especially if that concert is not by their own favorite
artists. In this paper we explicitly focus on such a use case, in which a concert visitor will explore novel
music.</p>
        <p>Unlike current streaming-based listening behavior, the listener of a live concert program is not in
control of the ‘playlist’: The concert programs are meticulously prepared by the artists to provide the
best experience. However, the program itself is not necessarily tailored to the audience and their music
preferences. Recommender technology might actually allow us to better match the concert program to
the music profile of the audience. For example, earlier work on genre exploration [5, 10] shows that a
Gaussian mixture modeling (GMM) approach on audio feature values allows matching songs from a
new genre to the Spotify profile of a user. Other work has supported users in finding suitable historical
live concert performances [11]. However, in these cases, the recommendations did not affect an
actual live concert as we aim to do in this study.</p>
        <p>Tailoring music programs to the profile of the audience may also help to lower the barrier for people to
start exploring niche music genres (such as opera), bringing diversity to current mainstream tastes [12].</p>
        <p>Of course, the effectiveness of this approach might depend on personal characteristics such as users’
interest in the (novel) music domain and their musical expertise. Research has shown that musical
expertise (measured by the Musical Sophistication Index [13]) strongly affects how users engage with
music [14] and how far they are willing to explore novel music genres [15], showing that we should
take these personal characteristics into account in our research.</p>
        <p>Therefore, in this paper, we investigate whether a GMM-based recommender algorithm can be used
to tailor a music program of a live concert to the audience, based on a performance list of a music
performer. We are especially interested in:</p>
        <p>RQ1: To what extent is a GMM algorithm able to predict user preferences and rankings for novel
music performed during a live concert?</p>
        <p>RQ2: Do personal characteristics affect the experience and reported preferences of the audience?</p>
        <p>We will present the results of two “live concerts” (one opera singer and one choir concert), in which
we tailored the concert program to the audience by matching a candidate list of songs, delivered by the
performer, with the Spotify profiles of the concert visitors using GMM. During the concert, we provided
the attendees with an app that showed information about the current song and that measured their
preferences and perceived level of personalization for each song. These subjective evaluations could
then be compared with the predictions from our matching algorithm. In the following, we present
the general method in more detail, after which we report the results of the two live concerts. Our
results show that we can indeed match a program to the audience’s profiles and that our algorithm can
predict user preferences and rankings for songs.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Method</title>
      <p>Our goal was to match a concert program to the music preferences of the audience. The matching
is done using a content-based approach on audio features of the music, extracted via the Spotify API¹.
From the performers, we received a potential candidate playlist, and we used the Spotify API to extract
the audio features of these songs. During registration for the event, we used the Spotify API to extract
the top-listened tracks of the audience and the corresponding audio features of these tracks. The
recommendation algorithm was adapted from the music genre exploration tool [5]. To model a user’s
musical preferences, a Gaussian Mixture Model (GMM) was built from the user’s top tracks based on
four relevant audio features (acousticness, danceability, energy, valence). To generate recommendations,
we used the personalized algorithm of [5]² that calculated the recommendation score of a candidate
song based on how well the song matched the user’s profile in the audio feature space. The more a
candidate song matched the visitor profile, the higher that song would be ranked for that visitor. In this
way we obtained an individual ranking of the candidate songs for each visitor.</p>
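      <p>To make the profile-matching step concrete, the following sketch (our illustration, not the authors’ code; the data are random stand-ins for real Spotify audio features) fits a GMM to a user’s top tracks with scikit-learn and ranks candidate songs by their log-likelihood under that model:</p>
      <preformat>
```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in data: 50 top tracks and 16 candidate songs, each described by the
# four audio features (acousticness, danceability, energy, valence) in [0, 1].
top_tracks = rng.uniform(0.0, 1.0, size=(50, 4))
candidates = rng.uniform(0.0, 1.0, size=(16, 4))

# Model the user's musical preferences as a Gaussian mixture over the features.
gmm = GaussianMixture(n_components=3, random_state=0).fit(top_tracks)

# Score each candidate by its log-likelihood under the user's GMM: the better
# a song fits the profile, the higher its score.
scores = gmm.score_samples(candidates)

# Convert scores to a per-user ranking (rank 1 = best match).
order = np.argsort(-scores)
ranks = np.empty_like(order)
ranks[order] = np.arange(1, len(order) + 1)
```
      </preformat>
      <p>For an audience, this is repeated per visitor, yielding one ranking of the candidate list per user.</p>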
      <p>Composing an appropriate concert program is a group recommendation problem: how do we
aggregate the individual rankings of the candidate songs into a single performance list that matches the
audience best? The goal of our research was to test the GMM approach, not to compare different group
recommendation approaches. In this particular case, we therefore used a simple averaging method [16],
in which we calculated which tracks would have the overall highest rank for the entire audience. We
then used this to select the best songs for the concert program.</p>
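      <p>As an illustration of this averaging step (with made-up rankings, not the concert data), the aggregated program order can be computed as:</p>
      <preformat>
```python
import numpy as np

# Made-up individual rankings: rows = visitors, columns = candidate songs,
# entries = rank assigned by that visitor's GMM (1 = best match).
individual_ranks = np.array([
    [1, 3, 2, 4],
    [2, 1, 4, 3],
    [1, 2, 3, 4],
])

# Average aggregation: songs with the lowest mean rank fit the audience best.
mean_rank = individual_ranks.mean(axis=0)

# Order the songs for the concert program, best-fitting first.
program_order = np.argsort(mean_rank)
```
      </preformat>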
      <p>To evaluate our approach, we need subjective evaluation, as there is no ground truth available for
recommending new music that people have not experienced before. During the concert, visitors received
an app that provided information about each song. Each song could be individually rated on a 5-point
scale according to how much the visitor liked the song and how well they thought the song fitted their personal
music preference. As our recommendation approach is based on (relative) rankings of the songs, we
also collected relative preferences during the concert. The program was devised in pairs of songs.</p>
      <sec id="sec-2-1">
        <p>¹ Spotify API: https://developer.spotify.com/documentation/web-api/</p>
        <p>² We refer readers to the original paper [5] for details. Note that the concerts were performed in 2018 and
2019, and that the GMM approach was aligned with the Spotify API guidelines at the time.</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption>
            <p>Number of visitors per audience cluster for the Opera and Choir concerts.</p>
          </caption>
          <table>
            <thead>
              <tr><th>Concert</th><th>Cluster</th><th>N</th></tr>
            </thead>
            <tbody>
              <tr><td>Opera</td><td>classic_rock_pop</td><td>6</td></tr>
              <tr><td>Opera</td><td>pop_hiphop_house</td><td>16</td></tr>
              <tr><td>Opera</td><td>pop_rock</td><td>13</td></tr>
              <tr><td>Opera</td><td>rock_pop_folk</td><td>12</td></tr>
              <tr><td>Choir</td><td>classical</td><td>10</td></tr>
              <tr><td>Choir</td><td>popular</td><td>31</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <sec id="sec-2-1-2">
          <p>
We carefully selected the songs in these pairs to be substantially different in their audio features and
predicted preferences for different audience members (see Section 3 for details). After each
pair of songs, we asked the users whether they liked the first or the second song a (bit) more, on
a 4-point scale without a midpoint. After their response, the app would show how we predicted their
ranking for the two songs. These preference measurements were aimed at answering RQ1.</p>
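          <p>The pair-selection logic can be sketched as follows (with hypothetical group rankings, not the actual Table 2 values): a pair qualifies when the two taste groups are predicted to disagree on which song ranks better.</p>
          <preformat>
```python
import itertools

# Hypothetical predicted ranks of six candidate songs (1 = best) for the
# classical-taste and popular-taste groups.
classical_rank = {1: 2, 2: 7, 3: 1, 4: 6, 5: 4, 6: 9}
popular_rank = {1: 10, 2: 3, 3: 9, 4: 1, 5: 8, 6: 5}

# Keep only the pairs on which the two groups disagree about the better song.
pairs = [
    (a, b)
    for a, b in itertools.combinations(classical_rank, 2)
    if (classical_rank[a] > classical_rank[b]) != (popular_rank[a] > popular_rank[b])
]
```
          </preformat>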
          <p>During registration for the concert, we informed participants about the data collection and
the purposes of the study before they gave informed consent. In the onboarding survey, we measured
personal characteristics of the visitors. We expect that users may perceive or engage with the songs
differently based on their personal characteristics, especially their musical expertise [14, 17, 18]. Visitors
reported their age and completed the Musical Sophistication Index (MSI) survey [13], which measured
their musical expertise in terms of active engagement (in musical activities) and emotional engagement (being
emotionally triggered by music). For the choir concert (which featured a pop choir as well as a classical
choir), we also asked visitors to indicate their general music preference (popular or classical) to see how
much their general music preferences affect their experience and ratings during the concert. The MSI
and general music preferences were measured to answer RQ2.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>As we used a similar setup and analysis for the two concerts, we report their results in parallel in
the following sections. Both concerts were featured as part of the Den Bosch Data Week, and the audience
was a diverse group of people interested in data science and music, but not typical visitors
of classical or choir concerts. Data and analysis scripts, as well as example screenshots of the app, can be
found in our online repository at https://osf.io/cbp6v/?view_only=5b8e710f500d48b09509be939e246b27.</p>
      <p>3.1. Characteristics of the concerts and audience</p>
      <p>The Opera concert featured a professional opera singer, accompanied by piano. The mezzo-soprano
provided a diverse set of 32 candidate songs: opera arias spanning several musical eras from Monteverdi
(17th century) to Bernstein (20th century). We received 47 registrations for the concert. During registration,
we collected visitors’ top-listened tracks via the Spotify API. To understand our audience better, we
clustered visitors into 4 different groups based on the genres in their top tracks (see Table 1 for
information about each cluster in terms of age and MSI score). During the concert, we indicated to the
visitors which group they belonged to and showed live results per group. However, in the analysis
below we do not compare these groups directly, as the sample size is too small.</p>
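      <p>The genre-based clustering can be sketched like this (random counts as stand-ins for the visitors’ actual genre tags; k-means is our assumption, as the clustering algorithm is not named above):</p>
      <preformat>
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Stand-in data: per-visitor counts of 8 genre tags found in their top tracks.
genre_counts = rng.poisson(2.0, size=(47, 8)).astype(float)

# Normalize to proportions so heavy listeners do not dominate the distances.
totals = np.clip(genre_counts.sum(axis=1, keepdims=True), 1.0, None)
profiles = genre_counts / totals

# Partition the 47 visitors into 4 taste groups.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)
```
      </preformat>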
      <p>In the Choir concert, two choirs participated: one choir singing mostly popular music (pop songs),
the other singing classical chamber choir repertoire. The choirs provided us with a total of 16
candidate songs they could perform. In total, 41 visitors registered for the event. We collected their
Spotify top-listened tracks and asked them to indicate their preferences for music genres: 10 liked
classical music more, and 31 liked popular music more. Table 1 presents the differences in age and MSI
score between these two groups. The two groups allowed us to make more specific comparisons in the
analyses below.</p>
      <p>3.2. Analysis of visitor preferences</p>
      <p>We used the GMM approach on the four audio features to predict, for each visitor, how much they would
like a candidate song, based on their top tracks. There was a large discrepancy between the distribution
of features of the candidate songs and the top tracks of the audience. In Figure 1, we plot the distribution
of features using violin plots combined with boxplots, for the Choir (top row) and Opera (bottom row)
concerts. The distribution of audio feature values for the top tracks of the audiences (right-most plots)
is quite similar between concerts and quite different from that of the candidate songs (left-most plots). This
shows that the music in the concert program is quite different from users’ own music profiles and will
therefore be mostly novel to them. Compared to the top tracks, the candidate sets have much higher
acousticness and lower energy and valence values. This was more strongly the case for the opera
concert, which was entirely classical, whereas the choir songs also included several more popular songs.</p>
      <p>[Figure 1: Distributions of the four audio features (acousticness, danceability, energy, valence) for the candidate set, the final list, and the audience top tracks, for the Choir and Opera concerts.]</p>
      <p>For both candidate song sets, we generated the group recommendation by aggregating the individual
rankings. For the Opera concert, we selected the top 10 out of the 32 tracks and let the singer decide
which 6 songs she could perform. We composed a concert program of 3 pairs of songs that were
sufficiently different in their features (and their predicted ranking, i.e., how well they matched the
profile of the audience), to make sure that we could find pairwise differences in the preferences for each
pair of songs.</p>
      <p>For the choir concert, we ranked the 16 candidate songs by aggregating the individual rankings and
also compared the rankings between the two groups of the audience (classical or popular music taste). As
our goal was to show some differences between these groups during the concerts, we carefully selected pairs
of songs that better matched the preference of either the classical or the popular group. We programmed 5
sets of 2 songs, two sets per choir and the last set having the two choirs ‘compete’. The setlist is shown
in Table 2. Comparing the classical rank to the popular rank, we see that for every pair, people with a
classical taste are predicted to prefer one song, and those with a popular taste to prefer the other. For
example, in set 3, those with a classical taste should like song 5 (rank 4) over song 6 (rank 9), whereas those
with a popular taste should like song 6 (rank 5) over song 5 (rank 8).</p>
      <p>The feature distribution in the middle column of Figure 1 shows that, for both concerts, the final list
lies somewhat in between the candidate set and the top tracks, showing that we were able to select the
candidate tracks that fitted the top tracks of the audience best.</p>
      <p>3.3. Do user preferences match our predictions?</p>
      <p>During the concert, the visitors rated each song in the app in terms of how much they liked the song
and how much it was personalized to their preferences. We compared these ratings with the
predicted ranking of the songs. Visitors rated 6 or 10 songs respectively, which means we have repeated
measurements. To test the match, we estimated how the predicted rank of the song (based on our GMM model)
and other factors (like MSI or taste group) predict the rating, using multilevel regression. This type
of regression controls for repeated measurements with random intercepts per user, which also adjusts
for differences in scale usage in the ratings, so no normalization of the scale is needed.</p>
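      <p>To illustrate why per-user intercepts absorb scale-usage differences, here is a small simulation (our sketch, not the actual analysis; within-user centering is used as a simple approximation of the random-intercept model):</p>
      <preformat>
```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated repeated ratings: 30 users each rate 6 ranked songs. Each user has
# their own baseline (scale usage) plus a small negative effect of rank.
n_users, n_songs = 30, 6
rank = np.tile(np.arange(1, n_songs + 1), n_users).astype(float)
user = np.repeat(np.arange(n_users), n_songs)
baseline = rng.normal(3.5, 0.5, n_users)
rating = baseline[user] - 0.07 * rank + rng.normal(0.0, 0.3, rank.size)

# Centering ratings within each user removes the per-user baseline, mimicking
# the per-user random intercept, before estimating the slope of rank.
user_mean = np.bincount(user, weights=rating) / n_songs
rating_c = rating - user_mean[user]
rank_c = rank - rank.mean()
slope = (rank_c * rating_c).sum() / (rank_c ** 2).sum()
# The estimated slope recovers the simulated rank effect of about -0.07,
# regardless of how high or low each user's personal baseline is.
```
      </preformat>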
      <p>Figure 2 shows the distribution of the ratings as a function of the (individual) rank for the Opera
concert. The audience seemed to really like all 6 songs; the median rating is around 4 (out of 5 stars). The
level of personalization is rated lower, which is not surprising given that most of the audience members
did not listen to opera music at all. We do not see a strong pattern between the ranking and the ratings,
but the top 3 ranked songs (1–3) seem to be rated somewhat higher than the bottom 3 ranked songs (4–6).
We tested several multilevel models and found that the rank could not predict the rating directly, but a
model based on whether the predicted rank of the song was high (1–3) or low (4–6) showed that this
factor was a significant predictor for both liking (β = 0.26, SE = 0.12, p = 0.037) and personalization
(β = 0.38, SE = 0.15, p = 0.01), consistent with the pattern observed in Figure 2. This partially answers
our RQ1: the predicted rank is (weakly) related to the preferences of the users. We did not find any
effects of taste group or MSI on these ratings (per RQ2).</p>
      <p>For the choir concert, the 10 songs varied more in their audio features than the 6 songs performed
during the opera concert, which is reflected in more variance in the ratings of the songs. Again, the ratings of
liking were overall high and the ratings of personalization were lower. Figure 3 shows that both liking
and personalization ratings decrease with the predicted rank, suggesting that songs ranked better by the
model seem to be liked more and are felt to be more personalized, as proposed by RQ1. This is confirmed
by the multilevel regression models that predicted the ratings based on the ranking. A model of liking
showed that the factor rank predicted liking negatively (β = −0.07, SE = 0.03, p = .01), consistent with
Figure 3. A model for personalization showed a similar negative effect (β = −0.07, SE = 0.02, p = .002).
We did not find any effect of MSI on the ratings.</p>
      <p>We also checked whether the ratings depended on whether users belonged to the classical or popular taste group.
We found an interaction effect between rank and group for the personalization ratings, in which the negative
slope of rank appears to occur only for those in the classical group (β = −0.09, SE = 0.06, p = 0.09).
The estimated means plot reveals an interesting pattern, as shown in Figure 4. Those visitors with a
classical preference (blue line) seem to be sensitive to the predicted rank, whereas those with a popular
preference (red line) considered all songs evenly personalized, irrespective of predicted rank. These
results show some differences in personal characteristics, as proposed by RQ2.</p>
      <p>[Figures 2 and 3: Distributions of liking and personalization ratings (responses) as a function of predicted rank, for the Opera concert (ranks 1–6) and the Choir concert (ranks 1–10).]</p>
      <p>3.4. Pairwise preferences</p>
      <p>After each pair of songs, we asked participants which one they liked best on a 4-point scale (no midpoint!),
ranging from much more song 1 to much more song 2. This allows us to measure an explicit pairwise
(relative) preference for each song that can be matched to the predicted ranking, which provides a more
direct answer to RQ1 than the absolute ratings per song in the previous section. For the Opera concert,
we selected the highest predicted song to be the second in the pair (GMM predicted song 1 to be liked
more in only 18 of 99 cases)³. Looking at the confusion matrix (Table 3), we see that the algorithm
predicts at chance level (50%) for song 1, but it was able to correctly predict that song 2 was better
69% of the time. The algorithm achieves an accuracy (AUC) of 0.66 (95% CI: .55–.75), so we are able
to predict above chance. Sensitivity (recall, or hit rate) was 0.86, but specificity (true negative rate)
was low (0.26).</p>
      <p>For the Choir concert, we explicitly designed the pairs of songs such that one song would be liked better
by the classical group and the other song more by the popular group. We calculated the rating differences
for each pair and tried to predict them with the predicted differences in rank. This rank difference is
indeed a significant predictor of the actual pairwise difference (β = 0.10, SE = 0.036, p &lt; .01). However,
we found no differences between the groups. As for the Opera concert, we again tested the agreement of
each predicted pairwise ranking with the actual ranking given during the concert, see Table 3. If we
predicted song 1 to be liked over song 2, 66% indeed liked it more. If we predicted song 2 to be liked
over song 1, 61% indeed liked it more. The accuracy (AUC) is 63.6% (95% CI: .55–.72), significantly
above chance (50%) and above the no-information rate (p &lt; .02; on average 53.5% chose song 1 over song 2).
Sensitivity is 0.62 and specificity is 0.65.</p>
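      <p>The confusion-matrix metrics used above can be reproduced on toy data as follows (the arrays are illustrative stand-ins, not the collected responses):</p>
      <preformat>
```python
import numpy as np

# Toy pairwise outcomes: 1 means song 2 was (predicted / reported) preferred.
predicted = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 1])
actual = np.array([1, 1, 0, 1, 0, 1, 0, 1, 0, 0])

# Confusion-matrix cells.
tp = int(np.sum(predicted * actual))              # both say song 2: 4
tn = int(np.sum((1 - predicted) * (1 - actual)))  # both say song 1: 3
fp = int(np.sum(predicted * (1 - actual)))        # predicted 2, chose 1: 2
fn = int(np.sum((1 - predicted) * actual))        # predicted 1, chose 2: 1

sensitivity = tp / (tp + fn)           # hit rate: 0.8
specificity = tn / (tn + fp)           # true negative rate: 0.6
accuracy = (tp + tn) / predicted.size  # 0.7
```
      </preformat>
      <p>With the real responses, a bootstrap or binomial test over these cells could then yield the confidence interval and the comparison against the no-information rate.</p>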
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion and Discussion</title>
      <p>Despite the fact that we only tailored the concert programs based on four Spotify musical features using
GMM, our results show that songs we predicted to be ranked more toward the top were to some extent
indeed liked more, perceived to be more personalized, and also won in direct pairwise comparisons.
This answers our first research question: a simple GMM algorithm on the audio feature values is
able to predict user preferences for novel music during a live concert. This shows that we can tune a
concert program from niche music genres to the preferences of the audience using GMM, providing a
more personalized experience. A limitation is that streaming services like Spotify are not necessarily
optimized for classical music [19] in terms of their audio features, meta-data, and genre descriptions.
Despite these limitations, we do see that our GMM modeling on the audio features of Spotify was able
to personalize the recommendations. For future work this means that the GMM predictions can be used
effectively as a measure of user preference in similar settings, for example to test different and more
advanced group recommendation approaches, but it might be useful to collect other data, like explicit
ratings or likes of a candidate list, to further tune the recommendations.</p>
      <p>³ Randomizing the order of these pairs would have been better empirically, but the fixed order was required to be able to
present clear and simple results during the concert.</p>
      <p>Our approach can also offer a novel way to explore new music and uncover blind spots [8], or bring
more diversity to users’ music tastes [12]. In our small and perhaps not very diverse sample, we did
not find much effect of musical expertise on user preferences and experience (RQ2), despite earlier
work showing strong effects of MSI on exploration behavior. In future work, it is important to ask
participants about their familiarity with the songs, as it might influence their preferences. We
did find that for most participants the songs were all relatively novel; however, for modeling the
individual preferences, it would have been helpful to have individual scores of their familiarity with the
songs. Future work should further investigate user experience in live performances and how it relates
to personal characteristics such as musical expertise. Different from our use cases, in which we only
measured experience right after each pair of songs, future studies should also measure users’ post-hoc
experience, for example directly after the concert or in a follow-up survey a few weeks later, to test for
the lasting impact of the concert experience.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools, apart from the grammar and spell checking tools
available in Overleaf.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] D. Jannach, G. Adomavicius, Recommendations with a purpose, in: Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16, Association for Computing Machinery, New York, NY, USA, 2016, pp. 7–10. URL: https://doi.org/10.1145/2959100.2959186. doi:10.1145/2959100.2959186.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] M. Kamalzadeh, C. Kralj, T. Möller, M. Sedlmair, Tagflip: Active mobile music discovery with social tags, in: Proceedings of the 21st International Conference on Intelligent User Interfaces, IUI '16, Association for Computing Machinery, New York, NY, USA, 2016, pp. 19–30. URL: https://doi.org/10.1145/2856767.2856780. doi:10.1145/2856767.2856780.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] I. Andjelkovic, D. Parra, J. O'Donovan, Moodplay: Interactive music recommendation based on Artists' mood similarity, International Journal of Human-Computer Studies 121 (2019) 142–159. URL: https://linkinghub.elsevier.com/retrieve/pii/S1071581918301654. doi:10.1016/j.ijhcs.2018.04.004.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] M. Taramigkou, E. Bothos, K. Christidis, D. Apostolou, G. Mentzas, Escape the bubble, ACM, 2013, pp. 335–338. URL: https://dl.acm.org/doi/10.1145/2507157.2507223. doi:10.1145/2507157.2507223.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Y. Liang, M. C. Willemsen, Personalized recommendations for music genre exploration, in: Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization, UMAP '19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 276–284. URL: https://doi.org/10.1145/3320435.3320455. doi:10.1145/3320435.3320455.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>