<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Music recommendation for music learning: Hotttabs, a multimedia guitar tutor</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Mathieu Barthet, Amélie Anglade, Gyorgy Fazekas, Sefki Kolozali, Robert Macrae Centre for Digital Music Queen Mary University of London</institution>
          ,
          <addr-line>Mile End Road, London E1 4NS</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>Music recommendation systems built on top of music information retrieval (MIR) technologies are usually designed to provide new ways to discover and listen to digital music collections. However, they do not typically assist in another important aspect of musical activity: music learning. In this study we present Hotttabs, an online music recommendation system dedicated to guitar learning. Hotttabs makes use of The Echo Nest music platform to retrieve the latest popular or "hot" songs based on editorial, social and charts/sales criteria, and YouTube to find relevant guitar video tutorials. The audio tracks of the YouTube videos are processed with an automatic chord extraction algorithm in order to provide visual feedback of the chord labels synchronised with the video. Guitar tablatures, a form of music notation showing instrument fingerings, are mined from the web and their chord sequences are extracted. The tablatures are then clustered based on the complexity of the songs' chord sequences so that guitarists can pick those adapted to their performance skills.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Categories and Subject Descriptors</title>
      <p>H.5.5 [Sound and Music Computing]: Signal analysis, synthesis, and processing; H.3.5 [On-line Information Services]: Web-based services; H.5.1 [Multimedia Information Systems]: Video (e.g., tape, disk, DVI)</p>
      <p>Keywords: computer-assisted guitar tuition, automatic chord recognition, guitar tab recommendation, online music service, multimodal, hotttnesss measure (The Echo Nest), music video tutorial (YouTube), tag cloud, user interface</p>
    </sec>
    <sec id="sec-2">
      <title>1. INTRODUCTION</title>
      <p>WOMRAD 2011, 2nd Workshop on Music Recommendation and Discovery, colocated with ACM RecSys 2011 (Chicago, US). Copyright ©. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</p>
      <p>
        The design of music recommendation systems exploiting context and/or content-based information [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] has mainly been undertaken by considering music listening as the central end-user activity. Examples of such systems are the popular online music services Last.fm1, Pandora2, and Spotify3, which provide new ways to experience and discover songs [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. While, in this view, music recommendation models aim at satisfying listeners' needs and expectations, they overlook other major actors in the chain of musical communication: performers. In this article, we present an online music recommendation system targeting music learning rather than music listening, and therefore performers rather than listeners.
      </p>
      <p>
        Music education is one of the humanities subjects emphasised since ancient times [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Since the 1970s, many studies have been published on building computer-assisted instruction systems for various tasks of music education, such as music theory, ear training, performance skills development, music composition or editing, music appreciation, musical instrument knowledge, and harmony. However, most of these systems use different approaches due to the interdisciplinary nature of the field [
        <xref ref-type="bibr" rid="ref2 ref5">2, 5</xref>
        ]. In the current information era, with the rapidly growing world wide web, it is easier to combine different musical instructions in one system to provide a good learning environment which not only integrates a variety of learning experiences for performers (e.g. textual materials, images, expert videos, and audio), but also allows individual performers to practice in less stressful conditions (e.g. the typical home setting) when compared to group-based practice [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        Amongst musical instruments, the guitar stands out as one of the most popular instruments in the world, with new players taking it up every day (e.g. guitar sales represent 50% of the whole musical instrument market in France [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]). Amateur guitarists often seek new songs to play solo or with other musicians during a jam session. It is common to spend the whole time devoted to a practice session trying to select a song adapted to one's musical skills and to find music notations in order to learn it. The proposed Hotttabs4 application is an online guitar tuition system aimed at solving this problem by recommending popular songs to play and guiding guitarists in the learning process. Hotttabs uses a multimodal approach, relying on video tutorials, chord visualisations, and tablatures (commonly referred to as "tabs"), a form of music notation representing instrument fingering with numerical symbols rather than the musical pitches commonly used in scores.
1http://www.last.fm
2http://www.pandora.com
3https://www.spotify.com
4http://isophonics.net/hotttabs/
      </p>
      <p>[Figure 1: Flow chart of the Hotttabs application. A song query is sent to the popular song recommender (EchoNest API); the YouTube video retriever (YouTube API) fetches guitar video tutorials, whose audio is extracted and passed to automatic chord recognition to produce synchronised chords; in parallel, the guitar tab crawler and parser feed the guitar tab recommender, which outputs guitar tab clusters.]</p>
      <p>
        The popularity of the guitar may be explained by several factors: the great versatility of timbres that can be produced on acoustic or electric guitars makes the instrument suitable for many different musical genres (blues, rock, reggae, jazz, classical, etc.); the easy access to the instrument (guitars can be obtained at moderate cost and can be easily stored and carried around); and the possibility of learning songs regardless of prior music theory knowledge using tablatures. Since the ranges of pitches that can be produced on the various guitar strings overlap, notes of identical pitch can be played at several positions on the fingerboard (albeit with slightly different timbres due to the differences in the physical properties of the strings). One advantage of tablature notation is that it removes this ambiguity in fingering by proposing an explicit solution. Tablatures can thus be considered more effective than scores in assisting beginners with guitar learning [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. This may be one reason why guitar tabs are by far the most popular means of sharing musical instructions on the internet, largely surpassing online sheet music and MIDI score databases [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The Hotttabs application takes advantage of the richness of hand-annotated tablatures provided on the web.
      </p>
      <p>Recently, the number of guitar tuition applications for smartphones has blossomed (e.g. Killer Riffs5, Touch Chords6, Ultimate Guitar Tabs7, TabToolkit8, Rock Prodigy9), showing a real interest in new technologies devoted to enhancing music learning. However, most applications provide either short sections of songs, such as riffs, only the tablatures (without visual feedback showing how to play them), or predefined song lists that may not contain current chart music.
5http://itunes.apple.com/gb/app/killer-riffs/id325662214?mt=8
6http://itunes.apple.com/us/app/touchchords/id320070588?mt=8</p>
      <p>
        In the proposed Hotttabs application, these issues are tackled by using The Echo Nest music platform10 to retrieve the latest popular or "hot" songs based on editorial, social and charts/sales criteria, guitar video tutorials from YouTube11, the online video sharing platform, and cutting-edge technologies in automatic chord recognition [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and guitar tab parsing [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], to provide users with symbolic music information assisting them in the learning process. A flow chart showing the processes involved in the Hotttabs application is shown in Figure 1. The application comprises three main components: song recommendation, audio/video processing, and guitar tab processing.
      </p>
      <p>The remainder of the article is organised as follows. In
Section 2 we present the song recommendation method. In
Section 3 we describe the audio and video processing. In
Section 4 we present the guitar tab processing. Section
5 details the web application. In Section 6 we give some
conclusions and perspectives on this work.</p>
    </sec>
    <sec id="sec-3">
      <title>2. SONG RECOMMENDATION</title>
      <p>The application recommends a list of songs to practice, consisting of the twenty most popular songs at the time of the query. These popular songs are obtained using the "hotttnesss" measure from The Echo Nest music platform. This measure, expressed in the range [0, 1], is based on editorial, social and charts/sales criteria. Editorial criteria rely on the number of reviews and blog posts that have been published about an artist in the last three weeks, providing an indicator of how much impact an artist has. Social criteria are derived from the total number of track plays the artist is receiving on a number of social media sites (for instance using statistics gathered on last.fm12), providing an indicator of how often people are listening to this artist. Charts/sales criteria are based on the appearance of the artist on various sales charts, providing an indicator of how often people are purchasing music by this artist. A list of the twenty most popular artists is first retrieved. Then, the most popular song from each artist is selected. With such a process, the song recommender relies on a dynamic music chart directly influenced by listeners over the web, mobile or desktop applications, music consumers, and journalists.
7http://app.ultimate-guitar.com/ugt/iphone/
8http://agilepartners.com/apps/tabtoolkit/
9http://www.rockprodigy.com
10http://the.echonest.com/
11http://www.youtube.com/</p>
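      <p>A minimal sketch of this two-stage selection (hottest artists first, then each artist's most popular song) is given below; the artist names and scores are made-up stand-ins for the hotttnesss values returned by The Echo Nest API.</p>
      <preformat>
```python
# Illustrative sketch of the two-stage song selection. The scores are
# made-up stand-ins for The Echo Nest "hotttnesss" values in [0, 1].

def recommend_songs(artists, n_artists=20):
    """Pick the n_artists hottest artists, then each artist's hottest song.

    `artists` maps an artist name to a dict with keys:
      "hotttnesss": artist-level score in [0, 1]
      "songs": dict mapping song title to a song-level popularity score
    """
    # Rank artists by their hotttnesss score and keep the top n_artists.
    ranked = sorted(artists, key=lambda a: artists[a]["hotttnesss"], reverse=True)
    # For each retained artist, keep their single most popular song.
    return [(a, max(artists[a]["songs"], key=artists[a]["songs"].get))
            for a in ranked[:n_artists]]

charts = {
    "Artist A": {"hotttnesss": 0.91, "songs": {"Song 1": 0.8, "Song 2": 0.6}},
    "Artist B": {"hotttnesss": 0.75, "songs": {"Song 3": 0.7}},
    "Artist C": {"hotttnesss": 0.84, "songs": {"Song 4": 0.5, "Song 5": 0.9}},
}

print(recommend_songs(charts, n_artists=2))
# [('Artist A', 'Song 1'), ('Artist C', 'Song 5')]
```
      </preformat>
      <p>In the deployed system the artist list and per-song scores come from The Echo Nest web service rather than a local dictionary.</p>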
    </sec>
    <sec id="sec-4">
      <title>3. AUDIO/VIDEO PROCESSING</title>
      <p>The song learning methods underlying the application are
based on a multimodal approach using audio, video, and
symbolic (chord labels) feedback. We detail in this section
how these modalities are exploited within the application.
</p>
    </sec>
    <sec id="sec-5">
      <title>3.1 YouTube guitar video tutorials retrieval</title>
      <p>Music video tutorials offer a musical tuition alternative to music teachers, since they allow one to see how a real performer plays while listening to the music. Furthermore, they often include spoken parts giving extra information on how to perform the music or how the music is structured. YouTube provides a rich source of guitar video tutorials, which are frequently updated with the latest popular songs by a large community of amateur and professional guitarists. Hotttabs filters the YouTube video database to retrieve relevant guitar video tutorials for the selected songs. To connect Hotttabs with YouTube we use Google's Data API Python client gdata13 and request videos (YouTubeVideoQuery() function) containing the following keywords: "&lt;song and artist&gt; guitar chords learn".</p>
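      <p>As a small illustration, the keyword string for a selected song can be assembled as below before being passed to the YouTubeVideoQuery() request; the helper name is ours, not part of the gdata client.</p>
      <preformat>
```python
# Build the keyword query used to retrieve guitar video tutorials.
# The "song and artist + guitar chords learn" pattern follows the text;
# the helper name itself is illustrative, not part of the gdata client.

def tutorial_query(song, artist):
    """Return the YouTube search string for a song's guitar tutorials."""
    return " ".join([song, artist, "guitar", "chords", "learn"])

print(tutorial_query("Song Title", "Artist Name"))
# Song Title Artist Name guitar chords learn
```
      </preformat>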
    </sec>
    <sec id="sec-6">
      <title>3.2 Automatic chord recognition</title>
      <p>
        Symbolic information representing the music along with the video can facilitate the learning process. Furthermore, in some video tutorials the position of the player's fingers on the guitar neck cannot be seen. In order to tackle this issue, the audio tracks of the YouTube video tutorials are first extracted (using the FFmpeg converter14) and then processed with an automatic chord extraction algorithm. Hotttabs utilises the chord recognition algorithm described in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] to identify the chords played by the performer and displays them on the screen synchronously with the video. This algorithm is a simplified version of the state-of-the-art chord extraction model [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] whose accuracy outperforms that obtained from typical hand-annotated guitar tabs on the web [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]: the average chord accuracy (79%) obtained by the automatic method over 180 Beatles tracks is 10 percentage points higher than the chord accuracy (69%) obtained from guitar tabs.
12last.fm not only tracks listeners' musical history on their website but also when they use other services in desktop and mobile applications, through what they call "scrobblers": http://www.last.fm/help/faq?category=Scrobbling
13http://code.google.com/p/gdata-python-client/
14http://ffmpeg.org/
      </p>
      <p>
        The algorithm in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] is implemented in the Vamp plugin15 Chordino/NNLS Chroma16. A spectrally whitened log-frequency spectrum (constant-Q with three bins per semitone) is first computed. It is automatically corrected for any deviation from the standard 440 Hz tuning pitch, and an approximate semitone-spaced transcription is obtained using a dictionary of notes with geometrically decaying harmonic magnitudes. The resulting semitone spectrum is multiplied with a chroma profile and mapped to 12 bins corresponding to pitch classes. Using these features, the algorithm provides a chord transcription, using a set of profiles (dictionary) to calculate frame-wise chord similarities. The resulting chord sequence is smoothed by the standard hidden Markov model (HMM)/Viterbi approach. The chord dictionary comprises the four main chord classes: major, minor, diminished, and dominant.
      </p>
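      <p>The frame-wise chord similarity step can be illustrated with a minimal template-matching sketch over a single 12-bin chroma frame. The binary templates below are simplified stand-ins for the Chordino profile dictionary, covering the four chord classes named above.</p>
      <preformat>
```python
import math

# Minimal chord-template matching over a 12-bin chroma frame (C, C#, ..., B).
# Binary templates are simplified stand-ins for the Chordino profiles; the
# four qualities mirror the chord dictionary described in the text.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
SHAPES = {"maj": [0, 4, 7], "min": [0, 3, 7], "dim": [0, 3, 6], "7": [0, 4, 7, 10]}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def best_chord(chroma):
    """Return the chord label whose template best matches the chroma frame."""
    scores = {}
    for root in range(12):
        for quality, intervals in SHAPES.items():
            template = [0.0] * 12
            for interval in intervals:
                template[(root + interval) % 12] = 1.0
            scores[NOTES[root] + quality] = cosine(chroma, template)
    return max(scores, key=scores.get)

# A frame with energy on A, C# and E is labelled A major.
frame = [0.0] * 12
for pitch_class in (9, 1, 4):  # A, C#, E
    frame[pitch_class] = 1.0
print(best_chord(frame))  # Amaj
```
      </preformat>
      <p>The full algorithm additionally smooths these frame-wise decisions with the HMM/Viterbi stage described above, which this sketch leaves out.</p>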
    </sec>
    <sec id="sec-7">
      <title>4. GUITAR TAB PROCESSING</title>
      <p>One of the driving factors behind the growth in online hand-annotated tabs is the ease with which they can be produced and shared by anyone. As a consequence, these tabs do not conform to any standard format and exist in many locations on the internet. We have therefore developed methods for mining the web for these tabs and parsing them to extract the required data.</p>
    </sec>
    <sec id="sec-8">
      <title>4.1 Tab mining</title>
      <p>The web crawler of the Hotttabs application uses 911tabs17, a guitar tablature search engine, to access 4.7 million tabs that have already been categorised by artist, title, tablature type, and rating. Additionally, we crawled 264 common chords from Chordie18 and Guitarsite19 to assist in the recognition of chords when parsing tab content.</p>
    </sec>
    <sec id="sec-9">
      <title>4.2 Tab parsing</title>
      <p>To interpret the tablature text from the HTML code and
the chord sequence from the tablature text, Hotttabs does
the following:</p>
      <p>Any HTML code is stripped from the tab, and "non-breaking space" or "new line" tags are expanded accordingly.</p>
      <p>Chord definitions indicating fingerings are interpreted and added to a chord dictionary for the remainder of the tablature (e.g. "C#m: 45664-"; in this sequence, the numbers indicate the finger positions along the guitar neck for the six guitar strings ordered by decreasing pitch from left to right, and the hyphen indicates that the string must not be plucked).</p>
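      <p>The chord-definition rule above can be sketched as a small parser; single-character fret symbols are assumed for simplicity, and an unplucked string is mapped to None.</p>
      <preformat>
```python
# Sketch of the chord-definition parsing rule described above: characters are
# read left to right for the six strings ordered by decreasing pitch, digits
# give fret positions, and "-" marks a string that must not be plucked.
# Single-digit frets are assumed for simplicity.

STRING_NAMES = ["e", "B", "G", "D", "A", "E"]  # high to low pitch

def parse_chord_definition(definition):
    """Turn e.g. 'C#m: 45664-' into (chord name, string-to-fret mapping)."""
    name, _, fingering = definition.partition(":")
    frets = [None if symbol == "-" else int(symbol)
             for symbol in fingering.strip()]
    return name.strip(), dict(zip(STRING_NAMES, frets))

print(parse_chord_definition("C#m: 45664-"))
# ('C#m', {'e': 4, 'B': 5, 'G': 6, 'D': 6, 'A': 4, 'E': None})
```
      </preformat>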
      <p>The tablature is divided up into sections based on the layout and any structural indicators (e.g. "Chorus").
15http://www.vamp-plugins.org/
16http://isophonics.net/nnls-chroma
17http://www.911tabs.com
18http://chordie.com
19http://www.guitarsite.com/chords</p>
      <p>[Figure 3: Hotttabs flow chart (app.: application; API: Application Programming Interface). User song queries are handled by the web app. through a REST Web API; the server combines the EchoNest and YouTube APIs, a search engine robot performing scheduled queries, a processing engine, the Hotttabs database, and audio and tab feature extractors.]</p>
      <p>When learning how to play guitar, one of the difficulties lies in knowing an increasing number of chords and their relative fingerings. Thus the chord vocabulary (i.e. the set of unique chords) used in a guitar tab is of interest to the learning guitarist. Additionally, both the number of chords required to play the song and the specific chords it contains (as some chords tend to be easier to play than others) influence the guitarist when choosing a guitar tab. For any given song it is common to find several guitar tabs with chord vocabularies of varying sizes. Indeed, simplified versions (e.g. to make the song more accessible and easier to play) or even complexified versions (e.g. to change the style or genre of the song by varying the harmonisation) of the original song are sometimes provided on guitar tab websites.</p>
      <p>Thus, to help the user choose between all the tabs retrieved for one seed song, we further cluster the guitar tabs into three categories based on the size of their chord vocabulary: easy, medium, and difficult. To do so we rank the tabs by the number of unique chords they each contain and then divide this ranked list into three clusters. The tab clusters are then displayed as tag clouds (aggregated collections of words or phrases used to describe a document), where each tag in the cloud shows the name of the website from which the tab was extracted as well as the chord vocabulary used in the tab (see bottom of Figure 4). Therefore users can know, at a glance, which chords are required to play a given tab and how many chords it contains, without having to browse each tab individually. By clicking on an item in the tag clouds the user is redirected to the full tab on the website where it was originally published. Although the difficulty of playing individual chords is not yet taken into account in the tab clustering process (which only uses the size of the tab's chord vocabulary), displaying the chord vocabulary in the tag cloud helps users to choose the most appropriate tabs for them, since they know which chords they have already learned and which chords they find difficult to play. However, as most guitarists consider some specific chords to be more difficult or more tiring for the hand than others due to their fingering constraints (e.g. barre chords), we will consider including a measure of chord (fingering) difficulty in future implementations of our tab clustering algorithm.</p>
      <p>Figure 4: Screenshot of the Hotttabs web application (http://isophonics.net/hotttabs/).</p>
    </sec>
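      <p>The vocabulary-based clustering described above can be sketched as follows; the tab identifiers and chord lists are illustrative, and the ranked tabs are split into three roughly equal groups.</p>
      <preformat>
```python
# Sketch of the three-way tab clustering: rank tabs by the size of their
# chord vocabulary (set of unique chords), then split the ranked list into
# easy, medium and difficult groups. Tab names and chords are illustrative.

def cluster_tabs(tabs):
    """`tabs` maps a tab identifier to its list of chord labels."""
    vocab_size = {tab: len(set(chords)) for tab, chords in tabs.items()}
    ranked = sorted(vocab_size, key=vocab_size.get)
    third = -(-len(ranked) // 3)  # ceiling division
    return {"easy": ranked[:third],
            "medium": ranked[third:2 * third],
            "difficult": ranked[2 * third:]}

tabs = {
    "site-a": ["G", "C", "D", "G", "C"],
    "site-b": ["G", "C", "D", "Em", "Am", "D7"],
    "site-c": ["G", "C", "D", "Em"],
}
print(cluster_tabs(tabs))
# {'easy': ['site-a'], 'medium': ['site-c'], 'difficult': ['site-b']}
```
      </preformat>
      <p>A future chord-difficulty measure, as discussed above, could replace the plain vocabulary count as the ranking key.</p>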
    <sec id="sec-10">
      <title>5. HOTTTABS WEB APPLICATION</title>
      <p>The Hotttabs application integrates the functionality
described in the previous sections in a web-based client-server
architecture.</p>
      <p>The client runs in most popular web browsers and provides an easy-to-use interface (see Figure 4). It allows the user to interact with the application and perform the following actions: i) query for popular songs; ii) retrieve a list of video tutorials and three sets of tab clusters (easy, medium, and difficult) for the selected popular song; iii) play a video, from a list of thumbnails, in an embedded video player synchronised with automatically extracted chords; iv) select and link to a tab from the tab clusters as one would from a search engine.</p>
      <p>
        In response to user interaction, the server performs the core functionality as described in Section 5.2. Concerning client-server communication, Hotttabs follows the Representational State Transfer (REST) style of web application design (see Figure 3). In this architecture web pages form a virtual state machine, allowing a user to progress through the application by selecting links, with each action resulting in a transition to the next state of the application by transferring a representation of that state to the user [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
    </sec>
    <sec id="sec-11">
      <title>5.1 Front end</title>
      <p>The lightweight client uses a combination of standard web technologies (HTML, CSS, JavaScript) and makes use of the jQuery20 library to dynamically load content from the server via XMLHttpRequest calls. This content includes the list of popular songs and the list of video thumbnails for a selected song. We developed client-side JavaScript code which interacts with the embedded YouTube player to display chord names next to the video. The chord sequence is requested when the user starts the video, and returned using JavaScript Object Notation (JSON) with timing information, which is used to synchronise the chords with the video. The tab clusters are displayed using an adapted version of the WP-Cumulus Flash-based tag cloud plugin21. This plugin utilises XML data generated on the server side from the results of the tab search and tab clustering algorithm.</p>
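      <p>The chord-video synchronisation can be illustrated with a small sketch: given the JSON-decoded chord sequence with timing information, the client looks up the chord active at the current playback position. The field names ("time", "chord") are illustrative, not the actual Hotttabs schema.</p>
      <preformat>
```python
import bisect

# Look up the chord active at a given playback time, given a JSON-decoded
# chord track. Field names are illustrative, not the Hotttabs schema.

chord_track = [
    {"time": 0.0, "chord": "G"},
    {"time": 2.5, "chord": "C"},
    {"time": 5.0, "chord": "D"},
]

def chord_at(track, position):
    """Return the chord whose start time most recently preceded position."""
    starts = [event["time"] for event in track]
    index = bisect.bisect_right(starts, position) - 1
    return track[max(index, 0)]["chord"]

print(chord_at(chord_track, 3.2))  # C
```
      </preformat>
      <p>The actual client performs this lookup in JavaScript against the embedded YouTube player's playback time; Python is used here only for illustration.</p>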
    </sec>
    <sec id="sec-12">
      <title>5.2 Back end</title>
      <p>
        The server side of the Hotttabs application builds on semantic audio and web technologies outlined in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The Sonic Annotator Web Application (SAWA) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], a Python22 framework for writing web applications involving audio analysis, is used as a basis for Hotttabs. This is extended with modules to access The Echo Nest and YouTube, and to perform additional application-specific functionality, as shown in Figure 3.
      </p>
      <p>
        The communication between the client and server is coordinated using the Model View Controller (MVC) architectural pattern [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Some important domain objects in the MVC model, as well as the Hotttabs database, are provided by the Music Ontology framework [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], such that corresponding data structures are generated from the ontology specification using the Music Ontology Python library [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. For instance, information about popular artists and their songs (retrieved from The Echo Nest) is stored in objects and database entries corresponding to the mo:Track23 and mo:MusicArtist concepts.
20http://jquery.com/
21http://wordpress.org/extend/plugins/wp-cumulus/
22http://www.python.org
      </p>
      <p>
        Besides responding to user interaction, the server also performs scheduled queries for popular songs to bootstrap the database. This is necessary since crawling for guitar tabs and the feature extraction process for chord analysis are too computationally expensive to be performed in real time. This process uses the crawler described in Section 4.1, as well as the chord extraction algorithm of [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], implemented as a Vamp audio analysis plugin [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] which can be loaded by the processing engine of SAWA.
      </p>
    </sec>
    <sec id="sec-13">
      <title>6. CONCLUSIONS AND PERSPECTIVES</title>
      <p>We presented Hotttabs, an online multimedia guitar tuition service comprising the following features: (i) the recommendation of popular songs based on The Echo Nest "hotttnesss" measure, taking into account the artists' popularity dynamically through web data mining; (ii) the retrieval of guitar video tutorials from the YouTube database; (iii) the visual feedback of the chord labels using a content-based music information retrieval technique; and (iv) the recommendation of guitar tablatures targeting users of different levels depending on the vocabulary of chords in the selected song.</p>
      <p>
        We plan to conduct a user survey in order to obtain feedback to feed into future technical developments of the application. We also intend to model user skills and assess performances in order to adapt which music and guitar tabs are recommended, based on the users' learning process. Interesting follow-ups to this work also include the development of a guitar chord fingering dictionary to display various possible chord fingerings along with the chord labels. The chord concurrence measure introduced in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] could be used to select the most accurate guitar tabs and discard erroneous ones. Future work will also address the development of new tab clustering methods based on chord sequence parsing, the integration of an audio/video time-stretching technique to allow for the slowing down of the video tutorials, and the synchronisation of guitar tabs and lyrics with the videos using sequence alignment.
      </p>
    </sec>
    <sec id="sec-14">
      <title>7. ACKNOWLEDGMENTS</title>
      <p>The authors would like to thank Matthias Mauch for his automatic chord recognition system. This work was partly funded by the Musicology for the Masses EPSRC project EP/I001832/1, and by the Platform Grant from the Centre for Digital Music funded by the EPSRC project EP/E045235/1.
23In this notation, the namespace prefix mo: represents the URL of the Music Ontology.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Aman</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Liikkanen</surname>
          </string-name>
          .
          <article-title>A survey of music recommendation aids</article-title>
          .
          <source>In Proceedings of the 1st Workshop on Music Recommendation and Discovery (WOMRAD) colocated with ACM RecSys</source>
          , pages
          <volume>25</volume>
          -
          <fpage>28</fpage>
          ,
          <string-name>
            <surname>Barcelona</surname>
          </string-name>
          , Spain,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Brandao</surname>
          </string-name>
          , G. Wiggins, and
          <string-name>
            <given-names>H.</given-names>
            <surname>Pain</surname>
          </string-name>
          .
          <article-title>Computers in music education</article-title>
          .
          <source>In Proceedings of the AISB'99 Symposium on Musical Creativity</source>
          , Edinburgh, UK,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Cannam</surname>
          </string-name>
          .
          <article-title>The VAMP audio analysis plugin API: A programmer's guide</article-title>
          . Available online: http://vamp-plugins.org/guide.pdf,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Celma</surname>
          </string-name>
          .
          <article-title>Music Recommendation and Discovery - The Long Tail, Long Fail, and Long Play in the Digital Music Space</article-title>
          . Springer,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y.-K. Cliff</given-names>
            <surname>Liao</surname>
          </string-name>
          .
          <article-title>Effects of computer-assisted instruction on students' achievement in Taiwan: A meta-analysis</article-title>
          .
          <source>Computers &amp; Education</source>
          ,
          <volume>48</volume>
          :
          <fpage>216</fpage>
          -
          <fpage>233</fpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Evans</surname>
          </string-name>
          .
          <article-title>Domain-Driven Design: Tackling Complexity in the Heart of Software</article-title>
          .
          <string-name>
            <surname>Addison-Wesley Professional</surname>
          </string-name>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>Fazekas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cannam</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Sandler</surname>
          </string-name>
          .
          <article-title>Reusable metadata and software components for automatic audio analysis</article-title>
          .
          <source>IEEE/ACM Joint Conference on Digital Libraries (JCDL'09) Workshop on Integrating Digital Library Content with Computational Tools and Services</source>
          , Austin, Texas, USA,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Fazekas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Raimond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Jakobson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Sandler</surname>
          </string-name>
          .
          <article-title>An overview of Semantic Web activities in the OMRAS2 project</article-title>
          .
          <source>Journal of New Music Research (special issue on Music Informatics and the OMRAS2 project)</source>
          ,
          <volume>39</volume>
          (
          <issue>4</issue>
          ):
          <fpage>295</fpage>
          –
          <lpage>311</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R. T.</given-names>
            <surname>Fielding</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. N.</given-names>
            <surname>Taylor</surname>
          </string-name>
          .
          <article-title>Principled design of the modern Web architecture</article-title>
          .
          <source>ACM Transactions on Internet Technology</source>
          ,
          <volume>2</volume>
          (
          <issue>2</issue>
          ):
          <fpage>115</fpage>
          –
          <lpage>150</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <collab>IRMA</collab>
          .
          <article-title>Le marché des instruments de musique : une facture bien réglée [The market for musical instruments: a well-tuned invoice]</article-title>
          . http://www.irma.asso.fr/LE-MARCHE-DES-INSTRUMENTS-DE?xtor=EPR-75, Last viewed in September
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>LeBlanc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. C.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Obert</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Siivola</surname>
          </string-name>
          .
          <article-title>Effect of audience on music performance anxiety</article-title>
          .
          <source>Journal of Research in Music Education</source>
          ,
          <volume>45</volume>
          (
          <issue>3</issue>
          ):
          <fpage>480</fpage>
          –
          <lpage>496</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.-J.</given-names>
            <surname>Lou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-C.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-Z.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.-C.</given-names>
            <surname>Shih</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.-Y.</given-names>
            <surname>Dzan</surname>
          </string-name>
          .
          <article-title>Applying computer-assisted musical instruction to music appreciation course: An example with Chinese musical instruments</article-title>
          .
          <source>The Turkish Online Journal of Educational Technology (TOJET)</source>
          ,
          <volume>10</volume>
          (
          <issue>1</issue>
          ),
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Macrae</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Dixon</surname>
          </string-name>
          .
          <article-title>Guitar tab mining, analysis and ranking</article-title>
          .
          In
          <source>Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011)</source>
          , Miami, Florida, USA,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mauch</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Dixon</surname>
          </string-name>
          .
          <article-title>Approximate note transcription for the improved identification of difficult chords</article-title>
          .
          In
          <source>Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR 2010)</source>
          , Utrecht, Netherlands,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mauch</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Dixon</surname>
          </string-name>
          .
          <article-title>Simultaneous estimation of chords and musical context from audio</article-title>
          .
          <source>IEEE Transactions on Audio, Speech, and Language Processing</source>
          ,
          <volume>18</volume>
          (
          <issue>6</issue>
          ):
          <fpage>1280</fpage>
          –
          <lpage>1289</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Naofumi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yoshiano</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tsuyoshi</surname>
          </string-name>
          .
          <article-title>A study of automated tablature generation for assisting learning playing the guitar</article-title>
          .
          <source>Technical Report 189, IEIC Technical Report (Institute of Electronics, Information and Communication Engineers)</source>
          , Japan,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Raimond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Abdallah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sandler</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Giasson</surname>
          </string-name>
          <article-title>The music ontology</article-title>
          .
          In
          <source>Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR 2007)</source>
          , Vienna, Austria,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>