<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jacopo de Berardinis</string-name>
          <email>jacopo.deberardinis@kcl.ac.uk</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Albert Meroño-Peñuela</string-name>
          <email>albert.merono@kcl.ac.uk</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Poltronieri</string-name>
          <email>andrea.poltronieri2@unibo.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentina Presutti</string-name>
          <email>valentina.presutti@unibo.it</email>
        </contrib>
        <aff>King's College London</aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <abstract>
        <p>The annotation of music content is a complex process to represent due to its inherently multifaceted, subjective, and interdisciplinary nature. Numerous systems and conventions for annotating music have been developed as independent standards over the past decades. Little has been done to make them interoperable, which jeopardises cross-corpora studies as it requires users to familiarise themselves with a multitude of conventions. Most of these systems lack the semantic expressiveness needed to represent the complexity of the musical language and cannot model multi-modal annotations originating from audio and symbolic sources. In this article, we introduce the Music Annotation Pattern, an Ontology Design Pattern (ODP) to homogenise different annotation systems and to represent several types of musical objects (e.g. chords, patterns, structures). This ODP preserves the semantics of the object's content at different levels and temporal granularity. Moreover, our ODP accounts for multi-modality upfront, to describe annotations derived from different sources, and it is the first to enable the integration of music datasets at a large scale.</p>
      </abstract>
      <kwd-group>
        <kwd>Semantic Web</kwd>
        <kwd>Ontology</kwd>
        <kwd>Music Information Retrieval</kwd>
        <kwd>Computational Musicology</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Similarly to other forms of artistic expression, the analysis of music can be considered a
quest for meaning – a process driven by musical theories and perceptual cues attempting to
shed light on the potentially ambiguous and intricate messages that artists have encoded in
their music [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Starting from a composition or a performance, music analysis usually focuses
on detecting elements related to harmony, form, texture, etc., along with the identification
of potential interrelated functions they may exert in the piece (creating or releasing tension,
evoking images, inducing emotions, etc.) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        At the core of this multifaceted process lies the ability to effectively annotate music. For
example, if the goal of a harmonic analysis is to identify chords from a composition, a music
annotation may correspond to a list of chords together with a reference to their onset (i.e.
when they occur in the piece). Besides contributing to the more general goal of understanding
music, these annotations are also of pedagogic interest (e.g. teaching material for classrooms in
analysis, harmony, or composition) and of musicological relevance. They also provide valuable
data for training and evaluating algorithmic methods for music information retrieval (MIR)
and computational music analysis (CMA), and for supporting performers studying scores and
preparing their own interpretation [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This interdisciplinary interest in music annotations has
also fuelled the development of novel applications and workflows focused on their collection
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], interaction [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and sharing [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Nevertheless, annotating music has always been a challenging task in many respects [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
Musical content is rich in components (voices, sections, etc.) and nuances (accents, prolongations,
modulations) that are often difficult to represent and to consistently relate to the content of an
annotation. Several types of musical notations have been introduced to address this problem,
although primarily focused on representing musical scores (cf. Section 2.1). Even the score
itself is based on conventions and symbols that have evolved diachronically – as musical periods
have changed – as well as stylistically – as musical genres vary [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This ever-changing aspect of
music is even more pronounced when focusing on representing music annotations. For example,
when annotating chords, different notation systems have been used over the years, starting
with the basso continuo, almost universally used in the Baroque era, to the modern Leadsheet
notations, mainly used to annotate chords in Jazz music [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        A multitude of notation systems have been developed, proposing different approaches on how
to annotate music. This fragmentation is reflected in a vast heterogeneity of file formats and
extensions, with consequent interoperability problems. When annotations are encoded within
a score, software tools for music processing and computer-aided musicology, like music21 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
and note-seq1, have rapidly evolved to parse a variety of symbolic formats2. When annotations
are decoupled from the music content [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ], these are often encoded using dataset-specific
standards and conventions. As a result, retrieving and integrating music annotations from
different sources is a challenging, time-consuming task, which stems from the encoding problem
and the lack of well-established standards for releasing music datasets [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. This brings a cascade
of effects: (i) it limits the ability to perform cross-corpora studies, especially in multi-modal
settings – involving both audio and score annotations; (ii) it introduces ambiguity in the annotations
due to the poor semantic expressiveness of current approaches; and (iii) it forces users to
familiarise themselves with a multitude of standards.
      </p>
      <p>In this article, we introduce the Music Annotation Pattern, an Ontology Design Pattern for
modelling a wide set of music annotations. The Music Annotation Pattern is a reusable block
for representing annotations of different types, from different sources, and addressing
heterogeneous timing conventions. The ODP has been used in preliminary experiments integrating
harmonic datasets (chord annotations from multiple sources) in the Polifonia project3. To the
best of our knowledge, it is the first attempt at achieving semantic interoperability of music
annotations collected from multi-modal sources.</p>
      <sec id="sec-1-1">
        <title>1https://github.com/magenta/note-seq 2See, for example https://web.mit.edu/music21/doc/moduleReference/moduleConverter.html 3https://polifonia-project.eu</title>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        The complexity of representing musical content is related to the manifold sources that are
available when studying music. To contextualise this process, Vinet [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] introduces two different
Representation Levels to categorise different types of music content: signal representations and
symbolic representations. A symbolic representation is context-aware and describes events in
relation to formalised concepts of music (music theory), whereas a signal representation is
blind and context-unaware, and is thus suited to transmitting any kind of sound, including
non-musical and even non-audible signals.
      </p>
      <p>In this paper, we focus on symbolic representation systems and how these can be semantically
described to address the three challenges outlined in the introduction.4</p>
      <p>4 This does not imply that a symbolic annotation cannot also refer to audio music (i.e. tracks, recordings).</p>
      <sec id="sec-2-1">
        <title>2.1. Modelling scores and score-embedded annotations</title>
        <p>
          Over the years, various representation systems have been developed, some of which are still
used today. A notable example is MIDI (Musical Instrument Digital Interface) [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], which also
provides a data communication protocol for music production and live performance. A MIDI
file can be described as a stream of events, each defined by two components: MIDI time and
MIDI message. The time value describes the time to wait (a temporal offset) before executing
the following message. The message value, instead, is a sequence of bytes, where the first one
is a command, often followed by complementary data.
        </p>
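        <p>This event model can be made concrete with a few lines of code. The following is a minimal sketch using the third-party mido library (our choice for illustration, not one discussed in the paper); the note numbers and delta times are illustrative.</p>
        <preformat># A minimal sketch of the MIDI event model: each event pairs a delta time
# (ticks to wait) with a message whose first byte is a command.
from mido import Message, MidiFile, MidiTrack

mid = MidiFile(ticks_per_beat=480)
track = MidiTrack()
mid.tracks.append(track)

track.append(Message('note_on', note=60, velocity=64, time=0))     # C4, immediately
track.append(Message('note_off', note=60, velocity=64, time=480))  # one beat later

mid.save('example.mid')</preformat>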
        <p>
          The ABC notation [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] is a text-based music notation system and the de facto standard for
folk and traditional music. An ABC tune consists of a tune header and a tune body, terminated
by an empty line or the end of the file. The tune header contains the tune’s metadata, and can
be filled with 27 different fields that describe composer, tempo, rhythm, source, etc. The tune
body, instead, describes the actual music content, such as notes, rests, bars, chords, and clefs.
        </p>
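        <p>As an illustration, the snippet below sketches a short ABC tune (header fields followed by the tune body) and parses it with music21's ABC importer; the tune itself and the choice of library are ours, not taken from the paper.</p>
        <preformat># A small, assumed example: an ABC tune header (X, T, C, M, L, K fields)
# and body, parsed with music21's ABC importer.
from music21 import converter

abc_tune = """X:1
T:Example tune
C:Trad.
M:4/4
L:1/8
K:C
CDEF GABc | c2 G2 E2 C2 |]
"""

score = converter.parse(abc_tune, format='abc')
score.show('text')  # print the parsed measures and notes</preformat>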
        <p>
          MusicXML [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] is an XML-based music interchange language. It is intended to represent
common western musical notation from the seventeenth century onwards, including both
classical and popular music. Similarly to MIDI, MusicXML defines both an interchange language
and a file format (in this case XML).
        </p>
        <p>
          The Music Encoding Initiative (MEI) [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] is a community-driven, open-source effort to define
a system for encoding musical documents in a machine-readable structure. The community
formalised the MEI schema, a core set of rules for recording physical and intellectual
characteristics of music notation documents, expressed with an XML schema. This framework aims at
preserving XML compatibility while expressing a wide range of musical nuances.
        </p>
        <p>
          Other systems of symbolic notation include the CHARM system [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], **kern [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] and
LilyPond [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. All these formats differ dramatically in their syntax, which may exacerbate the
interoperability problem and the consequent fragmentation of music data.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Modelling decoupled annotations</title>
        <p>
          To overcome these problems, annotation standards have been proposed to decouple annotations
from the scores, and to encode them in a separate yet unified format. The most notable example
is the Annotated Music Specification for Reproducible MIR Research (JAMS) [
          <xref ref-type="bibr" rid="ref21 ref22">21, 22</xref>
          ], a
JSON-based format to encode music annotations. It is primarily used to train and evaluate MIR
algorithms, especially in the audio domain. JAMS supports the annotation of several music
object types – from notes and chords to patterns and emotions – unambiguously defining the
onset, duration, value and confidence of each observation (e.g. "C:major" starting at second
3, lasting for 4 seconds, detected with a confidence level of 90%). This standard also offers the
possibility of storing multiple and heterogeneous annotations in the same file, as long as they
pertain to the same piece. Notably, JAMS provides a loose schema to record metadata, both
related to the track (title, artists, etc.) and to each annotation (annotator, annotation tools, etc.).
        </p>
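        <p>As a concrete illustration of this observation model, the following sketch encodes the chord example above with the jams Python package that accompanies the specification; the title, duration and curator values are illustrative placeholders.</p>
        <preformat># Sketch of a JAMS chord observation: "C:maj" at second 3, lasting 4 seconds,
# confidence 0.9. Metadata values below are placeholders for illustration.
import jams

jam = jams.JAMS()
jam.file_metadata.title = 'Example track'   # loose track-level metadata
jam.file_metadata.duration = 180.0          # total duration in seconds

chords = jams.Annotation(namespace='chord')
chords.annotation_metadata = jams.AnnotationMetadata(
    curator=jams.Curator(name='A. Annotator'))

# One observation: onset, duration, value and confidence.
chords.append(time=3.0, duration=4.0, value='C:maj', confidence=0.9)

jam.annotations.append(chords)
jam.save('example.jams')</preformat>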
      <p>Nonetheless, JAMS only supports annotations collected from signal representations (audio), as it
was not originally designed for the symbolic domain. This is due to a discrepancy between
audio-based annotations – expressing temporal information in absolute times (seconds) – and
symbolic annotations – using relative or metrical temporal anchors (e.g. beats, measures).
Also, from a descriptive perspective, it is not possible to disambiguate certain attributes in the
metadata sections. For instance, the "artist" field in the current JAMS definition may refer to
the composer or to the performer of the piece. Finally, JAMS is limited to the expressiveness
of JSON, which does not allow for the semantic expression of concepts that are sometimes
essential for describing musical content. For example, even if the specification of composers and
performers were possible in the standard, this would still be insufficient to express the semantic
relationships occurring between these concepts.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Modelling semantics in music data</title>
        <p>
          To encode semantics in music data, and account for the ambiguity problem in music annotations,
Semantic Web technologies can be useful, as shown in other domains such as Cultural Heritage
[
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. Over the past two decades, several ontologies have been developed in the music domain.
Some ontologies have been designed for describing high-level descriptive and cataloguing
information, such as the Music Ontology [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] and the DOREMUS Ontology [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ].
        </p>
        <p>
          Other ontologies describe musical notation, both from the music score and the symbolic points
of view. For example, the MIDI Linked Data Cloud [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] models symbolic music descriptions
encoded in MIDI format. The Music Theory Ontology (MTO) [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] aims to describe theoretical
concepts related to a music composition, while The Music Score Ontology (Music OWL) [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ]
represents similar concepts with a focus on music sheet notation. Finally, the Music Notation
Ontology [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ] focuses on the core “semantic” information present in a score. The Music
Encoding and Linked Data framework (MELD) [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ] reuses multiple ontologies, such as the
Music and Segment Ontologies and FRBR, in order to describe real-time annotation of digital music
scores. The Music Note Ontology [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ] proposes to model the relationships between a symbolic
representation and the audio representation, but only considering the structure of the music
score and the granularity level of the music note.
        </p>
      <p>Each of these ontologies covers a specific aspect of music notation. Our ODP reuses and
extends their modelling solutions to provide a comprehensive, scalable and coherent representation
of music annotations.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. The Music Annotation Pattern</title>
      <p>
        The Music Annotation ODP addresses the goal of modelling different types of musical
annotations. For example, this ODP can be used to describe musical chords, notes, and patterns
(both harmonic and melodic), as well as structural annotations. The Music Annotation ODP also aims to
represent annotations derived from different types of sources, such as audio and score.
The ODP is represented in Figure 1 and it is available online at the following URI:
https://purl.org/andreapoltronieri/music-annotation-pattern
The complete implementation and documentation of the pattern, as well as
all the examples presented in this paper, are available on a dedicated GitHub repository5.
To be consistent with the practice of the Music Information Retrieval community, we reuse the
terminology from JAMS6 [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. The following terms are used for the ODP vocabulary:
• Music Annotation: a music annotation is defined as a group of MusicObservations (see
below) that share certain elements, such as the method used for the annotation and the
type of object being annotated (e.g. chords, notes, patterns); an annotation has one and
only one annotator, which can be of different types (e.g. a human, a computational method)
and which is the same for all its observations.
• Music Observation: a music observation is defined as the content of a music annotation.
      </p>
      <p>It includes all the elements that characterise the observation. For example, in the case of
an annotation of chords, each observation is associated with one chord, and it specifies,
in addition to the chord value, its related temporal information and the confidence of the
annotator for that observation.</p>
      <p>The structure of the Music Annotation ODP consists of the relations between a MusicAnnotation
and its MusicObservations.</p>
      <p>An integration effort over a set of datasets containing chord annotations, in the context of the
Polifonia project3, provided a useful empirical ground to define a set of competency questions
(CQs) to drive the design of the Music Annotation ODP. They are listed in Table 1. Each
competency question is associated with a corresponding SPARQL query; they are all available
on the project's GitHub repository5.</p>
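      <p>To illustrate, the sketch below shows how one such competency question (CQ4: which observations are included in an annotation?) can be expressed as a SPARQL query and run with rdflib over a toy graph; the namespace IRI and instance names are placeholders, not the ones used in the repository.</p>
      <preformat># Hedged sketch: CQ4 as a SPARQL query over a toy graph (rdflib).
# The MAP namespace IRI and the instance names are placeholders.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
MAP = Namespace("https://purl.org/andreapoltronieri/music-annotation-pattern/")

g = Graph()
g.add((EX.ScoreAnnotation, MAP.includesMusicObservation, EX.ChordObservation1))
g.add((EX.ScoreAnnotation, MAP.includesMusicObservation, EX.ChordObservation2))

# CQ4: "Which are the observations included in an annotation?"
cq4 = """
SELECT ?observation WHERE {
    ?annotation map:includesMusicObservation ?observation .
}
"""
for row in g.query(cq4, initNs={"map": MAP}):
    print(row.observation)</preformat>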
      <p>
        The ODP was modelled by following a CQ-driven approach [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ], and by reusing a JAMS-based
terminology.
      </p>
      <p>Annotation. Addressing CQ1, CQ4, CQ10: a MusicAnnotation is to be intended as a
collection of MusicObservations about a MusicalObject. By musical object, in this context, we refer
to a concept generalising over audio tracks and scores. MusicAnnotations can be of two types:
ScoreMusicAnnotation and AudioMusicAnnotation.</p>
      <p>5 Music Annotation Pattern repository: https://github.com/andreamust/music-annotation-pattern</p>
      <p>6 Official JAMS documentation: https://jams.readthedocs.io/en/stable/</p>
      <table-wrap id="tab-1">
        <label>Table 1</label>
        <caption><p>Competency questions (CQs) for the Music Annotation ODP.</p></caption>
        <table>
          <thead>
            <tr><th>ID</th><th>Competency questions</th></tr>
          </thead>
          <tbody>
            <tr><td>CQ1</td><td>What is the type of a music annotation/observation for a musical object?</td></tr>
            <tr><td>CQ2</td><td>What is the time frame within the musical object addressed by an annotation?</td></tr>
            <tr><td>CQ3</td><td>What is its start time (i.e. the starting time of the time frame)?</td></tr>
            <tr><td>CQ4</td><td>Which are the observations included in an annotation?</td></tr>
            <tr><td>CQ5</td><td>For a specific music observation, what is the starting point of its addressed time frame, within its reference musical object?</td></tr>
            <tr><td>CQ6</td><td>For a specific music observation, what is its addressed time frame, within the musical object?</td></tr>
            <tr><td>CQ7</td><td>What is the value of a music observation?</td></tr>
            <tr><td>CQ8</td><td>Who/what is the annotator of a music annotation/observation, and what is its type?</td></tr>
            <tr><td>CQ9</td><td>What is the confidence of a music observation?</td></tr>
            <tr><td>CQ10</td><td>What is the musical object addressed by a music annotation?</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>Time information. Addressing CQ2, CQ3, CQ5, CQ6. The temporal information of a
MusicAnnotation and of a MusicObservation is expressed in the same way, thus effectively
creating an independent pattern for describing musical time intervals. This pattern is composed
of a MusicTimeInterval, which in turn defines a MusicTimeIndex and a MusicTimeDuration. They
indicate the time frame, within the referenced musical object, addressed by a music
annotation/observation. More specifically, a MusicTimeIndex defines the start point of the annotation,
while MusicTimeDuration describes the duration of the annotation.</p>
      <p>Each MusicTimeIndex is composed of one or more components, namely
MusicTimeIndexComponents. The latter, as well as the MusicTimeDuration, defines the
value of the temporal annotation via a datatype property hasTimeValue, which has as range
rdfs:Literal, and the format of the annotation itself, expressed by the MusicTimeValueType
class.</p>
      <p>In the case of AudioMusicAnnotation and AudioMusicObservation, the start time of the annotation
shall be expressed by a single MusicTimeIndexComponent, which will have as MusicTimeValueType
a time format in seconds, minutes or milliseconds. Instead, in the case of ScoreAnnotation and
ScoreObservation, two MusicTimeIndexComponents will be needed to describe the start time: the
first describes the measure in which the annotation begins and the second the beat within
that measure at which the annotation starts.</p>
      <preformat>Class: MusicTimeInterval
    SubClassOf:
        hasMusicTimeDuration only MusicTimeDuration,
        hasMusicTimeIndex only MusicTimeIndex,
        hasMusicTimeDuration exactly 1 MusicTimeDuration,
        hasMusicTimeIndex exactly 1 MusicTimeIndex

Class: MusicTimeIndex
    SubClassOf:
        hasMusicTimeIndexComponent only MusicTimeIndexComponent,
        hasMusicTimeIndexComponent min 1 MusicTimeIndexComponent

Class: MusicTimeIndexComponent
    SubClassOf:
        hasMusicTimeValueType only MusicTimeValueType,
        hasMusicTimeValueType exactly 1 MusicTimeValueType,
        hasTimeValue only rdfs:Literal,
        hasTimeValue exactly 1 rdfs:Literal</preformat>
      <p>Annotator. Addressing CQ8. Annotations have one and only one Annotator, a relation
expressed through the object property hasAnnotator. Annotators are classified by their type
(AnnotatorType), for example Human, Machine, Crowdsourcing, etc., which is exactly one.</p>
      <preformat>ObjectProperty: hasAnnotator
    SubPropertyChain:
        isAnnotatorOf o includesMusicObservation
    Domain:
        MusicAnnotation
    Range:
        Annotator</preformat>
      <p>Music Observation. Addressing CQ1, CQ4, CQ7, CQ9. Each MusicAnnotation includes
a set of MusicObservations. MusicObservations can be of two types: ScoreMusicObservation
and AudioMusicObservation. The type of an observation must be compatible with the type
of the annotation that contains it. If the annotation is a ScoreMusicAnnotation, it contains
ScoreMusicObservations; otherwise, it contains AudioMusicObservations. The annotator (and
its type) of an observation is the same as, and derived only from, the annotation that includes it: this is
formalised by means of a property chain in the ODP. However, the level of confidence of an
annotator is associated with each observation (hasConfidence).</p>
      <p>Each MusicObservation has a MusicObservationValue, which characterises its content. The
MusicObservationValue class is meant to be specialised depending on the subject being observed
(and annotated), e.g. Chord, Note, Structural Annotation. For example, it can generalise over
concepts from existing ontologies, such as the Chord Ontology7 for chord annotations. Musical
object, music annotation, music observation, music observation value, music time interval,
annotator, and annotator type are disjoint concepts.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Usage Example</title>
      <p>In this section, we describe two examples of usage of the Music Annotation ODP. Recall
that this ODP addresses different types of annotations for different types of sources (e.g. score,
audio). The examples show how the Music Annotation ODP can be used to describe: (i) musical
chord annotations and (ii) structural annotations of a song.</p>
      <p>7 Chord Ontology documentation available at: http://motools.sourceforge.net/chord_draft_1/chord.html</p>
      <sec id="sec-4-1">
        <title>4.1. Chord Annotations</title>
        <p>
          The first example is an annotation of chords from a music score of Wolfgang Amadeus Mozart's
Piano Sonata no. 1 in C major (Allegro). The original annotation is taken from the Mozart Piano
Sonatas Dataset [
          <xref ref-type="bibr" rid="ref33">33</xref>
          ]. Figure 2 depicts the resulting RDF graph using the Graffoo notation8. In
all the examples, a dummy prefix and namespace (ex: and http://example.org/) are defined for
instances.
        </p>
        <p>In this case, the MusicalObject is a musical score, defined by the ex:MozartPianoSonataScore
instance, which has ex:ScoreAnnotation as its annotation. The annotation is linked to its
annotator, in this case a human, and to its MusicTimeInterval. The MusicTimeInterval defines
the duration of the annotation, by means of the MusicTimeDuration class, and the start point of
the annotation, by means of the MusicTimeIndex class. The latter, since the annotation is of type
score, contains two different MusicTimeIndexComponents: the first has as its MusicTimeValueType
an ex:Measure, which indicates the measure at which the annotation starts, while the second has
as value type an ex:Beat, which describes the beat within the measure at which the annotation
begins. Duration is instead expressed only in beats.</p>
        <p>The annotation then contains two different observations (the actual number has been reduced
for demonstration purposes), namely ex:ChordObservation1 and ex:ChordObservation2.</p>
        <p>Each of these observations has a value, i.e. the chord per se, and a time interval. In this
example, observations have no Confidence, as this is not provided by the original annotation.</p>
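        <p>Since Figure 2 is not reproduced here, the following rdflib sketch gives a rough textual rendering of the graph just described; property names not explicitly introduced in Section 3 (e.g. hasMusicAnnotation, hasMusicTimeInterval, hasMusicObservationValue) and the measure/beat values are our placeholders.</p>
        <preformat># Hedged sketch of the chord-annotation instance graph, built with rdflib.
# IRIs under MAP that are not named in the paper, and the numeric values,
# are illustrative placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
MAP = Namespace("https://purl.org/andreapoltronieri/music-annotation-pattern/")

g = Graph()
g.bind("ex", EX)
g.bind("map", MAP)

# The musical object (a score) and its score-level annotation.
g.add((EX.MozartPianoSonataScore, RDF.type, MAP.MusicalObject))
g.add((EX.MozartPianoSonataScore, MAP.hasMusicAnnotation, EX.ScoreAnnotation))
g.add((EX.ScoreAnnotation, RDF.type, MAP.ScoreMusicAnnotation))
g.add((EX.ScoreAnnotation, MAP.hasAnnotator, EX.HumanAnnotator))
g.add((EX.ScoreAnnotation, MAP.includesMusicObservation, EX.ChordObservation1))
g.add((EX.ScoreAnnotation, MAP.includesMusicObservation, EX.ChordObservation2))

# Score-based start time: one index component for the measure, one for the beat.
g.add((EX.ScoreAnnotation, MAP.hasMusicTimeInterval, EX.AnnotationInterval))
g.add((EX.AnnotationInterval, RDF.type, MAP.MusicTimeInterval))
g.add((EX.AnnotationInterval, MAP.hasMusicTimeIndex, EX.AnnotationStart))
g.add((EX.AnnotationStart, MAP.hasMusicTimeIndexComponent, EX.StartMeasure))
g.add((EX.StartMeasure, MAP.hasMusicTimeValueType, EX.Measure))
g.add((EX.StartMeasure, MAP.hasTimeValue, Literal(1)))
g.add((EX.AnnotationStart, MAP.hasMusicTimeIndexComponent, EX.StartBeat))
g.add((EX.StartBeat, MAP.hasMusicTimeValueType, EX.Beat))
g.add((EX.StartBeat, MAP.hasTimeValue, Literal(1)))

# Duration expressed only in beats.
g.add((EX.AnnotationInterval, MAP.hasMusicTimeDuration, EX.AnnotationDuration))
g.add((EX.AnnotationDuration, MAP.hasMusicTimeValueType, EX.Beat))
g.add((EX.AnnotationDuration, MAP.hasTimeValue, Literal(4)))

# Each observation carries its own value (the chord itself).
g.add((EX.ChordObservation1, RDF.type, MAP.ScoreMusicObservation))
g.add((EX.ChordObservation1, MAP.hasMusicObservationValue, EX.CMajorChord))

print(g.serialize(format="turtle"))</preformat>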
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Structural Annotations</title>
        <p>
          The second example is an annotation of segments from an audio track of The Beatles' Michelle.
The original annotation is available in JAMS format and is taken from the Isophonics9 dataset [
          <xref ref-type="bibr" rid="ref34">34</xref>
          ].
Figure 3 depicts the example graphically using the Graffoo notation.
        </p>
        <p>In this example, the MusicalObject is instead a track, defined by the ex:BeatlesMichelleTrack
instance, which has an ex:AudioMusicAnnotation, as it was annotated from the audio signal.
The annotation has a human-type annotator and an annotation time interval.</p>
        <p>The annotation then contains two different SegmentObservations, which define the
structure of the track. Each observation has a starting time and a duration, defined by the classes
MusicTimeIndex and MusicTimeDuration, respectively. In this case, there is only a single
MusicTimeIndexComponent, since the time information is expressed in seconds (ex:Seconds).
Finally, the value of each observation corresponds to the structural segment itself, in this case
ex:Silence and ex:Intro.</p>
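        <p>A matching sketch for the audio case is given below; as before, it is an illustrative reconstruction rather than the exact graph of Figure 3, and the time values in seconds are placeholders.</p>
        <preformat># Hedged sketch of the structural-annotation instance graph (audio case),
# built with rdflib; non-named IRIs and the second values are placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
MAP = Namespace("https://purl.org/andreapoltronieri/music-annotation-pattern/")

g = Graph()
g.bind("ex", EX)
g.bind("map", MAP)

g.add((EX.BeatlesMichelleTrack, RDF.type, MAP.MusicalObject))
g.add((EX.BeatlesMichelleTrack, MAP.hasMusicAnnotation, EX.StructuralAnnotation))
g.add((EX.StructuralAnnotation, RDF.type, MAP.AudioMusicAnnotation))
g.add((EX.StructuralAnnotation, MAP.hasAnnotator, EX.HumanAnnotator))
g.add((EX.StructuralAnnotation, MAP.includesMusicObservation, EX.SegmentObservation1))
g.add((EX.StructuralAnnotation, MAP.includesMusicObservation, EX.SegmentObservation2))

# Audio time: a single index component whose value type is seconds.
g.add((EX.SegmentObservation1, RDF.type, MAP.AudioMusicObservation))
g.add((EX.SegmentObservation1, MAP.hasMusicObservationValue, EX.Silence))
g.add((EX.SegmentObservation1, MAP.hasMusicTimeInterval, EX.Interval1))
g.add((EX.Interval1, MAP.hasMusicTimeIndex, EX.Start1))
g.add((EX.Start1, MAP.hasMusicTimeIndexComponent, EX.StartSeconds1))
g.add((EX.StartSeconds1, MAP.hasMusicTimeValueType, EX.Seconds))
g.add((EX.StartSeconds1, MAP.hasTimeValue, Literal(0.0)))
g.add((EX.Interval1, MAP.hasMusicTimeDuration, EX.Duration1))
g.add((EX.Duration1, MAP.hasMusicTimeValueType, EX.Seconds))
g.add((EX.Duration1, MAP.hasTimeValue, Literal(2.4)))

g.add((EX.SegmentObservation2, MAP.hasMusicObservationValue, EX.Intro))

print(g.serialize(format="turtle"))</preformat>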
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Work</title>
      <p>We propose the Music Annotation ODP for modelling annotations of music scores and audio
tracks. A distinction at the core of this ODP is the different encoding of time information, which
depends on the type of the subject of observation (score or audio). The ODP is the result of
the analysis of many relevant existing formats used for music annotation (MusicXML,
ABC, JAMS, etc.) and provides a template for supporting the integration of data from such
heterogeneous sources. This work demonstrated the use of the ODP for modelling harmonic
and structural annotations (chords, segments) collected from symbolic and audio sources. We
plan to follow up with a large-scale integration experiment on a selection of MIR datasets, and
with the extension of our pattern to model additional types of music annotations.</p>
      <p>8 https://essepuntato.it/graffoo/</p>
      <p>9 Isophonics dataset: http://isophonics.net/datasets</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This project has received funding from the European Union’s Horizon 2020 research and
innovation programme under grant agreement No 101004746.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pople</surname>
          </string-name>
          ,
          <article-title>Theory, analysis and meaning in music</article-title>
          , Cambridge University Press,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Johnston</surname>
          </string-name>
          ,
          <article-title>Harmony and climax in the late works of Sergei Rachmaninoff</article-title>
          , University of Michigan,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Giraud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Groult</surname>
          </string-name>
          , E. Leguy,
          <article-title>Dezrann, a web framework to share music analysis</article-title>
          ,
          <source>in: International Conference on Technologies for Music Notation and Representation (TENOR</source>
          <year>2018</year>
          ),
          <year>2018</year>
          , pp.
          <fpage>104</fpage>
          -
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Turnbull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Barrington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Lanckriet</surname>
          </string-name>
          ,
          <article-title>A game-based approach for collecting semantic annotations of music</article-title>
          .,
          <source>in: ISMIR</source>
          , volume
          <volume>7</volume>
          ,
          <year>2007</year>
          , pp.
          <fpage>535</fpage>
          -
          <lpage>538</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Pugin</surname>
          </string-name>
          ,
          <article-title>Interaction perspectives for music notation applications</article-title>
          ,
          <source>in: Proceedings of the 1st International Workshop on Semantic Applications for Audio and Music</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hadjakos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Iffland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Keil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Oberhoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Veit</surname>
          </string-name>
          ,
          <article-title>Challenges for annotation concepts in music</article-title>
          ,
          <source>International Journal of Humanities and Arts Computing</source>
          <volume>11</volume>
          (
          <year>2017</year>
          )
          <fpage>255</fpage>
          -
          <lpage>275</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P. E.</given-names>
            <surname>Savage</surname>
          </string-name>
          , Cultural evolution of music,
          <source>Palgrave Communications</source>
          <volume>5</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kite-Powell</surname>
          </string-name>
          ,
          <article-title>A performer's guide to seventeenth-century music</article-title>
          , Indiana University Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Cuthbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ariza</surname>
          </string-name>
          ,
          <article-title>music21: A toolkit for computer-aided musicology and symbolic music data</article-title>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Eremenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Demirel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bozkurt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Serra</surname>
          </string-name>
          , Jaah:
          <article-title>Audio-aligned jazz harmony dataset</article-title>
          ,
          <year>2018</year>
          . URL: https://doi.org/10.5281/zenodo.1290737.
          doi:10.5281/zenodo.1290737.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Neuwirth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Harasim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. C.</given-names>
            <surname>Moss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rohrmeier</surname>
          </string-name>
          ,
          <article-title>The annotated beethoven corpus (abc): A dataset of harmonic analyses of all beethoven string quartets, Frontiers in Digital Humanities (</article-title>
          <year>2018</year>
          )
          <fpage>16</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Carriero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ciroku</surname>
          </string-name>
          , J. de Berardinis,
          <string-name>
            <given-names>D. S. M.</given-names>
            <surname>Pandiani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Meroño-Peñuela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Poltronieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Presutti</surname>
          </string-name>
          ,
          <article-title>Semantic integration of mir datasets with the polifonia ontology network</article-title>
          ,
          <source>in: ISMIR Late Breaking Demo</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H.</given-names>
            <surname>Vinet</surname>
          </string-name>
          ,
          <article-title>The representation levels of music information</article-title>
          , in: U. K. Wiil (Ed.),
          <source>Computer Music Modeling and Retrieval</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>International MIDI Association</string-name>
          ,
          <article-title>MIDI Musical Instrument Digital Interface Specification 1.0</article-title>
          ,
          <source>Technical Report</source>
          , Los Angeles,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Walshaw</surname>
          </string-name>
          ,
          <source>The ABC music standard 2</source>
          .1.,
          <source>Technical Report</source>
          , abcnotation.com,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Good</surname>
          </string-name>
          ,
          <string-name>
            <surname>Musicxml:</surname>
          </string-name>
          <article-title>An internet-friendly format for sheet music</article-title>
          ,
          <source>in: XML conference and expo,</source>
          <year>2001</year>
          , pp.
          <fpage>03</fpage>
          -
          <lpage>04</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.</given-names>
            <surname>Roland</surname>
          </string-name>
          ,
          <article-title>The music encoding initiative (MEI)</article-title>
          ,
          <source>in: Proceedings of the First International Conference on Musical Applications Using XML</source>
          , volume
          <volume>1060</volume>
          ,
          <year>2002</year>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>59</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Smaill</surname>
          </string-name>
          , G. Wiggins,
          <string-name>
            <given-names>M.</given-names>
            <surname>Harris</surname>
          </string-name>
          ,
          <article-title>Hierarchical music representation for composition and analysis</article-title>
          ,
          <source>Computers and the Humanities</source>
          <volume>27</volume>
          (
          <year>1993</year>
          )
          <fpage>7</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>D.</given-names>
            <surname>Huron</surname>
          </string-name>
          ,
          <article-title>Music information processing using the humdrum toolkit: Concepts, examples, and lessons</article-title>
          ,
          <source>Computer Music Journal</source>
          <volume>26</volume>
          (
          <year>2002</year>
          )
          <fpage>11</fpage>
          -
          <lpage>26</lpage>
          . URL: http://www.jstor.org/stable/ 3681454.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>H.-W.</given-names>
            <surname>Nienhuys</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nieuwenhuizen</surname>
          </string-name>
          ,
          <article-title>Lilypond, a system for automated music engraving</article-title>
          ,
          <source>in: Proceedings of the XIV Colloquium on Musical Informatics (XIV CIM</source>
          <year>2003</year>
          ), volume
          <volume>1</volume>
          ,
          <year>2003</year>
          , pp.
          <fpage>167</fpage>
          -
          <lpage>171</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Humphrey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Salamon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Nieto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Forsyth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Bittner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Bello</surname>
          </string-name>
          ,
          <article-title>JAMS: A JSON annotated music specification for reproducible MIR research</article-title>
          , in:
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Lee</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 15th International Society for Music Information Retrieval Conference</source>
          ,
          <string-name>
            <surname>ISMIR</surname>
          </string-name>
          <year>2014</year>
          , Taipei, Taiwan,
          <source>October 27-31</source>
          ,
          <year>2014</year>
          ,
          <year>2014</year>
          , pp.
          <fpage>591</fpage>
          -
          <lpage>596</lpage>
          . URL: http://www.terasoft.com.tw/conf/ismir2014/proceedings/T106_355_Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>B.</given-names>
            <surname>McFee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Humphrey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Nieto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Salamon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Bittner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Forsyth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Bello</surname>
          </string-name>
          ,
          <article-title>Pump Up The JAMS: V0.2 And Beyond</article-title>
          ,
          <source>Technical Report</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Carriero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gangemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Mancinelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Marinucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. G.</given-names>
            <surname>Nuzzolese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Presutti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Veninata</surname>
          </string-name>
          ,
          <article-title>Arco: The italian cultural heritage knowledge graph</article-title>
          , in: C.
          <string-name>
            <surname>Ghidini</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <string-name>
            <surname>Hartig</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Maleshkova</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Svátek</surname>
            ,
            <given-names>I. Cruz</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hogan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lefrançois</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Gandon</surname>
          </string-name>
          (Eds.),
          <source>The Semantic Web - ISWC 2019</source>
          , Springer International Publishing, Cham,
          <year>2019</year>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>52</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Raimond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Abdallah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sandler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giasson</surname>
          </string-name>
          ,
          <article-title>The music ontology</article-title>
          ,
          <source>in: Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR</source>
          <year>2007</year>
          ), Vienna, Austria,
          <year>2007</year>
          , pp.
          <fpage>417</fpage>
          -
          <lpage>422</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lisena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Troncy</surname>
          </string-name>
          ,
          <article-title>Doing reusable musical data (DOREMUS), in: Proceedings of Workshops and Tutorials of the 9th International Conference on Knowledge Capture (K-CAP2017</article-title>
          ), Austin, Texas, USA,
          <year>December 4th</year>
          ,
          <year>2017</year>
          , volume
          <volume>2065</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>64</fpage>
          -
          <lpage>68</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Meroño-Peñuela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hoekstra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gangemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bloem</surname>
          </string-name>
          , R. de Valk,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stringer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Janssen</surname>
          </string-name>
          , V. de Boer,
          <string-name>
            <given-names>A.</given-names>
            <surname>Allik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schlobach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Page</surname>
          </string-name>
          ,
          <article-title>The MIDI Linked Data Cloud</article-title>
          ,
          <source>in: The Semantic Web - ISWC 2017</source>
          , Springer International Publishing, Cham,
          <year>2017</year>
          , pp.
          <fpage>156</fpage>
          -
          <lpage>164</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Rashid</surname>
          </string-name>
          ,
          <string-name>
            <surname>D. De Roure</surname>
            ,
            <given-names>D. L.</given-names>
          </string-name>
          <string-name>
            <surname>McGuinness</surname>
          </string-name>
          ,
          <article-title>A music theory ontology</article-title>
          ,
          <source>in: Proceedings of the 1st International Workshop on Semantic Applications for Audio and Music</source>
          , SAAM '18,
          <string-name>
            <surname>Association</surname>
          </string-name>
          for Computing Machinery, New York, NY, USA,
          <year>2018</year>
          , p.
          <fpage>6</fpage>
          -
          <lpage>14</lpage>
          . URL: https://doi.org/10.1145/3243907.3243913.
          doi:10.1145/3243907.3243913.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>J.</given-names>
            <surname>Jones</surname>
          </string-name>
          , D. de Siqueira Braga,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tertuliano</surname>
          </string-name>
          , T. Kauppinen,
          <article-title>MusicOWL: The music score ontology</article-title>
          ,
          <source>in: Proceedings of the International Conference on Web Intelligence</source>
          , WI '17,
          <string-name>
            <surname>Association</surname>
          </string-name>
          for Computing Machinery, New York, NY, USA,
          <year>2017</year>
          , p.
          <fpage>1222</fpage>
          -
          <lpage>1229</lpage>
          . URL: https://doi.org/10.1145/3106426.3110325.
          doi:10.1145/3106426.3110325.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29] S. S.-s. Cherfi,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guillotel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hamdi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rigaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Travers</surname>
          </string-name>
          ,
          <article-title>Ontology-based annotation of music scores</article-title>
          ,
          <source>in: Proceedings of the Knowledge Capture Conference, K-CAP</source>
          <year>2017</year>
          ,
          <article-title>Association for Computing Machinery</article-title>
          , New York, NY, USA,
          <year>2017</year>
          . URL: https://doi.org/10.1145/3148011.3148038. doi:10.1145/3148011.3148038.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Page</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Weigl</surname>
          </string-name>
          ,
          <article-title>Meld: A linked data framework for multimedia access to music digital libraries</article-title>
          ,
          <source>in: 2019 ACM/IEEE Joint Conference on Digital Libraries (JCDL)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>434</fpage>
          -
          <lpage>435</lpage>
          .
          doi:10.1109/JCDL.2019.00106.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>A.</given-names>
            <surname>Poltronieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gangemi</surname>
          </string-name>
          ,
          <article-title>The music note ontology</article-title>
          , in: K. Hammar,
          <string-name>
            <given-names>C.</given-names>
            <surname>Shimizu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Küçük McGinty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Asprino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Carriero</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 12th Workshop on Ontology Design and Patterns (WOP</source>
          <year>2021</year>
          ), Online, October
          <volume>24</volume>
          ,
          <year>2021</year>
          .,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>V.</given-names>
            <surname>Presutti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Daga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gangemi</surname>
          </string-name>
          , E. Blomqvist,
          <article-title>eXtreme design with content ontology design patterns</article-title>
          , in: E. Blomqvist,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sandkuhl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Scharfe</surname>
          </string-name>
          , V. Svátek (Eds.),
          <source>Proceedings of the Workshop on Ontology Patterns (WOP</source>
          <year>2009</year>
          ) ,
          <article-title>collocated with the 8th International Semantic Web Conference ( ISWC-</article-title>
          <year>2009</year>
          ), Washington D.C., USA, 25 October,
          <year>2009</year>
          , volume
          <volume>516</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2009</year>
          . URL: http://ceur-ws.org/Vol-516/pap21.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hentschel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Neuwirth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rohrmeier</surname>
          </string-name>
          ,
          <article-title>The annotated mozart sonatas: Score, harmony, and cadence</article-title>
          ,
          <source>Trans. Int. Soc. Music. Inf. Retr</source>
          .
          <volume>4</volume>
          (
          <year>2021</year>
          )
          <fpage>67</fpage>
          -
          <lpage>80</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mauch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cannam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Davies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dixon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Harte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kolozali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tidhar</surname>
          </string-name>
          ,
          <source>OMRAS2 metadata project</source>
          <year>2009</year>
          , in: Late-breaking
          <source>session at the 10th International Conference on Music Information Retrieval (ISMIR)</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>