<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">AUTOMATIC BEAT-SYNCHRONOUS GENERATION OF MUSIC LEAD SHEETS</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Jean-Louis</forename><surname>Durrieu</surname></persName>
							<email>durrieu@enst.fr</email>
							<affiliation key="aff0">
<orgName type="institution">TELECOM ParisTech - TSI</orgName>
								<address>
									<addrLine>LTCI ; 46 rue Barrault</addrLine>
									<postCode>F-75634, Cedex 13</postCode>
									<settlement>Paris</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
<persName><forename type="first">Jan</forename><surname>Weil</surname></persName>
							<affiliation key="aff1">
<orgName type="institution">TUB - Communication Systems Group</orgName>
								<address>
									<addrLine>Einsteinufer 17</addrLine>
									<postCode>10587</postCode>
									<settlement>Berlin</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">AUTOMATIC BEAT-SYNCHRONOUS GENERATION OF MUSIC LEAD SHEETS</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">4AFF82BE2EF6AE431E6F00EE4006339D</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T07:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Most popular music scores are written in a specific format, the lead sheet. It summarizes a song by representing the notes of the main melody and the chord sequence, together with other cues such as style, tempo and time signature. This representation is very common in jazz and pop music, where the accompaniment playing the chord sequence is usually improvised. The aim of our study is to bring together two techniques, a chord detection system and a lead melody transcriber, in order to produce a lead sheet. In addition to the issues inherent to each problem, we also need to address tempo estimation, time signature estimation and, based on these estimates, the time quantification of both the chord sequence and the melody line. We propose a tempo tracker that aligns the beats to the audio, and we adapt the chord detection and melody extraction systems to take this new information into account. Future work includes cover song detection based on the lead sheet representation, query-by-similarity applications and more.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>The lead-sheet format is well known among jazz and rock players. It consists of the main melody together with the chords of the accompaniment, and it can also include further information such as the style of the song, the lyrics, the structure, etc.</p><p>Here we are interested in combining two existing systems, a chord detection algorithm and a melody extractor, in order to obtain such a representation. However, these two systems alone lack temporal information such as the tempo and the time signature. Tempo estimation is a well-studied problem, and we base our system on previous work <ref type="bibr" target="#b1">[1]</ref>. We also designed a method that aligns the beats to the data. Time signature estimation is still an open problem, for which we propose some general directions. Further improvements can come from fusing the results of the different algorithms; we propose some of them, and we expect further studies to uncover even more of these correlations. This document is organized as follows: first, we present the proposed beat tracking and pulse alignment algorithms. Then we explain the chord and melody estimation modules, followed by a short evaluation of these tasks. A brief discussion of time signature estimation is also given. Finally, we conclude with future work and perspectives.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">BEAT TRACKING MODULE</head><p>In order to produce musically relevant lead sheets, we need to determine the temporal structure of the song, i.e. the tempo, dealt with in this section, and the time signature, dealt with in section 5. The tempo is first estimated on 10 s long frames with a 0.5 s hopsize. This estimation is based on a detection function proposed in <ref type="bibr" target="#b1">[1]</ref>. From this function, an auto-correlation function (ACF) is computed for each window, which gives us an "ACF map". A Viterbi algorithm then finds the optimal tempo path, trading off the smoothness of the tempo variations against the maxima of the ACF map. We also output an estimate of the "tatum", supposedly the smallest time unit of the song, which is used for the melody quantification.</p><p>We tackle the beat/tatum location problem with a dynamic programming approach. We use the same hopsize for the windows as before, but their length is at least 10 beats/tatums, i.e. 10 times the maximum time lag between two beats. For each window, an impulse comb is generated with a period corresponding to the estimated tempo. The cross-correlation between the comb and the data in the window is stored in a matrix, the maxima of which give the time lags, or "phases", needed to align the combs to the data. In order to avoid off-beat problems, which are common in rock and jazz music, we designed a Viterbi algorithm that smoothes the variability of the pulse locations. Instead of smoothing the path "horizontally" in this phase matrix, it takes the tempo changes into account and favors phase locations where they are expected to occur given the previous window. Finally, for each window, we place the pulses according to the estimated phase, resolving possible "double" pulses by choosing a location between them where the onset detection function reaches a local maximum.</p></div>
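The ACF-map and comb-alignment steps above can be sketched as follows. This is a minimal Python illustration under simplifying assumptions, not the authors' implementation: the onset detection function `odf`, its sampling rate `sr_odf`, and the tempo period in samples are all assumed inputs, and the Viterbi smoothing over windows is omitted.

```python
import numpy as np

def tempo_acf_map(odf, sr_odf, win_s=10.0, hop_s=0.5):
    """Sliding-window autocorrelation ("ACF map") of an onset
    detection function sampled at sr_odf Hz."""
    win, hop = int(win_s * sr_odf), int(hop_s * sr_odf)
    acfs = []
    for start in range(0, len(odf) - win + 1, hop):
        frame = odf[start:start + win] - np.mean(odf[start:start + win])
        acf = np.correlate(frame, frame, mode="full")[win - 1:]
        acfs.append(acf / (acf[0] + 1e-12))   # normalize by lag-0 energy
    return np.array(acfs)                     # one row per window

def comb_phase(frame, period):
    """Phase (time lag, in samples) that best aligns an impulse comb
    of the given period with the data, via cross-correlation."""
    comb = np.zeros(len(frame))
    comb[::period] = 1.0
    scores = [np.dot(np.roll(comb, p), frame) for p in range(period)]
    return int(np.argmax(scores))
```

Picking the lag of the ACF-map maximum along each row gives a raw tempo estimate per window; the Viterbi stage described above then smooths both the tempo path and the comb phases across windows.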
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">DETECTION MODULES</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Chord sequence detection</head><p>The chord detection method we developed is close to the system introduced in <ref type="bibr" target="#b3">[3]</ref>: the chosen features are the tonal centroids, derived from the chroma vectors. A Hidden Markov Model (HMM) is assumed for the chord sequence. As in <ref type="bibr" target="#b3">[3]</ref>, we assume the transition probabilities to depend only on the interval between the chords. Further studies should aim at using key-specific HMMs, in order to estimate the main key at the same time.</p><p>In order to integrate the beat information, we either compute the features within the segments given by the beat locations of section 2, or we constrain the Viterbi decoding of the chord sequence so that the state (i.e. the chord) is held constant within each segment. However, neither of these two solutions has improved the results so far.</p></div>
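The interval-dependent transition model above amounts to a circulant transition matrix. The following Python sketch illustrates such a Viterbi decoding; it is an illustration under stated assumptions, not the system itself: the per-segment emission log-probabilities (e.g. derived from tonal centroids on beat segments) are assumed given, and the chord vocabulary is reduced to 12 roots.

```python
import numpy as np

def viterbi_chords(emission_logp, interval_logp):
    """Viterbi decoding of a chord sequence where the transition
    probability depends only on the interval between chord roots,
    i.e. the transition matrix is circulant."""
    n_frames, n_chords = emission_logp.shape
    # trans[i, j] = interval_logp[(j - i) mod n_chords]
    trans = np.stack([np.roll(interval_logp, i) for i in range(n_chords)])
    delta = emission_logp[0].copy()
    back = np.zeros((n_frames, n_chords), dtype=int)
    for t in range(1, n_frames):
        scores = delta[:, None] + trans       # score of every (from, to) pair
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + emission_logp[t]
    # backtrack the best path
    path = [int(delta.argmax())]
    for t in range(n_frames - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With beat-synchronous features, each row of `emission_logp` corresponds to one beat segment, so the decoded state is constant within segments by construction.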
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Main melody extraction</head><p>The main melody transcription module is based on the leading melody estimation of <ref type="bibr" target="#b2">[2]</ref>. A source-filter model provides F0 candidates for each frame, and the main melody is computed with a Viterbi smoothing algorithm that trades off the energy against the frequency proximity of consecutive candidates. This system only outputs a framewise sequence of frequencies in Hz. In order to obtain the desired sequence of temporally quantified notes (i.e. on the Western music scale), we use the tatum estimate of section 2: it provides segments over which we can decide which note was intended. Most pop singers do not use a strong vibrato, which makes this task rather straightforward in those cases: a simple decision such as taking the mean or the median of the output frequency sequence within each segment gives satisfying results. Further studies on vibrato estimation may be useful for dealing with classical music. The algorithm can also separate the singer's voice from the background music; this output can then be used as a pre-processing step for other tasks such as chord detection or multi-F0 estimation.</p></div>
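The median-per-segment quantification can be sketched in a few lines of Python. This is a simplified illustration, not the module itself: it assumes the F0 track and the tatum boundaries are given, and uses the convention that unvoiced frames carry F0 = 0.

```python
import numpy as np

def hz_to_midi(f_hz):
    """Convert a frequency in Hz to a (fractional) MIDI note number."""
    return 69.0 + 12.0 * np.log2(np.asarray(f_hz) / 440.0)

def quantify_melody(f0_hz, frame_times, tatum_times):
    """Snap a framewise F0 track to one note per tatum segment,
    using the median frequency within each segment."""
    notes = []
    for t0, t1 in zip(tatum_times[:-1], tatum_times[1:]):
        mask = np.logical_and(frame_times >= t0, t1 > frame_times)
        voiced = f0_hz[mask]
        voiced = voiced[voiced > 0]   # drop unvoiced frames (F0 = 0)
        if voiced.size == 0:
            notes.append(None)        # rest
        else:
            # round the median to the nearest semitone on the Western scale
            notes.append(int(round(float(hz_to_midi(np.median(voiced))))))
    return notes
```

The median is robust to a moderate vibrato and to octave glitches at segment borders, which is why it is preferred over the mean for singing voice.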
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">EVALUATION</head><p>The evaluation of such a transcription system as a whole is not yet well defined. However, we can evaluate the different modules separately. The chord detection was tested on a database of Beatles songs along with MIDI-synthesized ones; the recognition recall varies from 65% to 70%. The main melody extractor, as stated in <ref type="bibr" target="#b2">[2]</ref>, performs among the state-of-the-art systems, with a 78% framewise recall on the pitched frames. One of the main drawbacks of this module for now is the lack of silence detection in the vocal activity: we observe a significant drop in the results when taking the non-vocal frames into account, down to 65% global recall, and it also leads to spurious notes in the transcription. In order to avoid these, some heuristics can be applied, e.g. penalizing segments in which the melody varies too widely.</p></div>
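For reference, a framewise recall on pitched frames of the kind quoted above can be computed as follows. This is a simplified convention, not the exact evaluation protocol of the cited work: pitches are assumed given in semitones, with 0 marking unvoiced reference frames, and a half-semitone tolerance is used.

```python
def framewise_recall(est, ref, tol_semitones=0.5):
    """Fraction of pitched reference frames whose estimated pitch
    lies within tol_semitones of the reference pitch."""
    pitched = [(e, r) for e, r in zip(est, ref) if r > 0]
    hits = sum(tol_semitones >= abs(e - r) for e, r in pitched)
    return hits / len(pitched) if pitched else 0.0
```

Scoring only the pitched reference frames is what hides the missing silence detection; extending the denominator to all frames is what produces the drop to 65% global recall reported above.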
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">ABOUT TIME SIGNATURE</head><p>As discussed in the previous sections, we also need an estimate of the time signature of the song. This signature is a fraction: the denominator gives the musical unit related to the beat, while the numerator tells how many of these units there are in one measure. We propose the following direction for future work on the topic. There are usually two options for the denominator: the song either has binary rhythmic patterns or ternary ones. In the first case, the beat unit can usually be assumed to be the quarter note, with symbol 4; in the other case, a denominator of 8 is often chosen, the unit being the eighth note. As a first approximation, one can assign either of these two denominators, and the tatum-to-beat ratio may give some insight as to which of them it should be. Assuming that the chord changes mainly occur on the beats, and more specifically on the downbeats, at the beginning of measures, the numerator could be inferred from the harmonic structure. More evidence is needed for this last assumption, but it should give a rather straightforward way of estimating the time signature of the analyzed song.</p></div>			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">This work was partly supported by the European Commission under contract FP6-027026-K-SPACE.</note>
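The direction proposed above can be sketched as a toy heuristic in Python. Everything here is an assumption for illustration, not a validated method: the denominator is read off the tatum-to-beat ratio (ternary subdivision yielding a denominator of 8), and the numerator is taken as the most common gap, in beats, between chord changes.

```python
import numpy as np

def estimate_time_signature(beat_times, tatum_times, chord_change_beats):
    """Toy heuristic: denominator from the tatum-to-beat ratio
    (binary vs. ternary subdivision), numerator from the most common
    gap between the beat indices that carry a chord change."""
    ratio = round((len(tatum_times) - 1) / (len(beat_times) - 1))
    denominator = 8 if ratio % 3 == 0 else 4
    gaps = np.diff(np.sort(np.asarray(chord_change_beats)))
    numerator = int(np.bincount(gaps.astype(int)).argmax()) if gaps.size else 4
    return numerator, denominator
```

A chord change every four beats with binary subdivision would thus yield 4/4, and a change every three beats with ternary subdivision 3/8; as stated above, the assumption that chord changes fall on downbeats still needs empirical support.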
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">CONCLUSIONS</head><p>In this study, we have found that each system can take advantage of the beat/tatum estimation, especially in the quantification step. This seems to produce musically relevant material. The result is not yet complete, however: we still need to estimate the time signature. This feature is closely related to the tatum-to-beat ratio, but also, we believe, to the melodic and harmonic structure. Further studies aim at designing a robust way of estimating the time signature, as well as the overall structure of the musical piece, which would for example help avoid repetitions in the output lead sheet.</p></div>
			</div>

			<div type="references">

				<listBibl>


<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Extracting Note Onsets from Musical Recordings</title>
		<author>
			<persName><forename type="first">M</forename><surname>Alonso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Richard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>David</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<publisher>ICME</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Singer melody extraction in polyphonic signals using source separation methods</title>
		<author>
			<persName><forename type="first">J.-L</forename><surname>Durrieu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Richard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>David</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>ICASSP</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Acoustic Chord Transcription and Key Extraction From Audio Using Key-Dependent HMMs Trained on Synthesized Audio</title>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Slaney</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. on ASLP</title>
		<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
