<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Methods for Training Convolutional Neural Networks to Identify Bird Species in Complex Soundscape Recordings</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Konstantin</forename><surname>Dmitriev</surname></persName>
							<email>presentatio@mail.ru</email>
							<affiliation key="aff0">
								<orgName type="institution">Lomonosov Moscow State University</orgName>
								<address>
									<addrLine>1 Leninskie Gory</addrLine>
									<postCode>119992</postCode>
									<settlement>Moscow</settlement>
									<country key="RU">Russian Federation</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Methods for Training Convolutional Neural Networks to Identify Bird Species in Complex Soundscape Recordings</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">7F01AA87B597184F1C67BCF1AA281E41</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:55+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Audio classification</term>
					<term>sound event detection</term>
					<term>signal processing</term>
					<term>convolutional neural network</term>
					<term>augmentations</term>
					<term>spectrogram</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The task of bird species identification is very important in ecosystem monitoring. Modern methods based on deep learning allow such research to be carried out cheaply and on a regular basis. However, creating such algorithms is not easy due to the wide variety of birds, their calls, recording conditions, and equipment used. In this paper, several methods are presented for training Convolutional Neural Networks (CNNs) that improve the effectiveness of these models. These include recording length standardization, data augmentation, mixing, sample selection, and weighting.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Bird species diversity and its change over time serve as good indicators of ecosystem state. Traditional monitoring methods require the presence of a qualified observer who can identify birds manually. It is hard and expensive to conduct such surveys regularly, especially over large areas. Many birds are small and hard to notice, but they have loud voices. So, it seems promising to replace human observers with small and cheap audio recording devices with omnidirectional microphones and to process the recordings using modern methods based on deep learning.</p><p>Creating and training such algorithms is a difficult task, however. The first problem is the diversity of bird species and their recording conditions. Many birds can imitate other birds or even repeat a sound they once heard and liked, and many animals and insects sound like birds. The second problem is the difference between the available training data and the real recordings to be processed. Usually, the training data is a set of short bird call recordings made by different people at different locations. The authors try to make the recordings clean and loud, without noise or interference, so good equipment is used, including directional microphones, and bad recordings are dropped. The third problem is that only weak labels are given, which indicate the presence of a bird in each recording but not the exact call position.</p><p>BirdCLEF 2024 is a competition that is supposed to address the mentioned problems <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. It is a part of the LifeCLEF 2024 conference <ref type="bibr" target="#b2">[3]</ref>. The task is to identify bird calls in a set of recordings made in the Western Ghats, India. The training dataset consists of recordings from the xeno-canto project <ref type="bibr" target="#b3">[4]</ref>. 
Each of them has primary and secondary labels. The primary label corresponds to the main bird that can be heard, and the secondary labels mark additional birds that may accompany it. The organizers selected 182 bird species whose presence must be predicted. The predictions must be made for each 5-second-long interval of about 1100 recordings that form the hidden dataset. An additional constraint is that the task must be completed within 2 hours using only a CPU. The macro-averaged AUC ROC that discards classes with no true labels was used as the competition metric. To prevent overfitting, the full hidden dataset is split into public and private parts </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>The CV accuracy score calculated using xeno-canto data for the models with different parameters: the backbone, the embedding dimension 𝐷, the number 𝐻 of attention heads, and the number 𝐵 of multi-head attention blocks.  (approximately 35% and 65% of the data, respectively). The corresponding public and private scores are calculated independently, and only the public score is known at the competition time. This article presents the methods that can be used to overcome the aforementioned difficulties and improve the results of bird call recognition.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Model architecture</head><p>The model used is based on the one proposed in <ref type="bibr" target="#b4">[5]</ref>. Its scheme is presented in Fig. <ref type="figure">1</ref>. After the signal is loaded and normalized, the spectrogram transform is performed. The spectrogram is then fed to a backbone CNN, followed by 𝐵 multi-head attention blocks. Finally, a log-sum-exp pooling layer is used to extract the label. The multi-head attention mechanism is described in the paper <ref type="bibr" target="#b5">[6]</ref>, and it is implemented in PyTorch by the torch.nn.MultiheadAttention class. Its main parameters are the number 𝐻 of parallel attention heads in each of the blocks and the embedding dimension 𝐷.</p><p>To find the best combination of model parameters, a number of tests were conducted. Instead of the macro-averaged AUC ROC score, which became very close to one after a few epochs, the accuracy score was used in a 5-fold cross-validation (CV) scheme. In the tests, the backbone as well as the values 𝐵, 𝐻, and 𝐷 were varied. The results are presented in Table <ref type="table">1</ref>.</p><p>The simple resnet18 <ref type="bibr" target="#b6">[7]</ref> backbone was used to check different parameters. With it, the best results were achieved with 𝐷 = 256, 𝐻 = 64, and 𝐵 = 2. This slightly differs from the parameters presented in <ref type="bibr" target="#b4">[5]</ref>, where they were set as 𝐷 = 768, 𝐻 = 8, and 𝐵 = 2. The results improve significantly with heavier backbones, among which seresnext26t_32x4d <ref type="bibr" target="#b7">[8]</ref> seems the best.  </p></div>
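The pooling stage can be sketched as follows. This is a minimal NumPy illustration of log-sum-exp pooling over the time axis; the mean-normalized form used here is an assumption, since the exact formulation is not given in the text.

```python
import numpy as np

def lse_pool(frame_logits, axis=0):
    """Log-sum-exp pooling: a smooth maximum that turns framewise
    class logits into a single clip-level logit per class."""
    m = frame_logits.max(axis=axis, keepdims=True)
    pooled = m + np.log(np.mean(np.exp(frame_logits - m), axis=axis, keepdims=True))
    return np.squeeze(pooled, axis=axis)
```

The result always lies between the framewise mean and maximum, so a single strong call in any frame dominates the clip-level logit while silent frames are averaged out.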
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Dealing with different lengths of the recordings</head><p>An important problem with the recordings is that they have different lengths. The shortest lasts only 0.5 seconds, while the longest is more than 1.5 hours. Different lengths prevent these recordings from being batched while training the model, which makes the training slow.</p><p>There are several possible ways to overcome this difficulty. Let 𝑡 0 be the fixed final length of each processed recording. If the initial recording is shorter, it is simply padded with zeros. If it is longer, the "First" approach is to use only the first 𝑡 0 -long interval of the recording. The "First and last" approach is to use the first 𝑡 0 /2 and last 𝑡 0 /2 intervals stacked together. This seems reasonable because, as a rule, the recordings were processed by their authors before being uploaded to the website, so one can suspect that irrelevant sounds were cut off from the beginning and the end of each recording. These two approaches are quite popular among the competition solutions. However, a lot of information is lost. Instead, a third approach is proposed, called the "Sum" approach and illustrated in Fig. <ref type="figure">2</ref>. It consists of the following steps.</p><p>1. Each recording, having length 𝑡, is padded with 𝑡 z = 𝑡 0 𝑛 int − 𝑡 zeros, where 𝑛 int = ⌈𝑡/𝑡 0 ⌉, i.e., the resulting recording contains a whole number of intervals with the length of 𝑡 0 . 2. As an augmentation, a random circular shift is performed. 3. The recording is split into 𝑛 int intervals, and they are summed together. The length of the result is equal to 𝑡 0 .</p><p>There is no information loss in the third approach. 
The overlapping of bird calls that may occur doesn't seem to be a problem since it corresponds to a situation when many birds vocalize at the same time.</p><p>The same model (seresnext26t_32x4d; 𝐷 = 768, 𝐻 = 8, 𝐵 = 2) was trained using all the described approaches with different 𝑡 0 values. Every training was repeated three times with different random seeds, and the "best" of them with the highest score on the public dataset was selected. The resulting public and private scores are presented in Table <ref type="table" target="#tab_1">2</ref>.</p><p>The results produced with different approaches are close to each other. However, the scores of the "First" approach are slightly better with low 𝑡 0 values. The growth of 𝑡 0 doesn't improve the scores of the "First" and "First and last" approaches, but the scores of the "Sum" approach increase, and it becomes preferable with large 𝑡 0 . At the same time, increasing 𝑡 0 makes the model training longer.  </p></div>
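The three steps of the "Sum" approach can be sketched as follows; this is a NumPy illustration, assuming the recording is a 1D sample array and 𝑡 0 is given in samples.

```python
import numpy as np

def sum_fold(audio, t0, rng=None):
    """The "Sum" approach: pad with zeros to a whole number of t0-long
    intervals, apply a random circular shift as an augmentation, then
    sum the intervals into a single t0-long clip."""
    rng = np.random.default_rng() if rng is None else rng
    n_int = -(-len(audio) // t0)                       # ceil(t / t0)
    padded = np.pad(audio, (0, n_int * t0 - len(audio)))
    shifted = np.roll(padded, int(rng.integers(len(padded))))  # circular shift
    return shifted.reshape(n_int, t0).sum(axis=0)
```

Since padding, rolling, and summing only rearrange and add samples, no part of the recording is discarded, unlike the "First" and "First and last" approaches.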
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Train data selection</head><p>The train dataset suggested at the BirdCLEF 2024 competition contains approximately half of all the recordings from the xeno-canto project <ref type="bibr" target="#b3">[4]</ref> with primary labels corresponding to the 182 target birds. So, the obvious step is to download the absent data and create an additional dataset. Merged together, these two datasets contain about 40 000 recordings. Using the whole dataset, however, doesn't improve the model score. This seems strange because higher data diversity usually increases the generalization ability of the model. So, one may expect that the additional data corrupts the dataset somehow, making it not correspond to the hidden dataset.</p><p>To find out the reason for the described behavior, the AUC ROC probing technique is proposed. This technique is based on splitting the target data into several parts and masking the model predictions so that only one part of the data is scored each time. In the BirdCLEF competitions, the bird species can be grouped together. For example, let 𝑁 = 182 be the total number of bird species. One can select a group of 0 &lt; 𝑛 &lt; 𝑁 bird species. During the submission, the predictions corresponding to the remaining 𝑁 − 𝑛 species are set to zero. A constant prediction produces an AUC ROC score of 0.5. So, the resulting AUC ROC score 𝑆 is equal to 𝑆 = (𝑆 𝑛 𝑛 + 0.5(𝑁 − 𝑛))/𝑁 , and the AUC ROC score 𝑆 𝑛 of the selected group is equal to 𝑆 𝑛 = (𝑆𝑁 − 0.5(𝑁 − 𝑛))/𝑛. Using this simple formula, it is possible to estimate the model performance for different groups of species. If these scores differ significantly and the size of each group is large enough, one may assume that the feature used for group selection is important.</p><p>One of the possible features that can affect the model's performance is the number 𝑁 rec of available recordings of each bird species, i.e., the frequency of its occurrence in the dataset. 
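The probing arithmetic can be expressed directly; the function below simply inverts the masking formula for 𝑆 𝑛.

```python
def group_auc(overall_score, n_group, n_total=182):
    """Recover the AUC ROC of the selected n_group species from the
    overall score of a submission in which predictions for the other
    species are zeroed out (each zeroed species contributes a
    constant-prediction score of 0.5):
        S_n = (S * N - 0.5 * (N - n)) / n
    """
    return (overall_score * n_total - 0.5 * (n_total - n_group)) / n_group
```

As a sanity check, a submission that zeroes everything scores 0.5 overall and yields a group score of 0.5 as well.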
Following the proposed AUC ROC probing technique, the species were split into five groups (Table <ref type="table" target="#tab_2">3</ref>). One might assume that the model works well for those birds for which the training set contains many recordings and poorly otherwise. However, the situation is different: Group 5, with at least 200 recordings per bird species, has almost as low a score as Group 1, with at most 20 recordings.</p><p>One can conclude that, for some reason, the model has significant difficulties when dealing with common birds. One reason may be that common birds are present in the background of many recordings while not being marked, even with secondary labels. Another possible reason is the geographical distribution of the places where the recordings were made. Indeed, many common birds were recorded in Europe, America, or Africa, far away from the region of interest. Birds can have local dialects. Also, a bird that is common in Europe can be rare in India.</p><p>The geographic information can easily be taken into account since GPS coordinates are provided. The locations where five bird species were recorded are presented in Fig. <ref type="figure">3</ref>. Here, "zitcis1", "commoo3" and "barswa" are the primary labels of common birds, with the number of recordings equal to 500 in the competition dataset. The fourth bird, "revbul", is medium-rare and has 101 recordings, while the fifth, "maltro1", is a rare bird with 17 recordings. It is noticeable that almost all recordings of the common birds were made outside India, while "revbul" and "maltro1" are endemic. As a result, the number of recordings of common and rare species made in the Indian region is quite low and significantly less than that of medium-rare birds. 
This observation explains the differences in model scores across different groups of species.</p><p>To handle this observation, the algorithm for train data selection is proposed, which consists of the following steps.</p><p>1. Prepare the whole dataset with all the recordings from the xeno-canto project <ref type="bibr" target="#b3">[4]</ref> that contain the target bird species calls. 2. For each recording, calculate its distance 𝐿 WG to the Western Ghats region. This can be done, for example, by placing a large number of points in the Western Ghats region and calculating the minimal distance between these points and the point where the recording was made. 3. Specify the maximum distance 𝐿 max and drop all the recordings with larger distances.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Calculate the number of recordings ̃︀</head><p>𝑁 rec for each bird species in the resulting dataset. 5. Calculate the distance weight 𝑤 𝐿 for each of the recordings. This weight is a decreasing function of 𝐿 WG . For example, 𝑤 𝐿 = 1 + cos(𝜋𝐿 WG /𝐿 max ) may be used. 6. Calculate the class imbalance weight for each of the recordings. This weight is a decreasing function of ̃︀ 𝑁 rec . For example, 𝑤 imb = 1/ ̃︀ 𝑁 rec may be used. 7. The weight of each of the recordings in the final dataset is the product of distant and class imbalance weights:</p><formula xml:id="formula_0">𝑤 = 𝑤 𝐿 • 𝑤 imb .</formula><p>The value of 𝐿 max is important. In the current competition, setting 𝐿 max = 4000 km was a good choice. From a geographical point of view, it allows to discard European, American, and most African data while covering Southern Asia. Using the lower 𝐿 max decreases the diversity of species, and with its higher value, the training set includes irrelevant data. As a result, the public score of the model worsens in both cases.</p></div>
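The weighting steps above can be sketched as a single function; the name recording_weight and the scalar interface are illustrative choices, not part of the original description.

```python
import numpy as np

def recording_weight(L_wg, n_rec, L_max=4000.0):
    """Sampling weight of one recording: the distance weight
    w_L = 1 + cos(pi * L_WG / L_max), decreasing from 2 at the Western
    Ghats to 0 at L_max, multiplied by the class imbalance weight
    w_imb = 1 / N_rec (recordings per species after filtering)."""
    if L_wg > L_max:
        return 0.0                      # recording is dropped entirely
    w_L = 1.0 + np.cos(np.pi * L_wg / L_max)
    w_imb = 1.0 / n_rec
    return w_L * w_imb
```

For example, a recording made inside the Western Ghats of a species with 10 recordings gets weight 2/10, while any recording farther than 4000 km gets weight 0.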
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Additional noise sources</head><p>As mentioned earlier, there is a huge domain shift between the competition data used for training and for model scoring. The training dataset contains recordings from the xeno-canto project <ref type="bibr" target="#b3">[4]</ref>. As a rule, these are recordings of high quality. However, the model is supposed to work well with data recorded by an omnidirectional microphone in a noisy environment. The unlabeled soundscapes dataset is provided with recordings similar to those used for model scoring.</p><p>Listening to the unlabeled recordings, it is possible to make a list of the noise sources present. They include car traffic and horns, aircraft noises, sirens, human voices, music, frogs, and cicadas, as well as broadband noises of rain, wind, or even uncertain nature. An example spectrogram of such an unlabeled soundscape is presented in Fig. <ref type="figure">4</ref>. All these kinds of interference must be introduced to the model to increase its generalization ability and reduce the domain shift. At the same time, the extra sounds added to a recording must not contain bird calls that could disorient the model.</p><p>There are several datasets that can help introduce these noises; for example, the Vehicle Type Sound Dataset <ref type="bibr" target="#b8">[9]</ref>, the Noise Audio Data Dataset with short sounds of different natures <ref type="bibr" target="#b9">[10]</ref>, the Rain Forest Dataset <ref type="bibr" target="#b10">[11]</ref> with recordings of several frog species, and the Hindi Speech Classification Dataset with recordings of short phrases <ref type="bibr" target="#b11">[12]</ref>. In addition, manual selection of recordings not containing bird calls can be used <ref type="bibr" target="#b12">[13]</ref>. 
Although using a dataset with the regional dialects spoken in the Western Ghats alongside Hindi may seem more appropriate, it is quite hard to find a sufficient number of such recordings distributed freely. At the same time, the influence of including these dialects on model performance seems minor.</p><p>The broadband noises are quite hard to add. On the one hand, many existing recordings of rain and wind noises contain bird calls, which have to be filtered out manually. On the other hand, these noises are nonstationary, so they can't be precisely modeled with any kind of simple stationary noise. A similar situation takes place with the sounds produced by cicadas.</p><p>To deal with this, it is proposed to use the unlabeled soundscapes. These recordings, however, contain bird calls that must be excluded. The idea is based on the fact that background noise, as well as cicadas, forms patterns on the spectrogram that change slowly over time, while the patterns of bird calls and other noises are irregular. The first step of the algorithm is taking the 1D Fourier transform of the spectrogram along the time axis. In the second step, only the components with the largest absolute values are kept, while the others are set to zero. In the third step, the inverse 1D Fourier transform is performed, and the result is multiplied by random noise. The described filtering procedure significantly reduces the amount of information the spectrogram contains, and its fine structure, including bird calls, disappears. The example is presented in Fig 5.</p></div>
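The three filtering steps can be sketched as follows; the spectrogram is assumed to be a (frequency, time) array, and the keep_frac parameter together with the per-bin thresholding are assumptions about details the text leaves open.

```python
import numpy as np

def noise_template(spec, keep_frac=0.05, rng=None):
    """Step 1: 1D FFT of the spectrogram along the time axis.
    Step 2: keep only the largest-magnitude components, zero the rest.
    Step 3: inverse FFT, then multiply by random noise.
    Slowly varying background (rain, wind, cicadas) survives the
    filtering, while irregular patterns such as bird calls do not."""
    rng = np.random.default_rng() if rng is None else rng
    F = np.fft.rfft(spec, axis=1)
    k = max(1, int(keep_frac * F.shape[1]))
    thresh = np.sort(np.abs(F), axis=1)[:, [-k]]   # k-th largest per bin
    F[np.abs(F) < thresh] = 0.0
    smooth = np.fft.irfft(F, n=spec.shape[1], axis=1)
    return smooth * rng.random(size=spec.shape)
```

The result can then be mixed into training recordings as an additional noise source that is free of bird calls.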
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5.">Standard augmentations and post-processing</head><p>The "standard" augmentations can also be used in the BirdCLEF competition. They are performed on spectrograms and include XY masking, random grid shuffle, and recording mixing. XY masking selects a few rectangular areas in the spectrogram and sets the data inside them to a constant. Random grid shuffle splits the spectrogram into a grid and shuffles its cells. This transform can be used only along the time axis, and the size of each cell must be greater than the length of a potential bird call, say, 5 seconds. Recording mixing is the technique of adding two or more recordings together before passing them to the model. In this case, the resulting recording contains all the birds from the initial recordings, and its weight is the sum of the initial weights. These augmentations make the training dataset more diverse.</p><p>The model predictions may be post-processed using sliding window averaging. This approach assumes that if there is a bird call in a certain time interval, the probability of the same bird call in the neighboring intervals is also high. So, the final prediction for the current interval is a sum of the predictions for the current, previous, and next intervals with the weights of 1 − 2𝛼, 𝛼, and 𝛼, respectively. The coefficient 𝛼 is an averaging parameter, which is often set to 0.25 in BirdCLEF competitions. The results of using different 𝛼 are presented in Table <ref type="table" target="#tab_3">4</ref>. It can be seen that the value of 𝛼 = 0.25 is indeed nearly optimal; however, the public score is maximized by 𝛼 = 0.3.</p></div>
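The sliding window averaging can be sketched in a few lines; the handling of the first and last intervals, which have only one neighbor, is an assumption not specified in the text.

```python
import numpy as np

def smooth_predictions(preds, alpha=0.25):
    """Sliding-window averaging of per-interval predictions:
    p'[t] = (1 - 2*alpha) * p[t] + alpha * p[t-1] + alpha * p[t+1].
    Edge intervals reuse their own value as the missing neighbor
    (an assumption; the edge handling is not given in the text)."""
    padded = np.concatenate([preds[:1], preds, preds[-1:]])
    return (1 - 2 * alpha) * padded[1:-1] + alpha * (padded[:-2] + padded[2:])
```

With 𝛼 = 0, the predictions are returned unchanged; larger 𝛼 spreads each detection over the neighboring 5-second intervals.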
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.6.">Inference time optimization</head><p>The inference time on the CPU is one of the crucial factors in the competition. However, it was noticed that the models became extremely slow after training. For example, the used model (seresnext26t_32x4d; 𝐷 = 768, 𝐻 = 8, 𝐵 = 2) processed a 240-second-long recording in 60-70 seconds after training, while the untrained model did the same job in 3.5 seconds. The training procedure doesn't change the model architecture and only adjusts its weights. After some research, the problem was localized. In the model's computational graph, some of the paths are unnecessary, and the corresponding weights must be set to zero during training. However, L2 regularization makes these weights very small but not exactly zero. As a result, not only do these paths consume computational resources, but CPU computations with such small (subnormal) values are also extremely slow. To prevent this behavior, one can retrain the model with an additional L1 regularization term, which drives the small weights to exactly zero. A simpler solution for an already-trained model is weight rounding: convert the model precision from float32 to float16 and then back to float32. Weight rounding can be performed with one line of code in PyTorch: model.half().float().</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Model architecture.</figDesc><graphic coords="2,72.00,65.61,451.29,131.05" type="bitmap" /></figure>
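The effect of the rounding trick can be illustrated with NumPy; the PyTorch one-liner model.half().float() applies the same round trip to every weight tensor.

```python
import numpy as np

# Tiny near-zero weights fall below float16's smallest subnormal
# (about 6e-8) and are flushed to exactly zero by the round trip,
# which eliminates slow subnormal arithmetic during inference.
w = np.array([0.5, 1e-3, 1e-9, -1e-12], dtype=np.float32)
w_rounded = w.astype(np.float16).astype(np.float32)
```

Normal-range weights survive with at most a small rounding error, while the problematic tiny weights become exact zeros.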
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>resnet18, 𝐻 = 8, 𝐵 = 2 resnet18, 𝐷 = 768, 𝐵 = 2 resnet18, 𝐷 = 768, 𝐻 = 8 𝐷 = 768, 𝐻 = 8, 𝐵 = 2</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2:The "Sum" approach to making the lengths of recordings equal.</figDesc><graphic coords="3,184.82,65.61,225.65,100.41" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: The locations where the recordings were made.</figDesc><graphic coords="4,72.00,193.60,451.26,211.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: The spectrogram of unlabeled soundscape 101125218.ogg with various sound sources.</figDesc><graphic coords="6,162.25,65.60,270.77,164.06" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: The spectrogram of unlabeled soundscape 1000308629.ogg (a) before and (b) after filtering procedure.</figDesc><graphic coords="7,72.00,65.60,451.27,164.17" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>The comparison of different approaches</figDesc><table><row><cell>𝑡 0</cell><cell cols="6">"First" approach Public score Private score Public score Private score Public score Private score "First and last" approach "Sum" approach</cell></row><row><cell>5</cell><cell>0.606</cell><cell>0.584</cell><cell>-</cell><cell>-</cell><cell>0.595</cell><cell>0.560</cell></row><row><cell>10</cell><cell>0.616</cell><cell>0.563</cell><cell>0.611</cell><cell>0.569</cell><cell>0.607</cell><cell>0.566</cell></row><row><cell>20</cell><cell>0.586</cell><cell>0.572</cell><cell>0.599</cell><cell>0.561</cell><cell>0.607</cell><cell>0.571</cell></row><row><cell>30</cell><cell>0.614</cell><cell>0.570</cell><cell>-</cell><cell>-</cell><cell>0.620</cell><cell>0.582</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>The score of the model in different groups of bird species.</figDesc><table><row><cell>Group</cell><cell>𝑁 rec</cell><cell cols="3">Number of bird species Public score Private score</cell></row><row><cell>1</cell><cell>0 &lt; 𝑁 rec ≤ 20</cell><cell>34</cell><cell>0.543</cell><cell>0.536</cell></row><row><cell>2</cell><cell>20 &lt; 𝑁 rec ≤ 50</cell><cell>47</cell><cell>0.667</cell><cell>0.610</cell></row><row><cell>3</cell><cell>50 &lt; 𝑁 rec ≤ 100</cell><cell>27</cell><cell>0.652</cell><cell>0.585</cell></row><row><cell>4</cell><cell>100 &lt; 𝑁 rec ≤ 200</cell><cell>33</cell><cell>0.672</cell><cell>0.603</cell></row><row><cell>5</cell><cell>200 &lt; 𝑁 rec ≤ 500</cell><cell>41</cell><cell>0.569</cell><cell>0.569</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4</head><label>4</label><figDesc>The results of applying sliding window averaging.</figDesc><table><row><cell>𝛼</cell><cell cols="2">Public score Private score</cell><cell>𝛼</cell><cell cols="2">Public score Private score</cell></row><row><cell>0.000</cell><cell>0.681</cell><cell>0.616</cell><cell>0.250</cell><cell>0.693</cell><cell>0.621</cell></row><row><cell>0.100</cell><cell>0.687</cell><cell>0.619</cell><cell>0.275</cell><cell>0.694</cell><cell>0.621</cell></row><row><cell>0.150</cell><cell>0.690</cell><cell>0.620</cell><cell>0.300</cell><cell>0.694</cell><cell>0.621</cell></row><row><cell>0.200</cell><cell>0.692</cell><cell>0.621</cell><cell>0.350</cell><cell>0.693</cell><cell>0.621</cell></row><row><cell>0.225</cell><cell>0.693</cell><cell>0.621</cell><cell>0.400</cell><cell>0.692</cell><cell>0.620</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 5</head><label>5</label><figDesc>The inference time of different frameworks per 240-seconds-long recording.</figDesc><table><row><cell cols="5">PyTorch ONNX OpenVino OpenVino with HT OpenVino with TPE</cell></row><row><cell>3.3 sec</cell><cell>2.8 sec</cell><cell>2.5 sec</cell><cell>2.0 sec</cell><cell>2.0 sec</cell></row></table></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Further acceleration is possible with the help of powerful frameworks, such as ONNX and OpenVino. The model was converted to the ONNX format and then exported to OpenVino. The inference time summary is presented in Table <ref type="table">5</ref>. OpenVino seems faster than ONNX, but one may notice that it doesn't use all available CPU cores when running. To do so, hyperthreading (HT) must be switched on. An alternative is to run inference in multiple threads. In Python, this can be done in several ways, for example, by using the ThreadPoolExecutor class (TPE). As expected, the results are the same as with the use of HT. It should be noted that many laptops now have multiple cores and support HT, so using it may accelerate the model outside of the competition environment.</p></div>
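A minimal sketch of the thread-pool variant is shown below; infer_interval is a hypothetical stand-in for the real per-interval model call.

```python
from concurrent.futures import ThreadPoolExecutor

def infer_interval(interval):
    """Hypothetical stand-in for running the model on one 5-second
    interval. Real inference releases the GIL inside the framework,
    so worker threads can keep multiple CPU cores busy."""
    return sum(interval) / len(interval)

# One inference job per 5-second interval, distributed across threads.
intervals = [[0.1] * 5, [0.5] * 5, [0.9] * 5]
with ThreadPoolExecutor(max_workers=4) as pool:
    preds = list(pool.map(infer_interval, intervals))
```

pool.map preserves the order of the intervals, so the per-interval predictions line up with the submission rows without extra bookkeeping.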
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head><p>The main difficulty of the BirdCLEF 2024 competition is the unstable public score. Minor changes in the model or its training procedure may significantly increase or decrease this score, which makes it quite hard to test different approaches. For example, the results of training with exactly the same model and data but different random seeds are presented in Table <ref type="table">6</ref>. On the one hand, the standard deviation of the public score may be several times larger than the improvement brought by some clever technique. On the other hand, this makes it hard to introduce a reliable CV: it can't be consistent with the public score, because that score is unstable, or with the private score, because of the huge domain shift between the data. So, the reasonable way is to conduct many experiments, take the average of the public score, and hope that this will not cause overfitting to the public score.</p><p>The methods described in Section 2 were applied sequentially, and the results are presented in Table <ref type="table">7</ref>. The most significant improvement was brought by the use of geographical data.</p><p>The inference time of the resulting models was 28 minutes, so an ensemble of four models with the best public scores was made. The public score of this ensemble was 0.713 (13th place in the competition public leaderboard). However, the selected models overfit, and the private score of the ensemble was as low as 0.616. Despite the unlucky model selection, the presented methods seem sound and may be successfully used in future competitions and applications.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Overview of BirdCLEF 2024: Acoustic identification of under-studied bird species in the Western Ghats</title>
		<author>
			<persName><forename type="first">S</forename><surname>Kahl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Denton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Klinck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Srivathsa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Anand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Arvind</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Cp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sawant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Robin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Glotin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Goëau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W.-P</forename><surname>Vellinga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Planqué</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joly</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 -Conference and Labs of the Evaluation Forum</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Birdclef 2024 -birdcall species identification from audio</title>
		<ptr target="https://www.kaggle.com/competitions/birdclef-2024" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Overview of lifeclef 2024: Challenges on species distribution prediction and identification</title>
		<author>
			<persName><forename type="first">A</forename><surname>Joly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Picek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kahl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Goëau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Espitalier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Botella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Deneu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Marcos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Estopinan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Leblanc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Larcher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Šulc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hrúz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Servajean</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference of the Cross-Language Evaluation Forum for European Languages</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Xeno-canto sharing wildlife sounds from around the world</title>
		<ptr target="https://xeno-canto.org" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Birdclef 2021: building a birdcall segmentation model based on weak labels</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Shugaev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tanahashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhingra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Patel</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-2936/paper-141.pdf" />
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page">2936</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Attention is all you need</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vaswani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Parmar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Gomez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kaiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Polosukhin</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/1706.03762.arXiv:1706.03762" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Deep residual learning for image recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
		<idno>CoRR abs/1512.03385</idno>
		<ptr target="http://arxiv.org/abs/1512.03385.arXiv:1512.03385" />
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Squeeze-and-excitation networks</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Albanie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Wu</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/1709.01507.arXiv:1709.01507" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Vehicle type sound dataset</title>
		<ptr target="https://www.kaggle.com/datasets/brinkor/vehicle-type-sound-dataset" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Noise audio data dataset</title>
		<ptr target="https://www.kaggle.com/datasets/javohirtoshqorgonov/noise-audio-data" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Rainforest connection species audio detection data</title>
		<ptr target="https://www.kaggle.com/competitions/rfcx-species-audio-detection/data" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Hindi speech classification dataset</title>
		<ptr target="https://www.kaggle.com/datasets/vivmankar/hindi-speech-classification" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Nocall manual classification dataset</title>
		<ptr target="https://www.kaggle.com/datasets/janmpia/nocall-manual-classification" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
