<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Indian Classical Raga Identification using Machine Learning</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><roleName>Dr</roleName><forename type="first">Dipti</forename><surname>Joshi</surname></persName>
							<email>joshidipti1408@gmail.com</email>
						</author>
						<author>
							<persName><forename type="first">Jyoti</forename><surname>Pareek</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Pushkar</forename><surname>Ambatkar</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">Gujarat University</orgName>
								<address>
									<settlement>Ahmedabad</settlement>
									<region>Gujarat</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Indian Classical Raga Identification using Machine Learning</title>
					</analytic>
					<monogr>
						<title level="m">ISIC&apos;21: International Semantic Intelligence Conference</title>
						<meeting><address><settlement>New Delhi</settlement><country key="IN">India</country></address></meeting>
						<imprint>
							<date type="published" when="2021-02">February 2021</date>
						</imprint>
					</monogr>
					<idno type="MD5">953DABD8580439367CD699AEF4138FB2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T00:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Raga identification</term>
					<term>Feature extraction</term>
					<term>Machine Learning</term>
					<term>KNN</term>
					<term>SVM</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Ragas are the pride of Indian classical music. The raga is the fundamental melodic form in Indian classical music: a set of swaras (musical notes) combined with various characteristics into a melodic framework that is rendered by instruments and singers. Based on the features of the ragas, Indian classical music is divided into two branches: Hindustani (North Indian) and Carnatic (South Indian) classical music. Our experiment concentrates on Hindustani classical music. In it, K Nearest Neighbor (KNN) and Support Vector Machine (SVM) classifiers are applied to a dataset of the ragas Yaman and Bhairavi to classify and identify the raga. We have achieved accurate results with both the KNN and SVM classifiers.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Indian classical music is the music of the Indian subcontinent. The raga (or raag) holds a prominent position in it. A raag is a collection of musical notes that, when sung or performed on a musical instrument, is melodically appealing. Raga recognition comprises methods that identify the notes in a piece of music and classify the piece into a suitable raga. In Hindustani classical music, ragas are a very significant concept and express the moods and sentiments of a performance. Classifying ragas by ear is an intellectual skill that comes only after substantial exposure, so for automated recognition the attributes of ragas have to be translated into appropriate computational features.</p><p>The characteristics of ragas are based on the techniques of Indian classical music, which blend notes with the following features to qualify as a raga.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Notes (swaras)</head><p>A raag must contain at least five of the seven primary notes (swaras). The primary seven notes are S (Sa), R (Re or Ri), G (Ga), M (Ma), P (Pa), D (Dha), and N (Ni).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Aaroh and Avroh</head><p>Each raga (raag) is composed of an "Aaroh", the ascending scale of swaras, and an "Avroh", the descending scale.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Vadi and Samvadi</head><p>Each raag has a "Vadi", its principal note, and a "Samvadi", its supporting note.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Gamakas</head><p>Notes in a raga that are rendered with a continuous oscillation (a back-and-forth movement in rhythm) around a steady base frequency are known as Gamakas.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Pakad</head><p>A set of swaras that distinctively identifies a raga. Each raga has its own particular Pakad.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Tala</head><p>Tala refers to a rhythmic form constructed from a variety of beats.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Thaat</head><p>Thaat is used in raga classification. There are ten distinct Thaats, namely Kalyan, Bilawal, Bhairav, Khamaj, Poorvi, Marwa, Kafi, Asawari, Bhairavi, and Todi.</p><p>A piece of music has to be converted into swaras for classification. This conversion is difficult for several reasons <ref type="bibr" target="#b0">[1]</ref>: 1. During a performance, a piece of music is produced by many instruments at once. 2. The notes in Indian classical music lie on a relative scale. 3. A raga has no fixed starting swara. 4. In Indian music, the notes do not have predetermined frequencies. 5. In classical music, the sequence of swaras within a raga is not fixed, as it allows various improvisations.</p><p>The key purpose of raga recognition is that it provides a good starting point for Hindustani music information retrieval and allows us to evaluate the performance and accuracy of raga prediction. Beyond music analysis, it also enables building playlists organized by raga.</p></div>
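<div xmlns="http://www.tei-c.org/ns/1.0"><p>The second and fourth difficulties above (relative scale, no fixed frequencies) mean that any automated note mapping must first fix the tonic. A minimal sketch of such a mapping, under the simplifying assumption of an equal-tempered twelve-semitone octave (real performances use microtonal shrutis, so this is only an approximation):</p>

```python
import math

# Chromatic swara names relative to the tonic Sa, one per equal-tempered
# semitone; lowercase marks komal (flat) variants. A simplification.
SWARAS = ["S", "r", "R", "g", "G", "m", "M", "P", "d", "D", "n", "N"]

def swara_from_freq(freq_hz, sa_hz):
    """Map a frequency to the nearest swara, relative to the tonic Sa."""
    semitones = round(12 * math.log2(freq_hz / sa_hz)) % 12
    return SWARAS[semitones]
```

<p>With Sa fixed at 240 Hz, a tone at 360 Hz (a 3:2 ratio) maps to P, and a tone at 480 Hz (one octave up) maps back to S, illustrating that swara identity depends only on the ratio to the tonic.</p></div>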
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related work</head><p>In this section, we review work done by other authors and analyze it for future scope, comparing various classifiers and their relevance and performance for raga identification. In <ref type="bibr" target="#b1">[2]</ref>, Sharma and Bali performed raga identification on four ragas (Des, Bhupali, Yaman, and Todi) using a dataset of live performances, both vocal and instrumental, with pitch class profiles and n-gram histograms as features for machine learning classifiers. They obtained 83.39% accuracy with the pitch class profile and 97.3% with the n-gram histogram. In <ref type="bibr" target="#b2">[3]</ref>, Ekta Patel and Savita Chauhan used the MATLAB toolbox to extract track features and the machine learning tool WEKA, which works on the .arff file format, applying Bayesian Net, Naive Bayes, Support Vector Machine (SVM), J48, Decision Table, and Random Forest classifiers to a dataset of the ragas Bhairav, Yaman, Shankara, and Saarang. The predominant challenges are complicated variables such as pitch and mood in the music track, the skipping of higher tones, and the transformation of dataset parameters across raags. Comparing the results before and after discretization, the probability-based classifiers gave more accurate results for raag identification, with Bayesian Net providing the best performance. In <ref type="bibr" target="#b3">[4]</ref>, Hiteshwari Sharma and R. S. Bali identified key variables for raga classification and applied a soft-computing fuzzy-set technique for raga recognition. They used a dataset of five ragas (Des, Bhupali, Yaman, Todi, and Pahadi) with three parameters (time, dirga swaras, and vadi) and achieved reasonable accuracy.
In <ref type="bibr" target="#b4">[5]</ref>, G. Pandey, C. Mishra, and P. Ipe introduced a Hidden Markov Model with pakad matching on a dataset of two ragas, Bhupali and Yaman Kalyan. They achieved 77% accuracy with the basic HMM and 87% accuracy with HMM combined with pakad matching. In <ref type="bibr" target="#b5">[6]</ref>, Muhammad Asim Ali and Zain Ahmed Siddiqui studied automatic music genre classification using machine learning, applying K Nearest Neighbor (KNN) and Support Vector Machine (SVM) to predict the genres of songs. They used the GTZAN dataset of 1000 songs, which spans ten genres: blues, hip-hop, jazz, classical, metal, reggae, country, pop, disco, and rock. Their comparison shows that SVM is a more effective classifier than KNN on this task. In <ref type="bibr" target="#b6">[7]</ref>, Snigdha Chillara, Kavitha A., Shwetha A. Neginhal, Shreya Haldia, and Vidyullatha K. addressed the genre classification problem and compared several models on the Free Music Archive small (fma_small) dataset. Two sorts of inputs were given to the models: the CNN models used spectrogram images, while the Logistic Regression and ANN models used audio features stored in a .csv file. They obtained 88.5% accuracy with the spectrogram-based CNN model, which compares favorably with the algorithms used by other authors.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Work Done</head><p>In this section, we briefly discuss the audio characteristics used and the machine learning algorithms applied, K Nearest Neighbor and Support Vector Machine.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Feature Extraction</head><p>Each audio signal comprises many features, but we need to fetch only the characteristics relevant to the problem we want to solve. The process of deriving such characteristics for analysis is referred to as feature extraction. Some of these characteristics are described briefly below.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Power Spectrogram</head><p>A spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. When applied to an audio signal, spectrograms are sometimes referred to as sonographs, voiceprints, or voicegrams. To determine the raga, we use the mean of the spectrogram to find which tone/pitch is used most.</p></div>
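<div xmlns="http://www.tei-c.org/ns/1.0"><p>The spectrogram-mean idea can be sketched in plain numpy (in practice the pipeline would call librosa.stft; the frame and hop lengths here are illustrative assumptions):</p>

```python
import numpy as np

def power_spectrogram(signal, frame_len=1024, hop=512):
    """Magnitude-squared short-time FFT: rows are frequency bins, columns are frames."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return (np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2).T

def mean_spectrum(signal):
    """Average the spectrogram over time: one energy value per frequency bin,
    a summary of which pitches dominate the clip."""
    return power_spectrogram(signal).mean(axis=1)
```

<p>For a clip dominated by a single tone, the largest entry of mean_spectrum falls at that tone's frequency bin.</p></div>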
<div xmlns="http://www.tei-c.org/ns/1.0"><head>MFCC-Mel-Frequency Cepstral Coefficients</head><p>This is one of the most important techniques for extracting attributes of an audio signal and is widely used when working with audio. The MFCCs of a signal are a small set of features (typically 10 to 20) that concisely describe the overall shape of the spectral envelope.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Spectral Centroid</head><p>The spectral centroid indicates the "center of mass" of the spectrum, computed as the magnitude-weighted mean of the frequencies present in the sound. If the frequencies are distributed evenly over a passage, the spectral centroid lies near the middle of the spectrum; if high frequencies dominate toward the end of the sound, the centroid shifts toward them.</p></div>
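<div xmlns="http://www.tei-c.org/ns/1.0"><p>The "center of mass" definition translates directly into numpy (librosa.feature.spectral_centroid computes the same quantity frame by frame; this whole-signal version is a simplification):</p>

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Magnitude-weighted mean frequency of the signal's spectrum, in Hz."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))
```

<p>A pure 440 Hz tone yields a centroid of roughly 440 Hz; mixing in high-frequency content pulls the centroid upward.</p></div>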
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Zero-Crossing Rate</head><p>The rate at which the sign of the signal varies is known as the zero-crossing rate: the rate at which the signal changes from positive to negative and vice versa. The zero-crossing rate is commonly used in speech recognition and music information retrieval. It takes high values for loud and noisy sounds, as in metal and rock.</p></div>
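<div xmlns="http://www.tei-c.org/ns/1.0"><p>Counting sign changes gives the same quantity that librosa.feature.zero_crossing_rate reports per frame; a whole-signal numpy sketch:</p>

```python
import numpy as np

def zero_crossing_rate(signal):
    """Fraction of consecutive sample pairs where the signal changes sign."""
    signs = np.sign(signal)
    signs[signs == 0] = 1  # treat exact zeros as positive to avoid double counting
    return np.count_nonzero(np.diff(signs)) / (len(signal) - 1)
```

<p>A 100 Hz sine sampled at 8000 Hz crosses zero about 200 times per second, giving a rate near 0.025; noisy metal or rock recordings score far higher.</p></div>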
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Roll-Off Frequency</head><p>Roll-off describes the behavior of a filter designed to attenuate frequencies above or below a certain point; it is called roll-off because the attenuation is gradual. The roll-off frequency marks the point at which this attenuation begins.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Spectral Bandwidth</head><p>Spectral bandwidth is the width of the band of frequencies over which a radiated spectral quantity is not less than half its maximum value. It measures the extent of the spectrum as the interval between its lower and higher frequencies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Dataset and Strategy</head><p>In this experiment, we used a dataset of audio files created by extracting 60-second audio clips from the internet. For music and audio analysis, the Python package Librosa is used; it provides the building blocks required for creating music information retrieval systems.</p><p>Another open-source machine learning library, Scikit-learn, supports both supervised and unsupervised learning methods and provides a variety of tools for model selection, data preprocessing, model fitting, and evaluation. In this experiment, we chose the ragas Yaman and Bhairavi. We split the recordings into 60-second frames, which lets the computer work only on the relevant parts of the song, such as the Pakad, Aaroh, and Avroh, and removes other noise and empty segments from the audio. Using the Librosa library, we created a .CSV file to store all the features extracted from each audio file: MFCC, spectrogram, bandwidth, centroid, zero-crossing rate, and roll-off. The flow of the process is shown below.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1: Process of Raga Identification [8]</head><p>This is a visual representation of the classification procedure, in which features are extracted from the audio frame and compared against the class with the closest mean. Using the Scikit-learn library, we implemented the KNN and SVM algorithms on the .CSV data file. We found that both KNN and SVM fit the raga classification task better than Logistic Regression.</p></div>
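<div xmlns="http://www.tei-c.org/ns/1.0"><p>The classification step can be outlined with Scikit-learn as below. The feature matrix here is synthetic (two well-separated Gaussian clusters standing in for the Yaman and Bhairavi rows of the .CSV file), so the resulting accuracies are illustrative only, not the paper's results:</p>

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for the extracted feature table: two classes,
# six features per clip (MFCC mean, centroid, zero-crossing rate, ...).
X = np.vstack([rng.normal(0.0, 1.0, (170, 6)),
               rng.normal(4.0, 1.0, (170, 6))])
y = np.array([0] * 170 + [1] * 170)  # 0 = Yaman, 1 = Bhairavi

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("KNN accuracy:", knn.score(X_te, y_te))
print("SVM accuracy:", svm.score(X_te, y_te))
```

<p>Replacing X and y with the real feature rows and raga labels from the .CSV file reproduces the pipeline of Figure 1.</p></div>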
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Result and Discussion</head><p>In our experiment, we chose the ragas Yaman and Bhairavi. We used a vocal and instrumental dataset consisting of 341 audio clips, of which 194 are of Yaman and 147 of Bhairavi.</p><p>The tables below show the accuracy of the KNN and SVM algorithms. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion and Future work</head><p>A short introduction to ragas and their attributes was presented. Prior approaches to raga classification and recognition were surveyed along with their datasets, implementations, accuracy, and limitations. In this paper, we discussed the classification of the ragas Yaman and Bhairavi using the K Nearest Neighbor (KNN) and Support Vector Machine (SVM) machine learning algorithms. We obtained good results with both KNN and SVM, but in our experiment KNN performed slightly better.</p><p>In the future, we will expand our dataset with many other ragas to obtain more accurate results and will also implement other classifiers to detect ragas in a well-defined manner.</p></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1: Results of Raga Identification accuracy using KNN</head><label>1</label><figDesc></figDesc><table><row><cell>KNN (k)</cell><cell>Train/Test Ratio</cell><cell>Classification Accuracy</cell></row><row><cell>1</cell><cell>80/20</cell><cell>98%</cell></row><row><cell>1</cell><cell>60/40</cell><cell>94%</cell></row><row><cell>1</cell><cell>40/60</cell><cell>93%</cell></row><row><cell>2</cell><cell>80/20</cell><cell>97%</cell></row><row><cell>2</cell><cell>60/40</cell><cell>93%</cell></row><row><cell>2</cell><cell>40/60</cell><cell>92%</cell></row><row><cell>3</cell><cell>80/20</cell><cell>97%</cell></row><row><cell>3</cell><cell>60/40</cell><cell>94%</cell></row><row><cell>3</cell><cell>40/60</cell><cell>93%</cell></row><row><cell>4</cell><cell>80/20</cell><cell>95%</cell></row><row><cell>4</cell><cell>60/40</cell><cell>93%</cell></row><row><cell>4</cell><cell>40/60</cell><cell>93%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2: Results of Raga Identification accuracy using SVM</head><label>2</label><figDesc>From this comparison, we can conclude that KNN at all neighbor values for the 80/20 Train/Test ratio gives the highest accuracy, outperforming SVM.</figDesc><table><row><cell>Classifier</cell><cell>Train/Test Ratio</cell><cell>Classification Accuracy</cell></row><row><cell>SVM</cell><cell>80/20</cell><cell>95%</cell></row><row><cell>SVM</cell><cell>60/40</cell><cell>95%</cell></row><row><cell>SVM</cell><cell>40/60</cell><cell>94%</cell></row></table></figure>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In both tables above, we implemented two different classifiers, KNN and SVM. Using KNN, the accuracy of raga identification varies with the number of neighbors. We divided the Train/Test ratio into three categories: 80/20, 60/40, and 40/60. For all KNN values, the highest accuracy was obtained at the 80/20 Train/Test ratio: for k values 1 to 4, we received 98%, 97%, 97%, and 95% accuracy respectively. Using SVM, we obtained 95% accuracy for the 80/20 Train/Test ratio.</p></div>
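<div xmlns="http://www.tei-c.org/ns/1.0"><p>The Train/Test ratio sweep reported in the two tables can be reproduced in outline (again on a synthetic stand-in for the extracted features; a fixed random_state makes the splits repeatable):</p>

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (170, 6)),   # stand-in "Yaman" features
               rng.normal(4.0, 1.0, (170, 6))])  # stand-in "Bhairavi" features
y = np.array([0] * 170 + [1] * 170)

results = {}
for test_size in (0.2, 0.4, 0.6):  # the 80/20, 60/40, and 40/60 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=0, stratify=y)
    for k in (1, 2, 3, 4):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        results[("KNN", k, test_size)] = knn.score(X_te, y_te)
    results[("SVM", 0, test_size)] = SVC().fit(X_tr, y_tr).score(X_te, y_te)
```

<p>On the real dataset this loop yields the accuracy grid of the two tables; on the synthetic stand-in every configuration scores highly because the clusters are cleanly separable.</p></div>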
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Machine Learning Algorithms</head><p>During our analysis, we found that a supervised machine learning approach is a good fit for our problem. We tried various classification algorithms and found that K Nearest Neighbor and Support Vector Machine are the most appropriate for our experiment.</p><p>K Nearest Neighbor (KNN): KNN is a supervised learning method. It is a simple but robust algorithm that is applied to both regression and classification problems. To make a prediction, the KNN algorithm uses the whole training dataset, assigning a data point to the category most common among its nearest neighbors in the training set.</p><p>Support Vector Machine (SVM): SVM is also a supervised learning method that is usually used for classification. The algorithm identifies a hyperplane that clearly divides the sample points with different labels, separating the classes onto different sides of the hyperplane.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Identifying Ragas in Indian Music</title>
		<author>
			<persName><forename type="first">Vijay</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Harit</forename><surname>Pandya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">V</forename><surname>Jawahar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">22nd International Conference on Pattern Recognition</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Comparison of ML classifiers for Raga recognition</title>
		<author>
			<persName><forename type="first">H</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Bali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. J. Sci. Res. Publ</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="1" to="5" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Raag detection in music using supervised machine learning approach</title>
		<author>
			<persName><forename type="first">E</forename><surname>Patel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chauhan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. J. Adv. Technol. Eng. Explor</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">29</biblScope>
			<biblScope unit="page" from="58" to="67" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Raga identification of Hindustani music using soft computing techniques</title>
		<author>
			<persName><forename type="first">H</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Bali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Recent Adv. Eng. Comput. Sci. RAECS</title>
		<imprint>
			<biblScope unit="page" from="6" to="8" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Tansen: A system for automatic raga identification</title>
		<author>
			<persName><forename type="first">G</forename><surname>Pandey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mishra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ipe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Indian Int. Conf. Artif. Intell</title>
				<imprint>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="1350" to="1363" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Automatic Music Genres Classification using Machine Learning</title>
		<author>
			<persName><forename type="first">Muhammad</forename><forename type="middle">Asim</forename><surname>Ali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zain</forename><forename type="middle">Ahmed</forename><surname>Siddiqui</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Advanced Computer Science and Applications</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">8</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Music Genre Classification using Machine Learning Algorithms : A comparison</title>
		<author>
			<persName><forename type="first">S</forename><surname>Chillara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Kavitha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Neginhal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Haldia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Vidyullatha</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019-05">May. 2019</date>
			<biblScope unit="page" from="851" to="858" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Raga Identification Techniques for Classifying Indian Classical Music: A Survey</title>
		<author>
			<persName><forename type="first">Kalyani</forename><forename type="middle">C</forename><surname>Waghmare</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Balwant</forename><forename type="middle">A</forename><surname>Sonkamble</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Signal Processing Systems</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="2017-12">December 2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
