<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">On Discovering Deterministic Relationships in Multi-Label Learning via Linked Open Data</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Eirini</forename><surname>Papagiannopoulou</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Informatics</orgName>
								<orgName type="institution">Aristotle University of Thessaloniki</orgName>
								<address>
									<postCode>54124</postCode>
									<settlement>Thessaloniki</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Grigorios</forename><surname>Tsoumakas</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Informatics</orgName>
								<orgName type="institution">Aristotle University of Thessaloniki</orgName>
								<address>
									<postCode>54124</postCode>
									<settlement>Thessaloniki</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nick</forename><surname>Bassiliades</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Informatics</orgName>
								<orgName type="institution">Aristotle University of Thessaloniki</orgName>
								<address>
									<postCode>54124</postCode>
									<settlement>Thessaloniki</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">On Discovering Deterministic Relationships in Multi-Label Learning via Linked Open Data</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">453DE32B2801569DDC7B201E6D4C3A79</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T06:24+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>multi-label learning</term>
					<term>linked open data</term>
					<term>semantics</term>
					<term>WordNet</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In multi-label learning, each instance can be associated with one or more binary target variables. Multi-label learning problems are common in many applications, e.g. in text classification, where a news article can be about both politics and finance. The main motivation of multi-label learning algorithms is to exploit label dependencies in order to improve prediction accuracy. In this paper, we present ongoing work on a method that uses the linked open data cloud to detect relationships between labels, enriches the set of labels with new concepts that are superclasses of two or more labels, trains a model on the enhanced training set and, finally, makes predictions on the enhanced test set in order to improve the prediction accuracy on the initial labels.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>In multi-label data, instances are related to one or more binary target variables, commonly called labels. Learning from such data has received a lot of attention from the machine learning and data mining communities in recent years. This is due to the multitude of practical applications it arises in, and the interesting research challenges it presents, such as exploiting label dependencies, learning from rare labels and scaling up to a large number of labels <ref type="bibr" target="#b0">[1]</ref>.</p><p>In several multi-label learning problems, the labels are organized as a tree or a directed acyclic graph, and there exist approaches that exploit such structure <ref type="bibr" target="#b1">[2]</ref>. However, in most multi-label learning problems, only flat labels are provided, without any accompanying structure. Yet, it is often the case that implicit deterministic relationships exist among the labels. For example, in the ImageCLEF 2011 photo annotation task <ref type="bibr" target="#b2">[3]</ref>, the learning problem involved 99 labels without any accompanying semantic meta-data, among which certain deterministic relationships did exist. In particular, there were several groups of mutually exclusive labels, such as the four seasons autumn, winter, spring, summer and the person-related labels single person, small group, big group, no persons. There were also several positive entailment (consequence) relationships, such as river → water and car → vehicle.</p><p>These observations motivated us to consider the automated discovery of such deterministic relationships as potentially interesting and useful knowledge, and the exploitation of this explicit knowledge for improving the accuracy of multi-label learning algorithms. 
In our previous work in this direction <ref type="bibr" target="#b3">[4]</ref>, we discovered such relationships from the data using techniques based on association rule mining and exploited them via a deterministic Bayesian network representation. Here, we investigate the possibility of exploiting Linked Open Data (LOD) for discovering such relationships. In particular, we aim to find common ancestor concepts among the labels. We then introduce these ancestors as additional labels, thus expanding the label space, inspired by approaches that similarly expand the feature space of data mining problems based on LOD <ref type="bibr" target="#b4">[5]</ref>. Finally, we apply traditional multi-label learning algorithms that indirectly exploit relationships and observe that accuracy gains are often achieved this way.</p><p>The rest of this paper is organized as follows. Section 2 discusses related work on discovering and exploiting deterministic relationships in multi-label learning. In Section 3, we present our method in detail, with illustrative examples. In Section 4, we present our evaluation approach and describe the results of our experiments. Finally, in the last section, we discuss open research issues and some future extensions of this work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>The idea of discovering and exploiting deterministic label relationships from multi-label data was first discussed in <ref type="bibr" target="#b5">[6]</ref>, where relationships were referred to as constraints. An interesting general point of <ref type="bibr" target="#b5">[6]</ref> was that label constraints can be exploited either at the learning phase or at a post-processing phase. In addition, it presented four basic types of constraints and noted that more complex types of constraints can be represented by combining these basic constraints with logical connectors. For discovering constraints, it proposed association rule mining, followed by removal of redundant rules that were more general than others. For exploiting constraints, it proposed two post-processing approaches for the label ranking task in multi-label learning. These approaches correct a predicted ranking when it violates the constraints by searching for the nearest ranking that is consistent with the constraints. They only differ in the function used to evaluate the distance between the invalid and a valid ranking. Results on synthetic data with known constraints showed that constraint exploitation can be helpful, but results on real-world data and automatically discovered constraints did not lead to predictive performance improvements.</p><p>A rule-learning approach for discovering and exploiting deterministic label relationships in multi-label learning was recently proposed in <ref type="bibr" target="#b6">[7]</ref>. In particular, a separate rule model is constructed for each label, but this model uses the rest of the labels as additional features. This can lead to rules whose preconditions include labels, possibly alongside ordinary input features, for deriving a particular label. Such rules are a natural representation of entailment relationships among labels (and input features).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Our Approach</head><p>Our general goal is to discover deterministic relationships among labels via the LOD cloud. In this first step towards our goal, we focused specifically on discovering common ancestors of existing labels, with which we extend the existing label space of a learning problem. We expect standard multi-label learning algorithms to benefit from this explicit representation of latent relationships among labels. We also focused on WordNet<ref type="foot" target="#foot_0">1</ref> as the LOD resource to search for these relationships, and leave as future work the exploitation of DBpedia and other resources.</p><p>Names of label attributes sometimes contain an auxiliary keyword that emphasizes their role as target variables. This typically arises in textual datasets, where the names of label attributes may coincide with names of input attributes (words). For example, all labels of the bibtex and bookmarks datasets <ref type="bibr" target="#b7">[8]</ref>, the delicious dataset <ref type="bibr" target="#b8">[9]</ref> and the EUR-Lex datasets <ref type="bibr" target="#b9">[10]</ref> start with "tag ". To deal with this issue automatically, we first tokenize label names with a standard set of delimiters, such as dash, underscore, space, comma and period. Tokens that appear in all label names are then removed, and the remaining tokens are concatenated again with a space separator between them.</p><p>We then look up each label name in WordNet. If a label name has multiple senses, we assume that the correct sense is the most frequent one according to WordNet, which is based on semantically tagged corpora. Next, we recursively obtain the hypernym-synsets of the determined sense of the label (i.e. the broader concepts) up to the root of WordNet. 
We then examine all pairs of labels that we managed to find in WordNet, and their common ancestors are used for expanding the original label space of the learning problem that we are examining. In Section 4, we describe two different approaches that ignore some very general senses typically arising as common parents of labels that seem to have no semantic relationship.</p><p>To exemplify our approach, suppose we are examining the pair of labels (Winter, Summer) from the ImageCLEF 2011 photo annotation task <ref type="bibr" target="#b2">[3]</ref>. No auxiliary keywords are used inside these label names, so the first pre-processing step leaves them unaltered. We then examine the senses that are related to these labels (the number inside parentheses at the end of the sense description denotes the sense's frequency):</p><p>-Winter 1. the coldest season of the year; in the northern hemisphere it extends from the winter solstice to the vernal equinox (24) 2. spend the winter (2) -Summer 1. the warmest season of the year; in the northern hemisphere it extends from the summer solstice to the autumnal equinox (58) 2. the period of finest development, happiness, or beauty (0) 3. spend the summer (0)</p><p>Based on their frequency, we assume that the 1st sense of each label is the correct one and continue to obtain the following branches of hypernym-synsets of these senses up to the root of WordNet:</p><p>-Winter: wintertime → season → period → measure:quantity:amount → abstraction → entity -Summer: summertime → season → period → measure:quantity:amount → abstraction → entity</p><p>This will lead to the addition of the following labels: season, period, measure:quantity:amount, abstraction and entity. Recall that our method ignores some common parents that appear at the top of the WordNet hierarchy, because they will be added for all label pairs (or for most of them) and thus will not bring any new information. 
In this example, we would not add abstraction and entity as labels.</p><p>After the label-addition process, we fill in the values of the new labels: a new label is true for an instance if at least one of its child labels is true for that instance, and false otherwise. In our running example, if label Winter or label Summer is true, then Season will be true, too. Otherwise, Season will be false.</p></div>
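The pipeline described above (label-name pre-processing, common-ancestor discovery minus very general senses, and filling in the values of the new labels) can be sketched in Python. The hypernym chains below are hand-coded toy stand-ins for a WordNet lookup (a real implementation would query WordNet through an API); the shortened sense names, the ignored-sense set and all function names are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of the label-space expansion; the hypernym table is a hypothetical
# stand-in for a WordNet lookup on the most frequent sense of each label.

def preprocess_label_names(names, delimiters="-_ ,."):
    """Tokenize label names and drop tokens shared by ALL labels
    (auxiliary keywords, such as the 'tag' prefix in bibtex or EUR-Lex)."""
    table = str.maketrans({d: " " for d in delimiters})
    tokenized = [n.translate(table).split() for n in names]
    common = set(tokenized[0]).intersection(*map(set, tokenized[1:]))
    return [" ".join(t for t in toks if t not in common) for toks in tokenized]

# Toy hypernym chains for the running example (most frequent sense only).
HYPERNYMS = {
    "winter": ["season", "period", "measure", "abstraction", "entity"],
    "summer": ["season", "period", "measure", "abstraction", "entity"],
}

# Very general senses that are ignored when expanding the label space.
IGNORED = {"abstraction", "entity"}

def common_ancestors(label_a, label_b):
    """Shared hypernyms of a label pair, minus the very general senses."""
    shared = set(HYPERNYMS.get(label_a, [])).intersection(HYPERNYMS.get(label_b, []))
    return shared - IGNORED

def expand_labels(instance_labels, new_label, children):
    """A new ancestor label is true iff at least one of its children is true."""
    instance_labels[new_label] = any(instance_labels.get(c, False) for c in children)
    return instance_labels

names = preprocess_label_names(["tag winter", "tag summer"])
print(names)                      # ['winter', 'summer']
print(sorted(common_ancestors(*names)))   # ['measure', 'period', 'season']
row = expand_labels({"winter": True, "summer": False}, "season", names)
print(row["season"])              # True
```

Note that `expand_labels` implements exactly the value-filling rule of the running example: Season becomes true whenever Winter or Summer is true.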
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experiments</head><p>We use the Calibrated Label Ranking (CLR) <ref type="bibr" target="#b10">[11]</ref> problem transformation method for learning multi-label models, which we have mentioned above (Section 2). For the binary classification problems, we employ linear Support Vector Machines. We use the implementations from Mulan <ref type="bibr" target="#b11">[12]</ref> and Weka <ref type="bibr" target="#b12">[13]</ref>, respectively. We experiment on the 6 multi-label datasets shown in Table <ref type="table">1</ref>. These datasets were selected because their labels do not have obscure names (e.g. label1, label2 or class1, class2). We split each dataset into a training set and a test set with 70% and 30% of the examples, respectively. We discuss the results in terms of Mean Average Precision (MAP) and Logarithmic Loss (LL).</p><p>Table <ref type="table">1</ref> presents the above measures for CLR applied to: (i) the original data, and (ii) the expanded label space via two versions of our approach. In the 1st version, called LOD1, we take into consideration the hypernym-synsets of the determined sense of the label for up to two layers, i.e. we obtain the parent and grandparent senses of the determined sense, and we ignore the following 32 very general senses that typically arise as common parents of labels that seem to have no semantic relationship: substance, content, message, theme, topic, subject, whole, unit, object, entity, abstraction, domain, activity, individual, someone, somebody, mortal, soul, organism, being, cause, go, locomote, formation, alter, modify, change, alteration, modification, happening, occurrence, occurrent. These were selected based on early experiments. 
In the 2nd version of our approach, called LOD2, we take into consideration the hypernym-synsets of the determined sense of the label for all layers up to the root of WordNet, and we ignore only the following 6 general senses: whole, unit, object, entity, abstraction, tag.</p><p>In Table <ref type="table">1</ref>, we notice that LOD1 leads to improvements in the Log-Loss measure (smaller is better) on all datasets. In addition, there are 4 datasets (bibtex, delicious, bookmarks, IMDB-F) where our method improves MAP, but there are also 2 (IC2011, corel5k) where we observe a decrease in this measure. Furthermore, we notice that LOD2 also leads to improvements in the Log-Loss measure on almost all datasets (except for IC2011, where Log-Loss increases). Moreover, there are 3 datasets (bibtex, delicious, bookmarks) where our method improves MAP, but in the other 3 (IC2011, corel5k, IMDB-F) we observe a decrease in this measure.</p><p>Table <ref type="table">1</ref>. Mean Average Precision (MAP) and Log Loss of CLR: without using our method, i.e. standard CLR (std CLR), following the 1st approach (LOD1) and following the 2nd approach (LOD2). The next three columns show the number of additional labels for the LOD1 and LOD2 approaches and the proportion of labels found in WordNet per dataset. In Table <ref type="table">1</ref>, we also give the number of additional labels for the LOD1 and LOD2 approaches per dataset, in order to give an impression of the increase in the number of labels. We also give the proportion of labels that are found in WordNet for each dataset.</p></div>
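The MAP figures discussed above can be made concrete with a small sketch, under one common definition of example-based average precision for a label ranking (the exact definition implemented by an evaluation library may differ in details such as tie handling); the label names and scores below are made up for illustration.

```python
# Example-based Mean Average Precision (MAP) for label rankings: for each
# instance, rank labels by predicted score and average the precision values
# at the ranks of the relevant labels, then average over instances.

def average_precision(scores, relevant):
    """scores: dict mapping label to predicted score; relevant: set of true labels."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    hits, precisions = 0, []
    for rank, label in enumerate(ranked, start=1):
        if label in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(per_instance):
    """per_instance: list of (scores, relevant) pairs, one per test instance."""
    return sum(average_precision(s, r) for s, r in per_instance) / len(per_instance)

# Both relevant labels are ranked above the irrelevant one, so AP is perfect.
ap = average_precision({"water": 0.9, "river": 0.8, "car": 0.1}, {"water", "river"})
print(ap)  # 1.0
```

A higher MAP is better, which is why the bibtex, delicious and bookmarks rows in Table 1 count as improvements for LOD1 and LOD2.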
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusions and Future Work</head><p>This work has introduced a method that detects relationships among labels within multi-label datasets using WordNet (as a LOD resource) and exploits them by enriching the set of labels with their common parents, in order to make the learning process more accurate. It is in our immediate plans to conduct experiments where we add the lowest common subsumer (LCS) instead of all common parents for each pair of labels. It would be useful to introduce a criterion that determines whether or not to reject the LCS, depending on how far it is from the root or from the initial label of the dataset (its child). We believe that our approach can be further extended and improved by exploiting additional resources of the LOD cloud, such as DBpedia, LinkedGeoData, the Geospecies knowledge base and Bio2RDF. It is also in our immediate plans to include a general first pass over all labels in order to detect the domain they refer to. In this way, we will select the appropriate sense of a label based on the domain of the dataset's labels, and not on the assumption that the correct sense is the most common among all the senses. Another important direction is the generalization of our approach so as to be able to discover additional types of relationships among the existing labels (e.g. mutual exclusion, spouse). In this work, we simply extended the original label space with the ancestor labels and relied on standard multi-label learning algorithms to exploit the discovered knowledge. In the future, we can also investigate direct exploitation of this knowledge via techniques such as <ref type="bibr" target="#b3">[4]</ref>. 
Finally, we intend to apply our approach to additional datasets, to employ additional evaluation measures, to use cross-validation, and to include statistical tests for the significance of our method's improvements.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The last columns show the number of examples of each dataset.</figDesc><table><row><cell>dataset</cell><cell>MAP std CLR</cell><cell>MAP LOD1</cell><cell>MAP LOD2</cell><cell>Log Loss std CLR</cell><cell>Log Loss LOD1</cell><cell>Log Loss LOD2</cell><cell>extra labels LOD1</cell><cell>extra labels LOD2</cell><cell>% found</cell><cell>samples</cell></row><row><cell>IC2011</cell><cell>.3275</cell><cell>.3263</cell><cell>.3245</cell><cell>.7221</cell><cell>.7003</cell><cell>.7315</cell><cell>17</cell><cell>69</cell><cell>75/99</cell><cell>8000</cell></row><row><cell>bibtex</cell><cell>.3757</cell><cell>.3838</cell><cell>.3833</cell><cell>.9288</cell><cell>.8840</cell><cell>.7881</cell><cell>8</cell><cell>63</cell><cell>119/159</cell><cell>7395</cell></row><row><cell>delicious</cell><cell>.1596</cell><cell>.1646</cell><cell>.1661</cell><cell>.9273</cell><cell>.8305</cell><cell>.7901</cell><cell>190</cell><cell>319</cell><cell>778/983</cell><cell>16105</cell></row><row><cell>bookmarks</cell><cell>.2353</cell><cell>.2435</cell><cell>.2358</cell><cell>.9401</cell><cell>.8842</cell><cell>.7945</cell><cell>22</cell><cell>79</cell><cell>148/208</cell><cell>87856</cell></row><row><cell>corel5k</cell><cell>.0612</cell><cell>.0584</cell><cell>.0580</cell><cell>.9795</cell><cell>.8503</cell><cell>.7481</cell><cell>84</cell><cell>214</cell><cell>367/374</cell><cell>5000</cell></row><row><cell>IMDB-F</cell><cell>.1161</cell><cell>.1163</cell><cell>.1160</cell><cell>.8256</cell><cell>.7109</cell><cell>.6411</cell><cell>5</cell><cell>14</cell><cell>23/28</cell><cell>120919</cell></row></table></figure>
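The lowest common subsumer idea from the future-work plans can be sketched simply: given two hypernym chains ordered from the label itself up to the root, the LCS is the first concept on one chain that also occurs on the other. This is a minimal illustration under that assumption; the function name and the toy chains are hypothetical.

```python
# Sketch of picking the lowest common subsumer (LCS) of two labels from
# their hypernym chains, each ordered from the label itself up to the root.

def lowest_common_subsumer(chain_a, chain_b):
    """Return the most specific concept shared by both chains,
    or None if the chains are disjoint."""
    on_b = set(chain_b)
    for concept in chain_a:
        if concept in on_b:
            return concept
    return None

winter = ["winter", "season", "period", "measure", "abstraction", "entity"]
summer = ["summer", "season", "period", "measure", "abstraction", "entity"]
print(lowest_common_subsumer(winter, summer))  # season
```

Adding only the LCS (here, season) instead of all common parents would keep the expanded label space considerably smaller.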
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://wordnet.princeton.edu/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Introduction to the special issue on learning from multi-label data</title>
		<author>
			<persName><forename type="first">G</forename><surname>Tsoumakas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">H</forename><surname>Zhou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">88</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="1" to="4" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Decision trees for hierarchical multi-label classification</title>
		<author>
			<persName><forename type="first">C</forename><surname>Vens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Struyf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Schietgat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Džeroski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Blockeel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">73</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="185" to="214" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">The CLEF 2011 photo annotation and concept-based retrieval tasks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Nowak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Nagel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Liebetrau</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CLEF (Notebook Papers/Labs/Workshop)</title>
				<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Discovering and exploiting entailment relationships in multi-label learning</title>
		<author>
			<persName><forename type="first">C</forename><surname>Papagiannopoulou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tsoumakas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Tsamardinos</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1404.4038 [cs.LG]</idno>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Unsupervised generation of data mining features from linked open data</title>
		<author>
			<persName><forename type="first">H</forename><surname>Paulheim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fürnkranz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2nd International Conference on Web Intelligence, Mining and Semantics, WIMS &apos;12</title>
				<meeting><address><addrLine>Craiova, Romania</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012">June 6-8, 2012</date>
			<biblScope unit="page">31</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Multi-label classification with label constraints</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fürnkranz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ECML PKDD 2008 Workshop on Preference Learning</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Stacking label features for learning multilabel rules</title>
		<author>
			<persName><forename type="first">E</forename><surname>Loza Mencía</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Janssen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Discovery Science -17th International Conference, DS 2014</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting><address><addrLine>Bled, Slovenia</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2014">October 8-10, 2014</date>
			<biblScope unit="volume">8777</biblScope>
			<biblScope unit="page" from="192" to="203" />
		</imprint>
	</monogr>
	<note>Proceedings.</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Multilabel text classification for automated tag suggestion</title>
		<author>
			<persName><forename type="first">I</forename><surname>Katakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tsoumakas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Vlahavas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ECML/PKDD 2008 Discovery Challenge</title>
				<meeting>the ECML/PKDD 2008 Discovery Challenge<address><addrLine>Antwerp, Belgium</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Effective and efficient multilabel classification in domains with large number of labels</title>
		<author>
			<persName><forename type="first">G</forename><surname>Tsoumakas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Katakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Vlahavas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ECML/PKDD 2008 Workshop on Mining Multidimensional Data (MMD&apos;08)</title>
				<meeting>ECML/PKDD 2008 Workshop on Mining Multidimensional Data (MMD&apos;08)</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="30" to="44" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Efficient pairwise multilabel classification for large scale problems in the legal domain</title>
		<author>
			<persName><forename type="first">E</forename><surname>Loza Mencia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fürnkranz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">12th European Conference on Principles and Practice of Knowledge Discovery in Databases, PKDD 2008</title>
				<meeting><address><addrLine>Antwerp, Belgium</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="50" to="65" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Multilabel classification via calibrated label ranking</title>
		<author>
			<persName><forename type="first">J</forename><surname>Fürnkranz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">L</forename><surname>Mencia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Brinker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">73</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="133" to="153" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Mulan: A java library for multi-label learning</title>
		<author>
			<persName><forename type="first">G</forename><surname>Tsoumakas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Spyromitros-Xioufis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vilcek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Vlahavas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research (JMLR)</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="2411" to="2414" />
			<date type="published" when="2011-07-12">July 12 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The weka data mining software: An update</title>
		<author>
			<persName><forename type="first">M</forename><surname>Hall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Frank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Holmes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Pfahringer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Reutemann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">H</forename><surname>Witten</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">SIGKDD Explorations</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
