<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Evaluation of Unsupervised Learning Results: Making the Seemingly Impossible Possible</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
				<date type="published" when="2019-05-04">May 4, 2019</date>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ricardo</forename><forename type="middle">J G B</forename><surname>Campello</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Newcastle</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Evaluation of Unsupervised Learning Results: Making the Seemingly Impossible Possible</title>
					</analytic>
					<monogr>
						<imprint>
							<date type="published" when="2019-05-04">May 4, 2019</date>
						</imprint>
					</monogr>
					<idno type="MD5">1E4E4F6BC2458318C2B30268788FCBFF</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T06:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>When labels are not available, evaluating the final or intermediate results produced by a learning algorithm is usually not simple. In cluster analysis and unsupervised outlier detection, evaluation is important in many different respects. It is a crucial task, e.g., for model selection, model validation, and the assessment of ensemble members' accuracy and diversity. In cluster analysis this task has been investigated for decades, yet it is relatively well understood only under certain oversimplified model assumptions. In outlier detection, unsupervised evaluation is still in its infancy. Even when labels are available in the form of a ground truth, such as in controlled benchmarking experiments, evaluation can still be challenging, because the semantics described by the ground-truth labels may not be properly captured by the data as represented in the given feature space.</p><p>In this talk, I intend to discuss some particular aspects of clustering and outlier evaluation, focusing on a few recent results as well as some challenges for future research.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body/>
		<back>
			<div type="references">
				<listBibl/>
			</div>
		</back>
	</text>
</TEI>
