<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
<title level="a" type="main">PrefWork - a framework for the user preference learning methods testing</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Alan</forename><surname>Eckhardt</surname></persName>
							<email>eckhardt@ksi.mff.cuni.cz</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Software Engineering</orgName>
								<orgName type="institution">Charles University</orgName>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Institute of Computer Science</orgName>
								<orgName type="institution">Czech Academy of Science</orgName>
								<address>
									<settlement>Prague</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
<title level="a" type="main">PrefWork - a framework for the user preference learning methods testing</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">796EE59A67FAE5E151752B1144ECC761</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T04:47+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>PrefWork is a framework for testing methods that induce user preferences, and it is described thoroughly in this paper. A reader willing to use PrefWork will find here all the necessary information: sample code, configuration files and results of the testing are presented. Related approaches to the testing of data mining methods are compared to ours. To the best of our knowledge, there is no software available specifically for testing preference learning methods.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>User preference learning is a task that allows many different approaches. Some specific issues differentiate it from a usual data mining task. User preferences are different from measurements of a physical phenomenon or demographic information about a country; they are much more focused on the objects of interest and involve psychology or economics.</p><p>When we want to choose the right method for user preference learning, e.g. for an e-shop, the best way is to evaluate all possible methods and choose the best one. The problems with testing methods for preference learning are: how to evaluate these methods automatically; how to cope with different sources of data and different types of attributes; how to measure the suitability of a method; and how to personalise the recommendation for every user individually.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related work</head><p>The most popular tool related to PrefWork is the open source project Weka <ref type="bibr" target="#b0">[1]</ref>. Weka has been in development for many years and has become the most widely used tool for data mining. It offers many classifiers, regression methods, clustering algorithms, data preprocessing, etc. However, this versatility is also its weakness: Weka can be used for any given task, but it has to be customised, and the developer has to choose from a very wide range of possibilities. For our purposes, Weka is too heavyweight.</p><p>The work on this paper was supported by Czech projects MSM 0021620838, 1ET 100300517 and GACR 201/09/H057.</p><p>RapidMiner <ref type="bibr" target="#b1">[2]</ref> has a nice user interface and is in a way similar to Weka. It is also written in Java and its source code is available. However, it is not easier to use than Weka: its user interface is more polished, but the layout of Weka is more intuitive (allowing various components represented on a plane to be connected).</p><p>R <ref type="bibr" target="#b2">[3]</ref> is statistical software based on its own programming language. This is its biggest inconvenience: a user willing to use R has to learn yet another programming language.</p><p>There are also commercial tools such as SAS Enterprise Miner <ref type="bibr" target="#b3">[4]</ref>, SPSS Clementine <ref type="bibr" target="#b4">[5]</ref>, etc. We do not consider these because of the need to buy a (very expensive) licence.</p><p>We must also mention the recently developed Winston <ref type="bibr" target="#b5">[6]</ref> by T. Horváth. Winston might suit our needs, because it is lightweight and also has a nice user interface, but at the current stage it offers few methods and no support for method testing. It is more a tool for teaching data mining than for real-world method testing.</p><p>We work with ratings that the user has assigned to items. This use case is well known and used across the internet. Many other approaches to user preference elicitation serve as an inspiration for extending our framework. An alternative to ratings has been proposed in <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>: instead of ratings, the system requires direct feedback from the user about attribute values. The user has to specify in which values the given recommendation can be improved. This approach is called critique-based recommendation.</p><p>Among other approaches, we should also mention the work of Kiessling <ref type="bibr" target="#b8">[9]</ref>, which uses user behaviour as the source for preference learning.</p><p>We also need publicly available implementations of user preference learning algorithms in order to compare various methods among themselves. This is a strength of PrefWork: any existing method that works with ratings can be integrated into PrefWork using a special adaptor for each tool (see Section 4.3). There is a somewhat dated implementation of collaborative filtering, Cofi <ref type="bibr" target="#b9">[10]</ref>, and a brand new one, Mahout <ref type="bibr" target="#b10">[11]</ref> (released on 7 April 2009), developed by the Apache Lucene project. Cofi uses the Taste framework <ref type="bibr" target="#b11">[12]</ref>, which became a part of Mahout. We expect Taste in Mahout to perform better than Cofi, so we will try to migrate our PrefWork adaptor from Cofi to Mahout. Finally, there is IGAP <ref type="bibr" target="#b12">[13]</ref>, a tool for learning fuzzy logic programs in the form of rules that correspond to user preferences. Unfortunately, IGAP is not yet publicly available for download.</p><p>We did not find any other mining algorithm specialised in user preferences that is available for free download, but we often use the already mentioned Weka. It is a powerful tool that can be more or less easily integrated into our framework, and it provides a reasonable comparison of a non-specialised data mining algorithm to methods specialised for preference learning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">User model</head><p>To make this article self-contained, we briefly describe our user model, as in <ref type="bibr" target="#b13">[14]</ref>. The model is based on a scoring function that assigns a score to every object. The user's rating of objects is a fuzzy subset of X (the set of all objects), i.e. a function R : X → [0, 1], where 0 means the least preferred and 1 the most preferred object. Our scoring function is divided into two steps.</p><p>Local preferences In the first step, which we call local preferences, all attribute values of an object o are normalised using fuzzy sets f_i : D_{A_i} → [0, 1]. These fuzzy sets are also called objectives or preferences over attributes. With this transformation, the original space of objects' attributes</p><formula xml:id="formula_0">X = ∏_{i=1}^{N} D_{A_i} is transformed into X' = [0, 1]^N.</formula><p>Moreover, we know that the object o ∈ X with transformed attribute values equal to [1, . . . , 1] is the most preferred object. Such an object probably does not exist in the real world, though. On the other hand, the object with values [0, . . . , 0] is the least preferred, and is more likely to be found in reality.</p><p>Global preferences In the second step, called global preferences, the normalised attribute values are aggregated into the overall score of the object using an aggregation function @ : [0, 1]^N → [0, 1]. The aggregation function is also often called a utility function.</p><p>The aggregation function may have different forms; one of the most common is a weighted average, as in the following formula:</p><formula xml:id="formula_1">@(o) = (2 · f_Price(o) + 1 · f_Display(o) + 3 · f_HDD(o) + 1 · f_RAM(o)) / 7,</formula><p>where f_A is the fuzzy set for the normalisation of attribute A.</p><p>A totally different approach was proposed in <ref type="bibr" target="#b14">[15]</ref>. It uses the training dataset as a partitioning of the normalised space X'. For example, if we have an object with normalised values [0.4, 0.2, 0.5] and rating 3, any object with better attribute values (e.g. [0.5, 0.4, 0.7]) is supposed to have a rating of at least 3. In this way, we can find the highest lower bound on the rating of any object with an unknown rating. In <ref type="bibr" target="#b14">[15]</ref>, a method was also proposed for interpolating ratings between objects with known ratings, even using the ideal (non-existent) virtual object with normalised values [1, . . . , 1] and rating 6.</p></div>
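The two-step scoring just described can be sketched in Java. The weights 2, 1, 3, 1 (summing to 7) come from the example aggregation formula; the linear fuzzy set for the price and its value range are illustrative assumptions, not PrefWork's actual code.

```java
// Sketch of the two-step user model: local preferences normalise each
// attribute into [0, 1] via a fuzzy set, global preferences aggregate
// the normalised values with a weighted average.
public class TwoStepScoring {

    // Local preference for price: a simple linear fuzzy set where a lower
    // price is better. The [min, max] range is an illustrative assumption;
    // in PrefWork the objectives are learned per user.
    static double fPrice(double price, double min, double max) {
        if (price <= min) return 1.0;
        if (price >= max) return 0.0;
        return (max - price) / (max - min);
    }

    // Global preference: the weighted average @ with weights 2, 1, 3, 1
    // over f_Price, f_Display, f_HDD, f_RAM, as in the example formula.
    static double aggregate(double fP, double fD, double fH, double fR) {
        return (2 * fP + 1 * fD + 3 * fH + 1 * fR) / 7.0;
    }

    public static void main(String[] args) {
        double p = fPrice(800, 500, 2000);          // 1200/1500 = 0.8
        double score = aggregate(p, 0.5, 0.9, 0.6); // (1.6+0.5+2.7+0.6)/7
        System.out.println(score);
    }
}
```

The object [1, . . . , 1] from the text indeed maximises this function: every fuzzy set contributes its weight in full, so the aggregate is exactly 1.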
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">PrefWork</head><p>Our tool PrefWork was initially developed as the master thesis of Tomáš Dvořák <ref type="bibr" target="#b15">[16]</ref>, who implemented it in Python. In this initial implementation, only Id3 decision trees and collaborative filtering were available. For better ease of use, and for the possibility of integrating other methods, PrefWork was later rewritten in Java by the author, and many more features have been added since then. In the following sections, the components of PrefWork are described.</p><p>Most of the components can be configured by XML configurations. Samples of these configurations and of the Java interfaces will be provided for each component. We omit configuration methods from the Java interfaces, such as configTest(configuration, section), which configures a component from a section of an XML file. Data types of function arguments are also omitted for brevity.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">The workflow</head><p>In this section, a sample workflow with PrefWork is described.</p><p>The structure of PrefWork is shown in Figure <ref type="figure" target="#fig_0">1</ref>. There are four different configuration files: one for database access configuration (confDbs), one for datasources (confDatasources), one for methods (confMethods) and finally one for PrefWork runs (confRuns). A run consists of three components: a set of methods, a set of datasets and a set of ways to test the methods. Every method is tested on every dataset using every way of testing. For each case, the results of the testing are written into a CSV file.</p><p>A typical situation a researcher working with PrefWork finds himself in is: "I have a new idea X. I am really interested in how it performs on dataset Y."</p><p>The first step is to create a corresponding Java class X that implements the interface InductiveMethod (see Section 4.3) and to add a section X to confMethods.xml. Then copy an existing entry defining a run (e.g. IFSA, see Section 4.5) and add method X to its methods section. Run ConfigurationParser and correct all errors in the new class (and there will be some, for sure). After the run has finished correctly, process the CSV file with the results to see how X performed in comparison with other methods.</p><p>A similar case is introducing a new dataset into PrefWork: confDatasets.xml and confDBs.xml have to be edited if the data are in an SQL database or in a CSV file. Otherwise, a new Java class (see Section 4.2) able to handle the new type of data has to be created. For example, we still have not implemented a class for handling ARFF files; these files carry the definitions of their attributes in themselves, so the configuration in confDatasets.xml would be much simpler (see Section 4.2 for an example of the configuration of a datasource with its attributes).</p></div>
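A run entry of the kind described above might look like the following sketch. The element names and attributes here are illustrative assumptions modelled on the structure the text describes (a named run grouping methods, datasources and tests, plus a run element selecting the run to execute); they are not PrefWork's actual schema.

```xml
<!-- Hypothetical fragment of confRuns.xml; element names are assumptions. -->
<runs>
  <IFSA>
    <methods>
      <method>Statistical</method>
      <method>Mean</method>
    </methods>
    <datasources>
      <datasource>NotebooksIFSA</datasource>
    </datasources>
    <tests>
      <TestTrain ratio="0.8" output="results/ifsa.csv"/>
    </tests>
  </IFSA>
</runs>
<!-- The run to be executed is selected in the run section of the same file. -->
<run>IFSA</run>
```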
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Datasource</head><p>A datasource is, as the name hints, the source of data for the inductive methods. Currently, we work only with ratings of objects. Data are vectors whose first three attributes typically are the user id, the object id and the rating of the object; the attributes of the object follow. There is a special column that contains a random number associated with each rating; its purpose will be described later.</p><p>Every datasource has to implement the following methods:</p><p>interface BasicDataSource {
  boolean hasNextRecord();
  void setFixedUserId(value);
  List&lt;Object&gt; getRecord();
  Attribute[] getAttributes();
  Integer getUserId();
  void setLimit(from, to, recordsFromRange);
  void restart();
  void restartUserId();
}</p><p>There are two main attributes of a datasource: a list of all users and a list of ratings of the current user. getUserId returns the id of the current user. The most important method is getRecord, which returns a vector containing the rating of the object and its attributes. Successive calls of getRecord return all objects rated by the current user. A typical sequence is:</p><p>int userId = data.getUserId();
data.setFixedUserId(userId);
data.restart();
while (data.hasNextRecord()) {
  List&lt;Object&gt; record = data.getRecord();
  // Work with the record ...
}</p><p>Another important method is setLimit, which limits the data using the given boundaries from and to. The random number associated with each vector returned by getRecord has to fall within this interval. If recordsFromRange is false, the random number has to be outside the given interval instead. This method is used when dividing the data into training and testing sets. For example, let us divide the data into an 80% training set and a 20% testing set. First, we call setLimit(0.0, 0.8, true) and let the method train on these data. Then, setLimit(0.0, 0.8, false) is executed and the vectors returned by the datasource are used for testing the method.</p><p>Let us show a sample configuration of a datasource that returns data about notebooks. First, a set of attributes is defined. Every attribute has a name and a type: numerical, nominal or list. An example of a list attribute is the actors of a film; this attribute can be found in the IMDb dataset <ref type="bibr" target="#b16">[17]</ref>.</p><formula xml:id="formula_2">&lt;NotebooksIFSA&gt;</formula><p>Let us also note the select used for obtaining the user ids (section usersSelect) and the name of the column that contains the random number used in setLimit (randomColumn).</p><p>Other types of user preferences. As it is now, PrefWork supports only ratings of objects. There are many more types of data containing user preferences: user clickstreams, user profiles, filtering of the result set, etc.</p><p>Neither does PrefWork work with any information about the user, whether demographic (age, sex, place of birth, occupation, etc.) or behavioural. These types of information may bring a large improvement in prediction accuracy, but they are typically not available: users do not want to share personal information for the sole purpose of a better recommendation. Another issue is the complexity of such user information; semantic processing would have to be used.</p></div>
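The train/test split driven by setLimit and the random column can be sketched as follows. The class and method names here are illustrative, not PrefWork's actual code; only the semantics of the (from, to, recordsFromRange) triple follow the text.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how setLimit selects records via the per-record random
// number (the "randomColumn" of the datasource configuration).
public class RandomSplit {

    // A record belongs to the requested subset when its random number is
    // inside [from, to) for recordsFromRange == true (training pass),
    // and outside that interval for recordsFromRange == false (testing).
    static boolean inRange(double rnd, double from, double to,
                           boolean recordsFromRange) {
        boolean inside = rnd >= from && rnd < to;
        return recordsFromRange ? inside : !inside;
    }

    public static void main(String[] args) {
        // Random numbers that would be stored alongside four ratings.
        double[] randoms = {0.1, 0.45, 0.83, 0.95};
        List<Double> train = new ArrayList<>();
        List<Double> test = new ArrayList<>();
        for (double r : randoms) {
            // setLimit(0.0, 0.8, true)  -> ~80% training records
            if (inRange(r, 0.0, 0.8, true)) train.add(r);
            // setLimit(0.0, 0.8, false) -> remaining ~20% testing records
            if (inRange(r, 0.0, 0.8, false)) test.add(r);
        }
        System.out.println(train.size() + " train, " + test.size() + " test");
    }
}
```

Because the random number is stored with the rating, the same split is reproducible across methods, which keeps the comparison between methods fair.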
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Inductive method</head><p>InductiveMethod is the most important interface: it represents what we want to evaluate. An inductive method has two main methods:</p><p>interface InductiveMethod {
  int buildModel(trainingDataset, userId);
  Double classifyRecord(record, targetAttribute);
}</p><p>buildModel uses the training dataset and the userId to construct a user preference model. Once the model is constructed, the method is tested: it is given records via the method classifyRecord and is supposed to evaluate them.</p><p>Various inductive methods have been implemented. Among the most interesting are our methods Statistical (<ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b14">15]</ref>) and Instances (<ref type="bibr" target="#b14">[15]</ref>), WekaBridge, which allows using any method from Weka (such as a support vector machine), and ILPBridge, which transforms the data into a Prolog program and then uses Progol <ref type="bibr" target="#b18">[19]</ref>. Every method requires a different configuration; only the name of the class is obligatory. Note that the methods based on our two-step user model (Statistical and Instances, for now) can easily be configured to test different heuristics for processing different types of attributes. The configuration contains three sections, numericalNormalizer, nominalNormalizer and listNormalizer, specifying the heuristic for the particular type of attribute. See also Section 4.5 for an example of this configuration.</p></div>
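A minimal inductive method in the spirit of the Mean baseline mentioned in Section 4.5 can illustrate the buildModel/classifyRecord contract: training averages the user's known ratings, and classification predicts that constant for every record. The interface shape follows the paper, but this implementation and its simplified dataset type are an illustrative sketch, not PrefWork's actual code.

```java
import java.util.List;

// Illustrative "Mean" predictor: the simplest possible InductiveMethod.
public class MeanMethod {
    private double meanRating = 0.0;

    // Train: average the rating column over the user's training records.
    // The training dataset is simplified here to a list of ratings.
    public int buildModel(List<Double> trainingRatings, int userId) {
        double sum = 0.0;
        for (double r : trainingRatings) sum += r;
        if (!trainingRatings.isEmpty()) meanRating = sum / trainingRatings.size();
        return trainingRatings.size();
    }

    // Predict: ignore the record's attributes and return the learned mean.
    public Double classifyRecord(List<Object> record, int targetAttribute) {
        return meanRating;
    }

    public static void main(String[] args) {
        MeanMethod m = new MeanMethod();
        m.buildModel(List.of(1.0, 3.0, 5.0), 42);
        System.out.println(m.classifyRecord(List.of(), 2)); // 3.0
    }
}
```

Such a baseline is useful in the CSV results precisely because any method that cannot beat the per-user mean is not learning preferences at all.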
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">Ways of testing the method</head><p>Several ways of testing a method can be defined; the division into training and testing sets is the most typically used. The method is trained on the training set (using buildModel) and then tested on the testing set (using classifyRecord). Another typical way is k-fold cross validation, which divides the data into k sets; in each of the k runs, one set is used as the testing set and the rest as the training set.</p><p>interface Test {
  void test(method, trainDataSource, testDataSource);
}</p><p>When the method has been tested, the results in the form userId, objectId, predictedRating, realUserRating have to be processed. The interpretation is done by a TestResultsInterpreter. The most common is DataMiningStatistics, which computes measures such as correlation, RMSE, weighted RMSE, MAE, the Kendall rank tau coefficient, etc. Others, such as ROC curves or precision-recall statistics, are still waiting to be implemented.</p><p>abstract class TestInterpreter {
  abstract void writeTestResults(testResults);
}</p></div>
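Two of the measures attributed to DataMiningStatistics above, RMSE and MAE, can be sketched over (predictedRating, realUserRating) pairs. The class and method names are illustrative, not PrefWork's actual API; the formulas are the standard definitions.

```java
// Illustrative implementation of two standard error measures over the
// test results: root mean squared error and mean absolute error.
public class ErrorMeasures {

    // RMSE = sqrt( (1/n) * sum_i (predicted_i - real_i)^2 )
    static double rmse(double[] predicted, double[] real) {
        double sum = 0.0;
        for (int i = 0; i < predicted.length; i++) {
            double d = predicted[i] - real[i];
            sum += d * d;
        }
        return Math.sqrt(sum / predicted.length);
    }

    // MAE = (1/n) * sum_i |predicted_i - real_i|
    static double mae(double[] predicted, double[] real) {
        double sum = 0.0;
        for (int i = 0; i < predicted.length; i++) {
            sum += Math.abs(predicted[i] - real[i]);
        }
        return sum / predicted.length;
    }

    public static void main(String[] args) {
        double[] pred = {3.0, 4.0, 2.0};
        double[] real = {3.0, 5.0, 4.0};
        System.out.println(rmse(pred, real) + " " + mae(pred, real));
    }
}
```

RMSE penalises large errors more heavily than MAE, which is why the paper also reports a weighted RMSE: mispredicting a highly rated object matters more to the user than mispredicting a disliked one.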
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5">Configuration parser</head><p>The main class is called ConfigurationParser. First, we specify which methods are to be tested; in our case, these are two variants of Statistical, then Mean and SVM. Note that some attributes of Statistical, which was defined in confMethods, can be "overridden" here. The basic configuration of Statistical is in Section 4.3. Then the datasource for testing the methods is specified; we are using a MySQL database with the datasource NotebooksIFSA. Several datasources or databases can be specified here. Finally, the ways of testing and interpretation are given in the tests section. TestTrain requires the ratio of the training and testing sets, the path where the results are to be written, and the interpretation of the test results. The definitions of runs are in the runs section of confRuns.xml; the specification of the run to be executed is in the run section of the same file.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.6">Results of testing</head><p>Figure <ref type="figure">2</ref> shows a sample of the resulting CSV file. In our example, there are three runs of the method Statistical with the normaliser StandardNorm2CP and three runs with the normaliser Peak. The runs were performed with different settings of the training and testing sets, so the results differ even for the same method.</p><p>The results contain all the information required to generate a graph or a table. The CSV format was chosen for its simplicity and wide acceptance, so almost any other software can handle it. We are currently using Microsoft Excel and its pivot tables, which allow aggregating the results by different criteria. Another possibility is the already mentioned R <ref type="bibr" target="#b2">[3]</ref>.</p><p>Example figures of the output of PrefWork are in Figures <ref type="figure" target="#fig_1">3 and 4</ref>. The lines represent different methods, the X axis represents the size of the training set and the Y axis the value of the error function. In Figure <ref type="figure" target="#fig_1">3</ref> the error function is the Kendall rank tau coefficient (the higher, the better) and in Figure <ref type="figure">4</ref> it is the RMSE weighted by the original rating (the lower, the better). The error function can be chosen, as described in Section 4.4.</p><p>It is impossible to compare PrefWork to another framework in general. A simple comparison to other such systems is in Section 2. This can be done only qualitatively; there is no attribute of frameworks that can be quantified. Users have to choose for themselves the one that suits their needs the most.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.7">External dependencies</head><p>PrefWork depends on some external libraries. Two of them are sources of inductive methods: Weka <ref type="bibr" target="#b0">[1]</ref> and Cofi <ref type="bibr" target="#b9">[10]</ref>. Cofi also requires taste.jar.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>PrefWork has been presented in this paper with a thorough explanation and description of every component. An interested reader should now be able to install PrefWork, run it, and implement a new inductive method or a new datasource. The software can be downloaded at http://www.ksi.mff.cuni.cz/~eckhardt/PrefWork.zip as an Eclipse project containing all Java sources and all required libraries, or obtained as an SVN checkout at <ref type="bibr" target="#b19">[20]</ref>. The SVN archive contains the Java sources and sample configuration files.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Future work</head><p>We plan to introduce a time dimension to PrefWork. The Netflix <ref type="bibr" target="#b20">[21]</ref> dataset uses a timestamp for each rating. This will enable studying the evolution of preferences in time, which is a challenging problem. However, the time dimension can be integrated into PrefWork in several ways, and the right one is yet to be chosen.</p><p>Allowing sources of data other than ratings is a major issue. Clickthrough data can be collected without any effort from the user and can be substantially larger in volume than ratings. However, its integration into PrefWork would require a large reorganisation of the existing methods.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. PrefWork structure.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Tau coefficient.</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">H</forename><surname>Witten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Frank</surname></persName>
		</author>
		<title level="m">Data Mining: Practical Machine Learning Tools and Techniques</title>
				<meeting><address><addrLine>San Francisco</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
	<note>2nd Edition</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Yale: Rapid prototyping for complex data mining tasks</title>
		<author>
			<persName><forename type="first">I</forename><surname>Mierswa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wurst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Klinkenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Scholz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Euler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">KDD&apos;06: Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining</title>
				<editor>
			<persName><forename type="first">L</forename><surname>Ungar</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Craven</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Gunopulos</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Eliassi-Rad</surname></persName>
		</editor>
		<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2006-08">August 2006</date>
			<biblScope unit="page" from="935" to="940" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title/>
		<author>
			<persName><forename type="first">R-Project</forename></persName>
		</author>
		<ptr target="http://www.r-project.org/" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<ptr target="http://www.sas.com/" />
		<title level="m">SAS enterprise miner</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<ptr target="http://www.spss.com/software/modeling/modeler/" />
		<title level="m">SPSS Clementine</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Winston: A data mining assistant</title>
		<author>
			<persName><forename type="first">Š</forename><surname>Pero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Horváth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">To appear in proceedings of RDM 2009</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Implementing example-based tools for preference-based search</title>
		<author>
			<persName><forename type="first">P</forename><surname>Viappiani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Faltings</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICWE&apos;06: Proceedings of the 6th international conference on Web engineering</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="89" to="90" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Preference-based search with adaptive recommendations</title>
		<author>
			<persName><forename type="first">P</forename><surname>Viappiani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Faltings</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI Commun</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page" from="155" to="175" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Preference mining: A novel approach on mining user preferences for personalized applications</title>
		<author>
			<persName><forename type="first">S</forename><surname>Holland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ester</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Kiessling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Knowledge Discovery in Databases: PKDD 2003</title>
				<meeting><address><addrLine>Berlin / Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="204" to="216" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<ptr target="http://www.nongnu.org/cofi/" />
		<title level="m">Cofi: A Java-Based Collaborative Filtering Library</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<ptr target="http://lucene.apache.org/mahout/" />
		<title level="m">Apache Mahout project</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<ptr target="http://taste.sourceforge.net/old.html" />
		<title level="m">Taste project</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Induction of fuzzy and annotated logic programs</title>
		<author>
			<persName><forename type="first">T</forename><surname>Horváth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vojtáš</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ILP06 -Revised Selected papers on Inductive Logic Programming</title>
		<title level="s">Lecture Notes In Computer Science</title>
		<editor>
			<persName><forename type="first">S</forename><surname>Muggleton</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Tamaddoni-Nezhad</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Otero</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer Verlag</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">4455</biblScope>
			<biblScope unit="page" from="260" to="274" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Various aspects of user preference learning and recommender systems</title>
		<author>
			<persName><forename type="first">A</forename><surname>Eckhardt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Richta</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Pokorný</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Snášel</surname></persName>
		</editor>
		<imprint>
			<publisher>Česká technika -nakladatelství ČVUT</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="56" to="67" />
		</imprint>
	</monogr>
	<note>DATESO 2009</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Considering data-mining techniques in user preference learning</title>
		<author>
			<persName><forename type="first">A</forename><surname>Eckhardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vojtáš</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Workshop on Web Information Retrieval Support Systems</title>
				<imprint>
			<date type="published" when="2008">2008. 2008</date>
			<biblScope unit="page" from="33" to="36" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Induction of user preferences in semantic web</title>
		<author>
			<persName><forename type="first">T</forename><surname>Dvořák</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<pubPlace>Czech Republic</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Charles University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Czech. Master Thesis</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<ptr target="http://www.imdb.com/" />
		<title level="m">The Internet Movie Database</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Inductive models of user preferences for semantic web</title>
		<author>
			<persName><forename type="first">A</forename><surname>Eckhardt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Pokorný</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Snášel</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Richta</surname></persName>
		</editor>
		<meeting><address><addrLine>Praha</addrLine></address></meeting>
		<imprint>
			<publisher>Matfyz Press</publisher>
			<date type="published" when="2007">2007. 2007</date>
			<biblScope unit="volume">235</biblScope>
			<biblScope unit="page" from="108" to="119" />
		</imprint>
	</monogr>
	<note>DATESO</note>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Learning from positive data</title>
		<author>
			<persName><forename type="first">S</forename><surname>Muggleton</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1997">1997</date>
			<biblScope unit="page" from="358" to="376" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<ptr target="http://code.google.com/p/prefwork/" />
		<title level="m">PrefWork -a framework for testing methods for user preference learning</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<ptr target="http://www.netflixprize.com" />
		<title level="m">Netflix dataset</title>
				<imprint/>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
