<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Learning from Ordinal Data with Inductive Logic Programming in Description Logic</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nunung</forename><forename type="middle">Nurul</forename><surname>Qomariyah</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">AI Group</orgName>
								<orgName type="institution">Computer Science University of York</orgName>
								<address>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Dimitar</forename><surname>Kazakov</surname></persName>
							<email>dimitar.kazakov@york.ac.uk</email>
							<affiliation key="aff1">
								<orgName type="laboratory">AI Group</orgName>
								<orgName type="institution">Computer Science University of York</orgName>
								<address>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Learning from Ordinal Data with Inductive Logic Programming in Description Logic</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">696B47C0E2E94EFEFAA6CD4AF05EC542</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:09+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Here we describe a Description Logic (DL) based Inductive Logic Programming (ILP) algorithm for learning relations of order. We test our algorithm on the task of learning user preferences from pairwise comparisons. The results have implications for the development of customised recommender systems for e-commerce, and more broadly, wherever DL-based representations of knowledge, such as OWL ontologies, are used. The use of DL makes for easy integration with such data, and produces hypotheses that are easy to interpret by novice users. The proposed algorithm outperforms SVM, Decision Trees and Aleph on data from two domains.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>ILP is now a well-established research area where machine learning meets logic programming <ref type="bibr" target="#b17">[18]</ref>, and it has shown the potential to address many real-world problems <ref type="bibr" target="#b5">[6]</ref>. Description Logics (DLs) are a family of languages representing domain knowledge as descriptions of concepts, roles and individuals. DLs can be considered fragments of First Order Logic (FOL). Because of their well-defined semantics and powerful inference tools, DLs have been the representation of choice for ontologies, including applications such as the Semantic Web. ILP algorithms using a DL representation have the potential to be applied to large volumes of linked open data and to benefit from the tools available for such data, e.g. IDEs such as Protégé, triplestores providing efficient storage and retrieval, and reasoners that can test the data for consistency and infer additional knowledge. As subsets of FOL, DLs use a variable-free syntax, which is claimed to make knowledge base statements easier to read than the corresponding FOL formulae <ref type="bibr" target="#b1">[2]</ref>. Traditionally, ILP algorithms use the Closed World Assumption (CWA), while DLs operate under the Open World Assumption (OWA). Existing attempts to apply ILP to DL data have explored both possibilities, as shown in Section 3.1.</p><p>The chosen application area of Preference Learning (PL) aims to induce predictive preference models from empirical data, e.g. by assigning an absolute rank to the choices available <ref type="bibr" target="#b7">[8]</ref>. 
Online vendors already employ methods, such as collaborative filtering, which make it possible to produce selective lists of the most popular products among users with similar tastes <ref type="bibr" target="#b15">[16]</ref>.</p><p>The automation of PL has now become essential in many e-commerce applications following the current call for customer personalisation. The method of pairwise comparisons, which we focus on, is still relatively new in preference learning, although exceptions exist <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr">19,</ref><ref type="bibr" target="#b13">14]</ref>. Balakrishnan and Chopra <ref type="bibr" target="#b2">[3]</ref> published a study of pairwise preferences using a Bayesian framework. <ref type="bibr">Qian et al. [19]</ref> proposed a similar idea: an approach that learns user preferences by using pairwise comparisons to explore each attribute in turn through "orthogonal queries" and then applying a linear SVM to approximate the preferences. Similarly, Jensen et al. <ref type="bibr" target="#b9">[10]</ref> apply a Gaussian Process regression model to comparisons between music tracks. Another similar work, by Rokach and Kisilevich <ref type="bibr" target="#b19">[20]</ref>, uses lazy decision trees with pairwise comparisons at each node.</p><p>Most studies on pairwise preference learning use statistical machine learning approaches, while in this paper we propose a logic-based approach. There is one other attempt at using ILP for preference learning <ref type="bibr" target="#b13">[14]</ref>, which proposes a framework for learning weak constraints in Answer Set Programming (ASP). We build on previous work combining ILP and DL, namely, the DL-Learner <ref type="bibr" target="#b14">[15]</ref>, which aims to find a correct class description in a specific ontology from a set of positive and negative examples of individuals. 
It builds a hypothesis in the form of a class description, which can contain conjunction, disjunction and existential quantification. DL-Learner itself builds on earlier systems, such as YinYang <ref type="bibr" target="#b8">[9]</ref> and DL-FOIL <ref type="bibr" target="#b4">[5]</ref>. Kietz's <ref type="bibr" target="#b11">[12]</ref> and Konstantopoulos' <ref type="bibr" target="#b12">[13]</ref> work is also relevant in this context. While DL-Learner can only learn class definitions, we aim to learn definitions of relations. Here we focus on the binary relation of order, and in particular, strict order, a relation that is transitive, anti-reflexive and anti-symmetric. Our aim is to learn each user's individual preferences using a DL-based representation, and an algorithm inspired by the Progol/Aleph family of ILP tools. We evaluate our algorithm on two datasets, one on car preferences <ref type="bibr" target="#b0">[1]</ref>, the second on sushi preferences <ref type="bibr" target="#b10">[11]</ref>. Both datasets provide pairwise preference choices of multiple users.</p><p>The rest of the paper is organized as follows: in Section 2 we explain the learning task in terms of both logic programming and DL to make the problem clearer to readers from either background. We then propose an algorithm based on ILP in Section 3 and provide the analysis of the evaluation in Section 4. Finally, we conclude and outline further work in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Problem Representation</head><p>DLs represent the world in terms of concepts, objects and roles. Concepts can be seen as a formal definition of classes in the application domain, e.g. one can define a father as a man who has at least one child. Concepts have two functions: (1) to describe a collection of objects and (2) to describe the properties that the members of a class should hold. Objects are the entities that belong to one concept or another. Roles represent binary relationships between objects, e.g. John is Anna's father. These can also be used to describe object properties, e.g. John's age is 42. In FOL terminology, objects correspond to unique constants (i.e. ground terms), concepts correspond to unary predicates, and roles correspond to binary predicates. All this information is stored in the form of a knowledge base which comprises two components, an assertional part (ABox) and a terminological part (TBox). The TBox is a finite set of General Concept Inclusions (GCI) and role inclusions. It corresponds to the notion of a schema in a relational database. The assertional set in the knowledge base is called the ABox. It is used to state properties of individuals. In a relational database, this is the actual data, while in FOL it corresponds to facts (i.e. ground atoms).</p><p>The knowledge representation in DL can be expressed using the Resource Description Framework (RDF), which is one of the most powerful general-purpose knowledge representation languages. The syntax of RDF is based on triples in the form of subject-predicate-object expressions. An RDF triple is a statement expressing a relation ("predicate") between the object in its first argument, and the object or literal in its third argument (e.g. "Anna - hasfather - John"). 
In this section, we use Turtle<ref type="foot" target="#foot_0">1</ref> syntax to describe RDF data.<ref type="foot" target="#foot_1">2</ref> The preference learning problem is defined as follows:</p><p>Given: a set of individuals, a class hierarchy, a mapping from individuals to classes, and a set of preferences represented as pairs of individuals, where the first individual is preferred over (strictly better than) the second, Find: a disjunction of axioms defining the domain and range of the relation betterthan (where each axiom is expressed as a conjunction of classes) that is complete and consistent with the given preferences. This is a supervised learning problem, as the user assigns one of two possible labels to each pair of items after considering their attributes. We then use these labels to search for a definition of the Domain and Range class memberships that render the relation true.</p><p>We propose an approach to learn relations and apply the resulting system to learn strict order (i.e. better than). A strict order is anti-symmetric: if item A is better than item B, then item B cannot be better than item A, unless A=B. That special case is excluded by the additional assumption that betterthan is anti-reflexive (i.e. no item can be seen as better than itself). The relation is also transitive: whenever item A is better than item B, and item B is better than item C, then item A is better than item C. The ontology reasoner can always be used to extract all pairs of items satisfying the preference relation as a result of applying its transitivity property. We add these inferred examples to the set of positive examples so that the complete closure is used by the learning algorithm. We also use the anti-symmetry property to produce all negative examples as a 'mirror image' of the positive examples (after completing their transitive closure), i.e. all pairs derived from a positive example by swapping the first and second element of the pair. The representation of a sample preference learning problem in DL is shown in Figure <ref type="figure" target="#fig_1">1</ref>. The class hierarchy is given to the system as an input. We evaluate our algorithm using a simple class hierarchy, as shown in Figure <ref type="figure" target="#fig_1">1a</ref>, in order to make the solutions comparable to the Aleph representation.</p></div>
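The transitive closure of the positive examples and the 'mirror image' negatives described in this section can be sketched as follows. This is a minimal illustration using plain Python pairs of item IDs rather than the ontology reasoner the paper relies on; the helper names are ours.

```python
from itertools import product

def preference_closure(prefs):
    """Complete a set of (better, worse) pairs under transitivity.
    prefs: set of (a, b) tuples meaning 'a is better than b'."""
    closed = set(prefs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closed), repeat=2):
            # a > b and b > d implies a > d (skip reflexive pairs)
            if b == c and a != d and (a, d) not in closed:
                closed.add((a, d))
                changed = True
    return closed

def mirror_negatives(positives):
    """Anti-symmetry: every positive pair, swapped, is a negative example."""
    return {(b, a) for (a, b) in positives}

pos = preference_closure({("car1", "car3"), ("car3", "car5")})
neg = mirror_negatives(pos)
# pos now also contains the inferred pair ("car1", "car5")
```

In practice these inferred pairs are produced by the reasoner from the declared transitivity of betterthan; the sketch only shows the effect on the example set.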
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Hypothesis Language</head><p>The aim of ILP is to find a theory that is complete (it covers all the given positive examples) and consistent (it covers no negative examples). Since our algorithm learns from RDF data, we build a hypothesis for the object property we want to learn, in this case the property betterthan. The fact that betterthan is an object property is described in Turtle as shown below: myontology:betterthan rdf:type owl:ObjectProperty.</p><p>Each possible hypothesis about this relation is then described as a pair of class definitions, which specify the membership of the domain D and the range R of the relation. The same hypothesis language can be described in Aleph notation as the following mode declarations:</p><p>:-modeh(1,betterthan(+car,+car)). :-modeb(1,hasbodytype(+car,#bodytype)). :-modeb(1,hasfuelcons(+car,#fuelconsumption)). :-modeb(1,hastransmission(+car,#transmission)). :-modeb(1,hasenginesize(+car,#enginesize)).</p></div>
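A hypothesis of this form can be checked against a preference pair with a simple subset test. The following is a minimal sketch under our own simplified representation: a hypothesis is a pair of class-name sets (domain, range), and class memberships are plain Python sets rather than OWL axioms.

```python
def covers(hypothesis, memberships, pair):
    """A (domain, range) hypothesis covers a preference pair if the first
    item belongs to all domain classes and the second to all range classes."""
    domain, range_ = hypothesis
    first, second = pair
    return domain <= memberships[first] and range_ <= memberships[second]

# Class memberships for two of the cars in the running example.
memberships = {
    "car1": {"Car", "Suv", "Manual", "NonHybrid", "SmallCar"},
    "car3": {"Car", "Sedan", "Manual", "NonHybrid", "MediumCar"},
}
h = ({"Manual"}, {"Sedan"})  # the hypothesis "Manual betterthan Sedan"
covers(h, memberships, ("car1", "car3"))
```

Here `covers(h, memberships, ("car1", "car3"))` holds, while the swapped pair is not covered, since car1 is not a Sedan.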
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Background Knowledge</head><p>Our algorithm represents background knowledge through classes and their membership, as shown in the example below (which uses Turtle syntax). </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Examples</head><p>In our algorithm, the set of positive examples is defined by the user's stated preferences together with a pre-processing step that computes their transitive closure; the resulting pairs are then reversed to produce all negative examples. The following Turtle code expresses that car1 is better than car3, car4, car5, and car10: myontology:car1 myontology:betterthan myontology:car3 , myontology:car4 , myontology:car5 , myontology:car10 .</p><p>In traditional ILP syntax, the same examples are represented as ground facts of the predicate betterthan/2, where the arguments are of type car. So, the positive examples in Aleph are written as: betterthan(car1,car3), and the negative, obtained by their reversal, as :-betterthan(car3,car1).</p></div>
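The translation from preference pairs to Aleph's example syntax is a purely mechanical rendering step, sketched below (the helper name is ours; only the output format follows the text).

```python
def aleph_examples(positives):
    """Render (better, worse) pairs as Aleph ground facts; the negatives
    are the mirrored pairs written as denials."""
    pos = [f"betterthan({a},{b})." for a, b in sorted(positives)]
    neg = [f":-betterthan({b},{a})." for a, b in sorted(positives)]
    return pos, neg

pos_facts, neg_facts = aleph_examples({("car1", "car3")})
# pos_facts == ["betterthan(car1,car3)."]
# neg_facts == [":-betterthan(car3,car1)."]
```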
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Proposed Algorithm</head><p>We propose an algorithm called APARELL (Active<ref type="foot" target="#foot_2">3</ref> Pairwise Relation Learner) which learns the relation of strict order in DL. We implement our algorithm in Java using the OWL API<ref type="foot" target="#foot_3">4</ref>  <ref type="bibr" target="#b6">[7]</ref> and RDF4J API<ref type="foot" target="#foot_4">5</ref> to handle DL. The algorithm is shown in Algorithm 1. We follow the four basic steps used in the Progol/Aleph greedy learning approach:</p><p>1. Select a positive example. Each instance of the relation can be seen as a pair of object IDs.</p><p>2. Build the bottom clause. The bottom clause is constructed from the conjunction of all non-disjoint classes of which each individual in the pair is a member.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3.</head><p>Search. This step finds all generalisations of the bottom clause of the same, lowest possible, length that are consistent with the data. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>4.</head><p>Remove covered positive examples. Our algorithm is greedy in its treatment of positive training examples. We remove all covered positive examples once all clauses found by the search step are added to the current theory.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Search and Refinement Operator</head><p>We use a top-down approach similar to the one in Progol/Aleph, starting from the shortest clause possible. Note that for this specific class of problems, namely, learning strict order, the most general, unconstrained definition, stating that for any pair of objects the first is better than the second, is true in exactly 50% of the cases for every pair ⟨x, y⟩, x ≠ y, since wherever it is true, the relation will not hold for the pair ⟨y, x⟩. It will also always be false for pairs of the form ⟨x, x⟩. This is why the algorithm proceeds by considering an increasing, and strictly positive, number of properties (literals) constraining either object in the relation. The bottom clause contains the conjunction of n constraints (of type class membership) on the Domain side and the same number of constraints again on the Range side of the relation. We evaluate all combinations of constraints, except those that imply the same class membership of both arguments (i.e. X is better than Y because they both share the same property/class membership) or those that have already been considered. An example of the refinement operator steps, starting from a positive example car1 is better than car3, is illustrated in Figure <ref type="figure" target="#fig_3">2</ref> (please see Section 4.1 for a description of the dataset of car preferences).</p><p>We express the coverage of a clause through P and N, where P is the number of positive examples covered and N the number of negative examples covered. When a solution has the same score as another alternative, Aleph will only return the first solution found. In our algorithm, we consider all new clauses that cover at least two positive examples and none of the negatives. (This is done to ensure consistency, and that at least a minimum level of generalisation takes place.) 
The search will not stop until all the possible combinations at each level have been considered. The resulting theory is a disjunction of clauses, so any test example covered by one of them is classified as positive. Our algorithm retains all consistent clauses generalised from a given positive example, rather than just one of them, as Aleph would have done. (At the same time, both algorithms discard the positive example from the training set after this generalisation step, in a greedy learning manner.) This is the most important difference between the two algorithms and a possible explanation for any observed difference in their performance.</p><p>If a consistent clause has not been found yet, the algorithm continues to refine the current candidate by adding one literal to constrain either object in the relation. Similarly to Aleph, when APARELL cannot find a consistent generalisation, it adds the bottom clause itself to the theory.</p><p>We implement our algorithm using the Closed World Assumption (CWA). For the problem of learning strict order, it makes virtually no difference whether our system operates under the CWA or OWA. Under the OWA, we learn two hypotheses: one as mentioned before, the other with the positive and negative examples swapped. In the given domain (of learning strict order), the second hypothesis (i.e. the one learned with the positive and negative examples swapped) is almost the exact negation of the first (modulo the choice of a training sample). The resulting coverage is therefore almost identical to the one of the CWA hypothesis.</p><formula xml:id="formula_0">x'1 ⊆ x1, x'2 ⊆ x2, |x'1| + |x'2| = clause_size</formula></div>
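The level-wise search over generalisations of the bottom clause can be sketched as follows. This is a minimal illustration under our own simplified representation: a clause is a pair of class-name sets, memberships are plain sets, and the consistency threshold (at least two covered positives, zero negatives) follows the text; the function names are ours.

```python
from itertools import combinations

def covers(dom, rng, memberships, pair):
    """A clause covers a pair if its domain constraints hold for the first
    item and its range constraints hold for the second."""
    a, b = pair
    return dom <= memberships[a] and rng <= memberships[b]

def generalise(bottom_dom, bottom_rng, memberships, pos, neg, max_len=4):
    """Search subsets of the bottom clause, shortest total length first.
    Return all clauses of the minimal length that cover at least two
    positive examples and no negative ones."""
    for size in range(2, max_len + 1):
        found = []
        for k in range(1, size):  # k constraints on the Domain side
            for dom in combinations(sorted(bottom_dom), k):
                for rng in combinations(sorted(bottom_rng), size - k):
                    if set(dom) == set(rng):
                        continue  # same class membership on both sides
                    p = sum(covers(set(dom), set(rng), memberships, e) for e in pos)
                    n = sum(covers(set(dom), set(rng), memberships, e) for e in neg)
                    if p >= 2 and n == 0:
                        found.append((dom, rng))
        if found:
            return found  # all consistent clauses of the minimal length
    return []

# Toy data: manual cars consistently preferred over automatics.
memberships = {
    "car1": {"Car", "Manual"}, "car2": {"Car", "Automatic"},
    "car5": {"Car", "Manual"}, "car6": {"Car", "Automatic"},
}
pos = [("car1", "car2"), ("car5", "car6")]
neg = [(b, a) for a, b in pos]  # mirror image of the positives
clauses = generalise({"Car", "Manual"}, {"Car", "Automatic"}, memberships, pos, neg)
```

On this toy input the search already succeeds at length 2, returning every consistent two-literal clause, including Manual betterthan Automatic; unlike Aleph, all of them are kept.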
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Search in a Complex Class Hierarchy</head><p>In the case of a more complex class hierarchy that consists of more than one level, we consider the hypothesis search over all the inferred class memberships of each individual. For example, if the class hierarchy and its membership in Figure <ref type="figure" target="#fig_5">3</ref> are given to the system, the algorithm progresses as follows:</p><p>1. Select a positive example. As an example, we choose car 1 is better than car 3 to be generalised. Please see Table <ref type="table" target="#tab_2">1</ref> for the properties of the cars.</p><p>2. Build the bottom clause. In this step, we use the reasoner to infer all the classes which include car 1 and car 3 as direct or indirect members. The bottom clause of the above example is shown as follows:</p><p>(Car and EconomyCar and FamilyCar and Manual and NonHybrid and SmallCar and Suv) betterthan (Car and Manual and MediumCar and NonHybrid and Sedan)</p><p>The above bottom clause is only used to guide the refinement search, as SmallCar ⊑ EconomyCar ⊑ Car. We do not consider clauses referring to the membership of the intersection between a class and its superclass, as this will be the same as the membership of the class itself. For example, the membership of SmallCar ⊓ EconomyCar is {car1,car8}, which is the same as the membership of SmallCar itself.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3.</head><p>Search. The search begins from the shortest clause length considered (2 literals), as shown below:</p><p>Car betterthan Manual Car betterthan MediumCar Car betterthan NonHybrid Car betterthan Sedan EconomyCar betterthan Car EconomyCar betterthan Manual ...</p><p>In the case that we have not found any consistent hypothesis of length 2, this parameter is increased to 3, etc.:</p><p>EconomyCar and FamilyCar betterthan Car EconomyCar and FamilyCar betterthan Manual ...</p><p>The search in our algorithm follows breadth-first search with limited depth (specified as the l parameter). Another property of our proposed algorithm is that, unlike Aleph, we do not provide a single consistent clause covering a given positive example, but add to the resulting theory all such clauses of the minimal possible length. While this can produce a more accurate result, the learning process takes longer than in Aleph. For example, given 45 training examples in which each item has 4 attributes, our algorithm takes 409 ms to finish, while Aleph is faster at just 88 ms.</p></div>
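The inferred class memberships used to build the bottom clause in a multi-level hierarchy can be sketched as follows. This is a simplified illustration assuming a single-parent tree stored as a child-to-parent map; the real system obtains the same information from the OWL reasoner, and the hierarchy fragment below is hypothetical.

```python
def ancestors(cls, parent):
    """All direct and indirect superclasses of cls, given a
    child -> parent map (single-parent tree assumed for simplicity)."""
    out = set()
    while cls in parent:
        cls = parent[cls]
        out.add(cls)
    return out

# Hypothetical fragment of the car class hierarchy.
parent = {"SmallCar": "EconomyCar", "EconomyCar": "Car",
          "MediumCar": "FamilyCar", "FamilyCar": "Car"}

def inferred_memberships(direct):
    """Bottom-clause classes: the direct memberships of an individual
    plus every inferred superclass."""
    full = set(direct)
    for c in direct:
        full |= ancestors(c, parent)
    return full
```

For instance, an individual asserted to be a SmallCar and Manual is inferred to also belong to EconomyCar and Car, which is exactly the set of classes the bottom clause conjoins.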
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Evaluation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Datasets</head><p>We use two publicly available preference datasets <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b10">11]</ref>. Both the sushi and the car datasets have 10 items to rank, which leads to 45 preference pairs per user. We take 60 users from each dataset and perform 10-fold cross validation for each user's individual preferences. The car dataset has 4 attributes: body type, transmission, fuel consumption and engine size, as shown in Table <ref type="table" target="#tab_2">1</ref>. Despite the difference in the number of attributes in the two datasets, we found that a maximum clause length of 4 (in Aleph and in our algorithm) is sufficient to produce consistent hypotheses.</p><p>The second dataset used in the experiments concerns sushi preferences<ref type="foot" target="#foot_5">6</ref> and has been published by Kamishima <ref type="bibr" target="#b10">[11]</ref>. The dataset contains individual user preferences over 10 different sushi types. The users were asked to sort the sushi types according to their preferences in ascending order. For the experiment, we generate the pairwise order from each set of individual preferences. As with the car dataset, 45 pairs of sushi preferences are produced for the 10 types of sushi (which represents all possible pairs). In this dataset, 5000 users were involved in the survey. The sushi preference dataset is thus considerably larger than the car preference dataset, and each type of sushi is described by 7 attributes. With a dataset of this size, we can test the performance of our algorithm and evaluate how it scales.</p></div>
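The expansion of a full ranking into its 45 preference pairs can be sketched as follows (item names are placeholders; the helper name is ours).

```python
from itertools import combinations

def ranking_to_pairs(ranking):
    """Turn a full ranking (best item first) into all pairwise
    preferences (better, worse)."""
    return [(a, b) for a, b in combinations(ranking, 2)]

# 10 ranked items yield C(10, 2) = 45 ordered preference pairs.
pairs = ranking_to_pairs([f"sushi{i}" for i in range(1, 11)])
```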
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Evaluation Method</head><p>The goal of this evaluation is to assess the predictive accuracy of our algorithm on the preference learning problem. To do this, we set up four experiments, as below:</p><p>1. Assess the accuracy of our algorithm compared to the three baseline algorithms, SVM, Aleph and DT, on 60 users from the car dataset and the sushi dataset. Here, we also perform an ANOVA test and a post-hoc test to assess which algorithms differ significantly.</p><p>2. Assess the accuracy and the performance of our algorithm on a larger dataset by conducting an experiment on 5000 users of the sushi dataset.</p><p>3. Assess the potential of our algorithm to learn relations from a more complex class hierarchy by using the car dataset.</p><p>4. Assess the accuracy of our algorithm on training examples of different size, and compare it to the three baseline algorithms.</p><p>Accuracy on 60 users.</p><p>We compare our algorithm with three other machine learning algorithms: SVM, the Matlab CART Decision Tree (DT) learner, and Aleph. SVM is a very common statistical classification algorithm, used in many domains. Previous work on pairwise preference learning by <ref type="bibr">Qian et al. [19]</ref> showed that SVM can also be used to learn in this domain. Both DT and Aleph are learners expressing their models in logic: the former uses propositional logic and the latter (a subset of) First Order Logic (namely, Horn clauses). We build a simple class hierarchy, as explained in Section 2, for each dataset. We learn from each set of individual preferences and evaluate the model using 10-fold cross validation. We repeat the same test for all users, then calculate the average accuracy. The result is shown in Table <ref type="table" target="#tab_3">2</ref>. In this experiment, the search stopped at a maximum length of 4 literals (the same as Aleph's default clause length of 4). 
According to the ANOVA test with α = 0.05, there is a significant difference amongst the algorithms, with a p-value of 1.14 × 10⁻²¹ for the car dataset and 2.97 × 10⁻³ for the sushi dataset. ANOVA is conceptually similar to multiple two-sample t-tests, but is more conservative (it results in fewer Type I errors). After performing the ANOVA test, we find which algorithms are significantly different by using Fisher's Least Significant Difference (LSD). The result of this post-hoc test is shown in Table <ref type="table">3</ref>. Please note that this 10-fold cross validation may have introduced a bias in terms of absolute performance levels, because the use of transitivity chains (closures) may have created an overlap between training and test data. However, the same training and test data have been used for all algorithms, so the results of their comparison remain unaffected (are not biased). In this experiment, our algorithm again shows the highest accuracy of the four algorithms. The average accuracy on individual preferences from the 5000 users of the sushi dataset is shown in Table <ref type="table">4</ref>. In the first experiment, we evaluated the mean difference amongst the algorithms using ANOVA and a post-hoc test. Here, the p-value of the ANOVA test (α = 0.05) is 0. This is common for large datasets: statistical tests of mean differences on very large samples can be problematic, as the p-value shrinks towards zero with growing sample size. As mentioned in <ref type="bibr" target="#b1">[2]</ref>, by using a DL representation, with its variable-free syntax, our algorithm can produce a set of solutions which is more readable than the corresponding First Order Logic representation. 
An example of a consistent hypothesis found by our algorithm is shown below:</p><p>(Manual) betterthan (Sedan) (Automatic and Hybrid) betterthan (MediumCar and Suv)</p><p>The above rules simply say: "a manual car is better than a sedan, OR an automatic hybrid car is better than a medium-size SUV". In comparison, Aleph produces rules such as: betterthan(A,B) :-hasenginesize(B,large), hasenginesize(A,small). betterthan(A,B) :-hasfuelcons(B,nonhybrid), hasbodytype(B,sedan).</p><p>Aleph's first rule states that "car A is better than car B if car B has a large engine size and car A has a small engine size". The second rule says that "car A is better than car B if car B is a non-hybrid sedan". In other words, any other car is considered better than a non-hybrid sedan.</p></div>
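The ANOVA comparison used in this section can be sketched as follows. This is a minimal stdlib computation of the F statistic only; the reported p-values additionally require the CDF of the F distribution (e.g. from a statistics package), and the group values below are toy numbers, not the paper's results.

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over per-algorithm accuracy samples."""
    k = len(groups)                          # number of groups (algorithms)
    n = sum(len(g) for g in groups)          # total number of observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

one_way_anova_F([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])  # → 1.5
```

A large F (relative to the F(k−1, n−k) distribution) corresponds to the small p-values reported above; the post-hoc LSD test then localises which pairs of algorithms differ.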
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion and Further Work</head><p>In this paper, we have shown that the implementation of ILP in DL can be used to learn a user's preferences from pairwise comparisons. In fact, the proposed algorithm has proven statistically significantly better than all tested alternatives in all but one case. The exception is the comparison with SVM on the car dataset, where our algorithm achieves a seemingly higher mean accuracy, but the difference is not statistically significant (in other words, it is a draw).</p><p>We are currently working to address some of the limitations of the algorithm. In terms of accuracy, our algorithm outperformed the other algorithms; however, it takes much longer than each of the three baselines to produce the final theory. Currently, the only way to restrict the search complexity is by limiting its depth. We aim to improve this aspect without sacrificing accuracy. Possible alternatives include the use of heuristic functions to predict the cost of expanding a node (i.e. informed search). We are also working on expanding the refinement operator by allowing the use of different operators (e.g. union or negation), with the aim of improving the accuracy.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>(a) A simple car class hierarchy (b) The user annotation is translated into the object property "betterthan".</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Problem representation</figDesc><graphic coords="3,64.80,157.90,185.50,164.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Refinement operator search graph</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Algorithm 1 APARELL 1 :</head><label>11</label><figDesc>Input : knowledge B, a set of positive examples E + = { ⟨e 1 , e 2 ⟩, . . . , ⟨e n , e m ⟩ }, a set of negative examples E − = { ⟨e 2 , e 1 ⟩, . . . , ⟨e m , e n ⟩ }, maximum number of literals (depth limit) l, a relation name r = betterthan 2: Output : A theory T represented as a set of clauses, each defining a pair of concepts specifying the relation's domain and range 3: set T = ∅ /* initialization */ 4: set x 1 = ∅ /* the list of constraints on Domain */ 5: set x 2 = ∅ /* the list of constraints on Range */ 6: set flag = false /* whether a consistent generalisation of the bottom clause has been found */ 7: while E + is not empty do 8: set clause size = 2 /* initial value for |x 1 | + |x 2 | */ 9: set x 1 = ∅, x 2 = ∅, flag = false /* reset in every loop */ 10: select e + ∈ E + to be generalised, and define ⟨e 1 , e 2 ⟩ = e + 11: /* build the bottom clause ⟨x 1 , x 2 ⟩ for e + as follows: */ 12: set x 1 = {C 1 , . . . , C m } ∀C i such that e 1 ∈ C i 13: set x 2 = {D 1 , . . . , D n } ∀D i such that e 2 ∈ D i 14: while flag == false and clause size &lt; l do /* search (top-down) through all generalisations of the bottom clause */ 15:</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: An example of a car class hierarchy with more than one level, used for scoring the hypothesis coverage</figDesc><graphic coords="8,177.80,54.07,260.00,218.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>4 .</head><label>4</label><figDesc>Assess the accuracy of our algorithm on training sets of different sizes, and compare it to the three baseline algorithms.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 4 :</head><label>4</label><figDesc>Accuracy by varying the number of training examples. Sample solutions found.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>which uses Turtle syntax):</figDesc><table><row><cell cols="3">myontology:Sedan rdf:type owl:Class ;</cell></row><row><cell></cell><cell cols="2">rdfs:subClassOf myontology:Car ;</cell></row><row><cell></cell><cell cols="2">owl:disjointWith myontology:Suv .</cell></row><row><cell cols="3">myontology:car1 rdf:type owl:NamedIndividual ,</cell></row><row><cell></cell><cell cols="2">myontology:Car ,</cell></row><row><cell></cell><cell cols="2">myontology:Manual ,</cell></row><row><cell></cell><cell cols="2">myontology:NonHybrid ,</cell></row><row><cell></cell><cell cols="2">myontology:SmallCar ,</cell></row><row><cell></cell><cell cols="2">myontology:Suv .</cell></row><row><cell cols="3">The same background knowledge can be spelt out in Aleph as follows:</cell></row><row><cell>car(car1).</cell><cell cols="2">%type predicate</cell></row><row><cell cols="3">bodytype(sedan). %type predicate</cell></row><row><cell cols="2">hasbodytype(car1,suv).</cell><cell>%bk</cell></row><row><cell cols="3">hasfuelcons(car1,nonhybrid). %bk</cell></row><row><cell cols="3">hastransmission(car1,manual). %bk</cell></row><row><cell cols="2">hasenginesize(car1, small).</cell><cell>%bk</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1 :</head><label>1</label><figDesc>Car Dataset -Item Descriptions</figDesc><table><row><cell cols="4">Car ID Bodytype Transmission Fuel</cell><cell>Engine size</cell></row><row><cell>1</cell><cell>suv</cell><cell>manual</cell><cell cols="2">non-hybrid small</cell></row><row><cell>2</cell><cell>sedan</cell><cell>automatic</cell><cell>hybrid</cell><cell>large</cell></row><row><cell>3</cell><cell>sedan</cell><cell>manual</cell><cell cols="2">non-hybrid medium</cell></row><row><cell>4</cell><cell>sedan</cell><cell>manual</cell><cell cols="2">non-hybrid large</cell></row><row><cell>5</cell><cell>suv</cell><cell>manual</cell><cell cols="2">non-hybrid medium</cell></row><row><cell>6</cell><cell>suv</cell><cell>automatic</cell><cell>hybrid</cell><cell>medium</cell></row><row><cell>7</cell><cell>sedan</cell><cell>automatic</cell><cell>hybrid</cell><cell>medium</cell></row><row><cell>8</cell><cell>suv</cell><cell>automatic</cell><cell>hybrid</cell><cell>small</cell></row><row><cell>9</cell><cell>sedan</cell><cell>automatic</cell><cell cols="2">non-hybrid medium</cell></row><row><cell>10</cell><cell>suv</cell><cell>automatic</cell><cell cols="2">non-hybrid medium</cell></row><row><cell cols="5">validation for each user's individual preferences. The car dataset has 4 attributes: body type, transmission, fuel</cell></row><row><cell cols="3">consumption and engine size [1], as shown in Table</cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2 :</head><label>2</label><figDesc>Mean and standard deviation of 10-fold cross validation test</figDesc><table><row><cell></cell><cell>SVM</cell><cell>DT</cell><cell>Aleph</cell><cell>Our algorithm</cell></row><row><cell>car dataset</cell><cell cols="4">0.8264±0.12 0.7470±0.10 0.7292±0.08 0.8456±0.06</cell></row><row><cell cols="5">sushi dataset 0.7604±0.09 0.7869±0.07 0.7789±0.06 0.8138±0.07</cell></row><row><cell>Accuracy on 5000 users.</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.w3.org/TR/turtle/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Turtle stands for Terse RDF Triple Language.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">The active learning algorithm feature has not been described or used in this work.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">http://owlapi.sourceforge.net/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">http://rdf4j.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">http://www.kamishima.net/sushi/</note>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>It should be noted that the algorithm skips the evaluation of comparisons between identical values, such as Car betterthan Car, in order to speed up the search. 4. Remove covered positive examples. We count the coverage of each hypothesis we build and remove the covered positive examples from the training dataset.</p></div>
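The covering loop sketched above (generalise a seed example, then remove every covered positive, or record an exception when no consistent generalisation exists) can be illustrated as follows. This is a minimal Python sketch, not the authors' Java implementation; `find_consistent_clause` and the set-based coverage test are simplified stand-ins.

```python
# Illustrative sketch of the sequential covering loop.
# An example is a pair (c1, c2) of class-membership sets; a clause is a
# pair (domain, range) of class conjunctions. All names are hypothetical.

def covers(clause, example):
    """A clause covers a pair iff the first item belongs to every domain
    class and the second item belongs to every range class."""
    (dom, rng), (c1, c2) = clause, example
    return dom <= c1 and rng <= c2

def learn(positives, find_consistent_clause):
    theory, remaining = [], list(positives)
    while remaining:
        seed = remaining[0]
        clause = find_consistent_clause(seed)  # assumed to cover its seed
        if clause is None:
            # no consistent generalisation: add an exception and
            # remove only this one example
            theory.append(("exception", seed))
            remaining.remove(seed)
        else:
            # a consistent clause may cover many positives at once,
            # which shrinks the remaining search quickly
            theory.append(("clause", clause))
            remaining = [e for e in remaining if not covers(clause, e)]
    return theory
```

This also illustrates the performance behaviour discussed later: when consistent clauses are found, many positives disappear per iteration; when only exceptions are added, the loop advances one example at a time.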
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Current State</head><p>Our algorithm can handle a multi-level class hierarchy, but, in order to limit the search, we only allow conjunctions of literals in the clauses, effectively limiting the language to the EL description logic. Even with this limitation, we can still produce accurate models for our test cases, and a result in a form that is easy to interpret. The most expensive step is checking class membership (using the ontology reasoner) for every possible hypothesis. As an additional experiment, we evaluate the car dataset with the complex class hierarchy (see Figure <ref type="figure">3</ref>) using a literal depth limit of 4. The accuracy of our algorithm is assessed using 10-fold cross-validation. The experiment with the complex class hierarchy yields a mean accuracy of 0.8409 with a standard deviation of 0.1405. This result shows no difference between using a simple class hierarchy and a complex class hierarchy.</p></div>
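Because hypotheses are restricted to conjunctions of classes, once the reasoner has materialised each individual's class memberships, scoring a hypothesis reduces to subset tests over pairs. A small illustrative sketch (the `memberships` dict stands in for the reasoner's output; `car1`'s classes follow the Turtle example earlier, while `car2` is a hypothetical second item):

```python
# Sketch: scoring an EL-style hypothesis "Domain betterthan Range"
# against pairwise examples, given materialised class memberships.

memberships = {
    "car1": {"Car", "Suv", "Manual", "NonHybrid", "SmallCar"},
    "car2": {"Car", "Sedan", "Automatic", "Hybrid", "LargeCar"},
}

def coverage(domain, range_, pairs):
    """Count pairs (a, b) where a is in every domain class and
    b is in every range class."""
    return sum(
        1 for a, b in pairs
        if domain <= memberships[a] and range_ <= memberships[b]
    )

pos = [("car2", "car1")]  # car2 betterthan car1
neg = [("car1", "car2")]  # the reversed pair
# The hypothesis "Hybrid betterthan NonHybrid" covers the positive
# pair and none of the negatives, so it is consistent here.
print(coverage({"Hybrid"}, {"NonHybrid"}, pos))  # 1
print(coverage({"Hybrid"}, {"NonHybrid"}, neg))  # 0
```

In the real system each subset test is a reasoner query over the ontology, which is why this check dominates the running time.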
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Accuracy for different training set sizes.</head><p>We perform several experiments with the algorithms by varying the proportion of training examples and testing on 10% of the examples. For a more robust result, we validate each cycle with 10-fold cross-validation. The results of these experiments are shown in Figure <ref type="figure">4</ref>. Our algorithm still performs better even with the smallest number of training examples.</p></div>
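The evaluation protocol above (hold out 10% for testing, vary the fraction of the remainder used for training, and average over repeated folds) can be sketched generically. This is an illustrative learning-curve harness, not the paper's exact experimental code; `fit` and `accuracy` are hypothetical stand-ins for any learner and scorer.

```python
import random

def learning_curve(examples, fit, accuracy, fractions, folds=10):
    """For each training fraction, hold out 10% of the data for testing,
    train on the given fraction of the rest, and average accuracy over
    `folds` seeded resamplings."""
    results = {}
    for frac in fractions:
        scores = []
        for fold in range(folds):
            rng = random.Random(fold)          # deterministic per fold
            data = examples[:]
            rng.shuffle(data)
            n_test = max(1, len(data) // 10)   # 10% test split
            test, pool = data[:n_test], data[n_test:]
            train = pool[: int(len(pool) * frac)]
            model = fit(train)
            scores.append(accuracy(model, test))
        results[frac] = sum(scores) / len(scores)
    return results
```

Plotting `results` for each algorithm over the same fractions yields a comparison like the one in Figure 4.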
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm performance.</head><p>We run another test to examine the algorithm's performance under different clause length settings (see Table <ref type="table">5</ref>). The evaluation is performed on both datasets using a 90:10% training-test split. We also record the algorithm's execution times over 60 users × 90% of examples (2,382 total examples in the car dataset and 2,400 total examples in the sushi dataset; note that some of the positive examples are removed during the search). The algorithm was executed in the Eclipse IDE on Java 8, on a machine with a 2.9 GHz Intel Core i5 processor and 8 GB of 1867 MHz DDR3 memory. The results show no significant accuracy improvement beyond 4 literals. Surprisingly, the algorithm runs very slowly at 2 literals on the car dataset (where the achieved accuracy is the lowest of all tested clause lengths). The reason lies in Step 4, i.e. the removal of covered positive examples (see Section 3). If we cannot find any consistent hypothesis when generalising a positive example, we add an exception and remove only that one example. But when the algorithm can find consistent hypotheses, it removes more than one example, leaving far fewer positive examples and thus speeding up the search. This anomaly does not occur in the sushi dataset: its larger number of attributes makes it more likely that a consistent hypothesis of clause length 2 can be found.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Learning Community-based Preferences via Dirichlet Process Mixtures of Gaussian Processes</title>
		<author>
			<persName><forename type="first">E</forename><surname>Abbasnejad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sanner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">V</forename><surname>Bonilla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Poupart</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI)</title>
				<meeting>the 23rd International Joint Conference on Artificial Intelligence (IJCAI)</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Description Logics</title>
		<author>
			<persName><forename type="first">F</forename><surname>Baader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Horrocks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Sattler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Knowledge Representation</title>
				<meeting><address><addrLine>Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="135" to="179" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Two of a kind or the ratings game? Adaptive pairwise preferences and latent factor models</title>
		<author>
			<persName><forename type="first">S</forename><surname>Balakrishnan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chopra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE 10th International Conference on Data Mining (ICDM)</title>
				<meeting>IEEE 10th International Conference on Data Mining (ICDM)</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="725" to="730" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The rank analysis of incomplete block designs. I. The method of paired comparisons</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Bradley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Terry</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Biometrika</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="issue">3-4</biblScope>
			<biblScope unit="page" from="324" to="345" />
			<date type="published" when="1952">1952</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">DL-FOIL Concept Learning in Description Logics</title>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>D'Amato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Esposito</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Inductive Logic Programming</title>
		<title level="s">ser. LNCS</title>
		<meeting>Inductive Logic Programming<address><addrLine>Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">5194</biblScope>
			<biblScope unit="page" from="107" to="121" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Inductive Programming Meets the Real World</title>
		<author>
			<persName><forename type="first">S</forename><surname>Gulwani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hernández-Orallo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kitzelmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Muggleton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Schmid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zorn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="90" to="99" />
			<date type="published" when="2015">2015</date>
			<publisher>ACM</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The OWL API: A Java API for OWL ontologies</title>
		<author>
			<persName><forename type="first">M</forename><surname>Horridge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bechhofer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semantic Web</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="11" to="21" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Label ranking by learning pairwise preferences</title>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fürnkranz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Brinker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">172</biblScope>
			<biblScope unit="issue">16</biblScope>
			<biblScope unit="page" from="1897" to="1916" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">An Algorithm Based on Counterfactuals for Concept Learning in The Semantic Web</title>
		<author>
			<persName><forename type="first">L</forename><surname>Iannone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Palmisano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Intelligence</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="139" to="159" />
			<date type="published" when="2007">2007</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A predictive model of music preference using pairwise comparisons</title>
		<author>
			<persName><forename type="first">B</forename><surname>Jensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Saez Gallego</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Larsen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</title>
				<meeting>IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1977" to="1980" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Nantonac Collaborative Filtering: Recommendation Based on Order Responses</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kamishima</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</title>
				<meeting>the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining<address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="583" to="588" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Learnability of Description Logic Programs</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kietz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Inductive Logic Programming, ser. LNCS</title>
				<meeting>Inductive Logic Programming, ser. LNCS<address><addrLine>Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2002">2002</date>
			<biblScope unit="volume">2583</biblScope>
			<biblScope unit="page" from="117" to="132" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Formulating Description Logic Learning as An Inductive Logic Programming Task</title>
		<author>
			<persName><forename type="first">S</forename><surname>Konstantopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Charalambidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IEEE World Congress on Computational Intelligence</title>
				<meeting>IEEE World Congress on Computational Intelligence</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Learning weak constraints in answer set programming</title>
		<author>
			<persName><forename type="first">M</forename><surname>Law</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Russo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Broda</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theory and Practice of Logic Programming</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">4-5</biblScope>
			<biblScope unit="page" from="511" to="525" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">DL-Learner: Learning Concepts in Description Logics</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="2639" to="2642" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Collaborative recommendations using item-to-item similarity mappings</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D</forename><surname>Linden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Jacobi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Benson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">US Patent</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page">649</biblScope>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Inverse Entailment and Progol</title>
		<author>
			<persName><forename type="first">S</forename><surname>Muggleton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">New Generation Computing</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">3-4</biblScope>
			<biblScope unit="page" from="245" to="286" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">ILP Turns 20</title>
		<author>
			<persName><forename type="first">S</forename><surname>Muggleton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>De Raedt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Poole</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Bratko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Flach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Inoue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Srinivasan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning</title>
				<meeting><address><addrLine>Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">86</biblScope>
			<biblScope unit="page" from="3" to="23" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Learning user preferences by adaptive pairwise comparison</title>
		<author>
			<persName><forename type="first">L</forename><surname>Qian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jagadish</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the VLDB Endowment</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="1322" to="1333" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Initial profile generation in recommender systems using pairwise comparison</title>
		<author>
			<persName><forename type="first">L</forename><surname>Rokach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kisilevich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1854" to="1859" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">The Aleph Manual</title>
		<author>
			<persName><forename type="first">A</forename><surname>Srinivasan</surname></persName>
		</author>
		<ptr target="http://web.comlab.ox.ac.uk/oucl/research/areas/machlearn/Aleph/" />
		<imprint>
			<date type="published" when="2000">2000</date>
		</imprint>
		<respStmt>
			<orgName>Computing Laboratory, Oxford University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
