<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Formal Characterization of Concept Learning in Description Logics</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Francesca</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
							<email>lisi@di.uniba.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università degli Studi di Bari &quot;Aldo Moro&quot;</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Formal Characterization of Concept Learning in Description Logics</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">E5A9E92C4D474AC50FF3312694FC0AE2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T23:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Among the inferences studied in Description Logics (DLs), induction has received increasing attention over the last decade. Indeed, useful non-standard reasoning tasks can be based on inductive inference. Among them, Concept Learning is about the automated induction of a description for a given concept starting from classified instances of the concept. In this paper we present a formal characterization of Concept Learning in DLs which relies on recent results in Knowledge Representation and Machine Learning.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Building and maintaining large ontologies pose several challenges to Knowledge Representation (KR) because of their size. In DL ontologies, although standard inferences help structuring the knowledge base (KB), e.g., by automatically building a concept hierarchy, they are not sufficient when it comes to (automatically) generating new concept descriptions from given ones. They also fail if the concepts are specified using different vocabularies (i.e. sets of concept names and role names) or if they are described on different levels of abstraction. Altogether, it has turned out that building and maintaining large DL KBs requires, besides the standard inferences, additional so-called non-standard inferences <ref type="bibr" target="#b26">[27,</ref><ref type="bibr" target="#b18">19]</ref>. Among them, the first to be studied were the Least Common Subsumer (LCS) of a set of concepts <ref type="bibr" target="#b2">[3]</ref> and the Most Specific Concept (MSC) of an individual <ref type="bibr" target="#b31">[32,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr" target="#b0">1]</ref>. Very recently, a unified framework for non-standard reasoning services in DLs has been proposed <ref type="bibr" target="#b7">[8]</ref>. It is based on the use of second-order sentences in DLs <ref type="bibr" target="#b6">[7]</ref> as the unifying definition model for all those constructive reasoning tasks which rely on specific optimality criteria to build up the objective concept. E.g., the LCS is one of the cases reformulated there in terms of optimal solution problems.</p><p>Since <ref type="bibr" target="#b26">[27]</ref>, much work has been done in DL reasoning to support the construction and maintenance of DL KBs. This work has been more or less explicitly related to induction.
E.g., the notion of LCS has subsequently been used for the bottom-up induction of Classic concept descriptions from examples <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. Induction has been widely studied in Machine Learning (ML). Therefore it does not come as a surprise that the problem of finding an appropriate concept description for given concept instances, reformulated as a problem of inductive learning from examples, has also been addressed in ML, initially by heuristic means <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b13">14]</ref> and more recently in a formal manner <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b21">22]</ref> by adopting the methods and techniques of the ML approach known as Concept Learning.</p><p>In this paper, we present a formal characterization of Concept Learning in DLs which relies on recent results in KR and ML. Notably, the proposed formulation can be justified by observing that the inductive inference deals with finding, or constructing, a concept. Therefore, non-standard reasoning services based on induction can be considered as constructive reasoning tasks. Starting from this assumption, and inspired by Colucci et al.'s framework, Concept Learning is modeled as a second-order concept expression in DLs and reformulated in terms that allow for a construction possibly subject to some optimality criteria.</p><p>The paper is structured as follows. Section 2 is devoted to preliminaries on Concept Learning according to the ML tradition. Section 3 defines the Concept Learning problem statement in the KR context. Section 4 proposes a reformulation of Concept Learning as a constructive reasoning task in DLs. Section 5 concludes the paper with final remarks and directions of future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Preliminaries</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Machine Learning</head><p>The goal of ML is the design and development of algorithms that allow computers to evolve behaviors based on empirical data <ref type="bibr" target="#b29">[30]</ref>. The automation of inductive inference plays a key role in ML algorithms, though other inferences such as abduction and analogy are also considered. The effect of applying inductive ML algorithms depends on whether the scope of induction is discrimination or characterization <ref type="bibr" target="#b27">[28]</ref>. Discriminant induction aims at inducing hypotheses with discriminant power, as required in tasks such as classification. In classification, observations to learn from are labeled as positive or negative instances of a given class. Characteristic induction is more suitable for finding regularities in a data set. This corresponds to learning from positive examples only.</p><p>Ideally, the ML task is to discover an operational description of a target function f : X → Y which maps elements of the instance space X to values in a set Y. The target function is unknown, meaning that only a set D (the training data) of points of the form (x, f(x)) is provided. However, it may be very difficult in general to learn such a description of f perfectly. In fact, ML algorithms are often expected to acquire only some approximation f̂ of f by searching a very large space H of possible hypotheses (the hypothesis space) which depends on the representation chosen for f̂ (the language of hypotheses). The output approximation is the one that best fits D according to a scoring function score(f̂, D). It is assumed that any hypothesis h ∈ H that approximates f well w.r.t. a large set of training cases will also approximate it well for new unobserved cases.
These notions have been mathematically formalized in computational learning theory within the Probably Approximately Correct (PAC) learning framework <ref type="bibr" target="#b35">[36]</ref>.</p><p>Summing up, given a hypothesis space H and a training data set D, ML algorithms are designed to find an approximation f̂ of a target function f s.t.:</p><formula xml:id="formula_0">1. f̂ ∈ H; 2. f̂(D) ≈ f(D); and/or 3. f̂ = argmax_{f̂ ∈ H} score(f̂, D).</formula><p>It has been recently stressed that the first two requirements impose constraints on the possible hypotheses, thus defining a Constraint Satisfaction Problem (CSP), whereas the third requirement involves an optimization step, thus turning the CSP into an Optimization Problem (OP) <ref type="bibr" target="#b8">[9]</ref>. We shall refer to the ensemble of constraints and optimization criterion as the model of the learning task. Models are almost by definition declarative, and it is useful to distinguish the CSP, which is concerned with finding a solution that satisfies all the constraints in the model, from the OP, where one must also guarantee that the found solution is optimal w.r.t. the optimization function. Examples of typical CSPs in the ML context include Concept Learning, for reasons that will become clear in the following subsection.</p></div>
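The CSP/OP distinction can be made concrete with a small Python sketch (ours, not the paper's; the threshold hypothesis space, the data set, and the accuracy score are all invented for illustration):

```python
def accuracy(h, D):
    """Scoring function score(h, D): fraction of training pairs h classifies correctly."""
    return sum(h(x) == y for x, y in D) / len(D)

def learn(H, D, score):
    """CSP part: hypotheses consistent with D.  OP part: the score-maximizing one."""
    consistent = [h for h in H if all(h(x) == y for x, y in D)]
    best = max(H, key=lambda h: score(h, D))
    return consistent, best

# Toy target f(x) = (x >= 2); H is a family of threshold hypotheses.
H = [lambda x, t=t: x >= t for t in range(5)]
D = [(0, False), (1, False), (2, True), (3, True)]
consistent, best = learn(H, D, accuracy)
```

Here `consistent` realizes requirements 1 and 2 (the CSP), while `best` realizes requirement 3 (the OP).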
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Concept Learning</head><p>Concept Learning deals with inferring the general definition of a category based on members (positive examples) and nonmembers (negative examples) of this category. Here, the target is a boolean-valued function f : X → {0, 1}, i.e. a concept. When examples of the target concept are available, the resulting ML task is said to be supervised, otherwise it is called unsupervised. The positive examples are those instances x with f(x) = 1, and the negative ones are those with f(x) = 0.</p><p>In Concept Learning, the key inferential mechanism for induction is generalization as search through a partially ordered space of inductive hypotheses <ref type="bibr" target="#b28">[29]</ref>. Hypotheses may be ordered from the most general to the most specific. We say that an instance x ∈ X satisfies a hypothesis h ∈ H if and only if h(x) = 1. Given two hypotheses h_i and h_j, h_i is more general than or equal to h_j (written h_i ≥_g h_j, where ≥_g denotes a generality relation) if and only if any instance satisfying h_j also satisfies h_i. Note that it may not always be possible to compare two hypotheses with a generality relation: the sets of instances satisfied by two hypotheses may merely intersect, without one being included in the other. The relation ≥_g defines a partial order (i.e., it is reflexive, antisymmetric, and transitive) over the space of hypotheses.</p><p>A hypothesis h that correctly classifies all training examples is called consistent with these examples. For a consistent hypothesis h it holds that h(x) = f(x) for each training instance x. The set of all hypotheses consistent with the training examples is called the version space with respect to H and D. Concept Learning algorithms may use the hypothesis space structure to efficiently search for relevant hypotheses.
E.g., they may perform a specific-to-general search through the hypothesis space along one branch of the partial ordering, to find the most specific hypothesis consistent with the training examples. Another well-known approach, candidate elimination, consists of computing the version space by an incremental computation of the sets of maximally specific and maximally general hypotheses. An important issue in Concept Learning is the so-called inductive bias, i.e. the set of assumptions that the learning algorithm uses to predict outputs for previously unseen inputs. These assumptions encode the presumed nature of the target function, so the learning approach implicitly makes assumptions about the correct output for unseen examples.</p><p>Inductive Logic Programming (ILP) was born at the intersection of Concept Learning and the field of Logic Programming <ref type="bibr" target="#b30">[31]</ref>. From Concept Learning it has inherited the inferential mechanisms for induction <ref type="bibr" target="#b32">[33]</ref>. However, a distinguishing feature of ILP with respect to other forms of Concept Learning is the use of prior knowledge of the domain of interest, called background knowledge (BK), during the search for hypotheses. Due to its roots in Logic Programming, ILP was originally concerned with Concept Learning problems where hypotheses, observations, and BK are all expressed as first-order Horn rules (usually Datalog for computational reasons). E.g., Foil is a popular ILP algorithm for learning sets of Datalog rules for classification purposes <ref type="bibr" target="#b33">[34]</ref>. It performs a greedy search in order to maximize an information gain function.
Therefore, Foil implements an OP version of Concept Learning.</p><p>Over the last decade, ILP has widened its scope significantly by considering, e.g., learning in DLs (see the next section) as well as learning in hybrid KR frameworks integrating DLs and first-order clausal languages <ref type="bibr" target="#b34">[35,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b24">25,</ref><ref type="bibr" target="#b25">26]</ref>.</p></div>
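The generality relation and the version space admit a simple extensional reading over a finite instance space. The following toy sketch (our illustration, with invented hypotheses) identifies each hypothesis with the set of instances it satisfies:

```python
def more_general_or_equal(hi, hj, X):
    """h_i >=_g h_j iff every instance of X satisfying h_j also satisfies h_i."""
    return all(hi(x) for x in X if hj(x))

def version_space(H, D):
    """All hypotheses in H consistent with the labeled training data D."""
    return [h for h in H if all(h(x) == y for x, y in D)]

X = range(10)
h_broad = lambda x: x >= 2          # satisfied by 2..9
h_narrow = lambda x: 5 <= x <= 7    # satisfied by 5, 6, 7
```

Since every instance satisfying `h_narrow` also satisfies `h_broad`, the two are comparable under ≥_g; two overlapping intervals, by contrast, would not be.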
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Learning Concepts in Description Logics</head><p>Early work on the application of ML to DLs essentially focused on investigating the PAC-learnability of various terminological languages derived from Classic. In particular, Cohen and Hirsh investigated the CoreClassic DL, proving that it is not PAC-learnable <ref type="bibr" target="#b3">[4]</ref>, and demonstrated the PAC-learnability of sublanguages of it, such as C-Classic <ref type="bibr" target="#b4">[5]</ref>, through the bottom-up LcsLearn algorithm. It is also worth mentioning unsupervised learning methodologies for DL concept descriptions, whose prototypical example is Kluster <ref type="bibr" target="#b17">[18]</ref>, a polynomial-time algorithm for the induction of BACK terminologies. More recently, algorithms have been proposed that follow the generalization as search approach by extending the methodological apparatus of ILP to DL languages <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b20">21,</ref><ref type="bibr" target="#b21">22]</ref>. Supervised (resp., unsupervised) learning systems, such as YinYang <ref type="bibr" target="#b15">[16]</ref> and DL-Learner <ref type="bibr" target="#b22">[23]</ref>, have been implemented. Based on a set of refinement operators borrowed from YinYang and DL-Learner, a new version of the Foil algorithm, named DL-Foil, has been proposed <ref type="bibr" target="#b12">[13]</ref>. In DL-Foil, the information gain function takes into account the Open World Assumption (OWA) holding in DLs. Indeed, many instances may be available which can be ascribed neither to the target concept nor to its negation. This requires a different setting to ensure a special treatment of the unlabeled individuals.</p></div>
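The OWA issue can be illustrated with a toy three-valued partition (a hypothetical helper of ours, not DL-Foil's actual code): individuals split into provably positive, provably negative, and unlabeled ones.

```python
def partition(individuals, entails_C, entails_not_C):
    """Under the OWA, membership is three-valued: an individual may be
    provably an instance of C, provably an instance of ¬C, or neither."""
    pos = {a for a in individuals if entails_C(a)}
    neg = {a for a in individuals if entails_not_C(a)}
    unknown = set(individuals) - pos - neg
    return pos, neg, unknown

# Toy KB: only explicitly asserted memberships are entailed.
asserted_C = {'anna'}
asserted_not_C = {'bob'}
pos, neg, unk = partition({'anna', 'bob', 'carl'},
                          asserted_C.__contains__,
                          asserted_not_C.__contains__)
```

A closed-world learner would silently count `carl` as negative; an OWA-aware gain function must treat the `unknown` set separately.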
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">The Problem Statement</head><p>In this section, the supervised Concept Learning problem in the DL setting is formally defined (see Def. 1 and the notation introduced for it). Note that this definition provides the CSP version of the supervised Concept Learning problem. However, as already mentioned, Concept Learning can also be regarded as an OP. Algorithms such as DL-Foil testify to the existence of optimality criteria to be fulfilled in Concept Learning besides the conditions of completeness and consistency.</p></div>
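As an illustration of Def. 1's two conditions (our sketch, not the paper's formalism), entailment can be approximated by membership in precomputed instance sets; note that testing consistency by non-membership, as below, is a closed-world simplification of K |= ¬D(b):

```python
def is_correct(D_instances, pos_examples, neg_examples):
    """D_instances: individuals entailed to be instances of a candidate D.
    Completeness: every positive example is an instance of D.
    Consistency: no negative example is an instance of D (simplified)."""
    complete = pos_examples <= D_instances
    consistent = not (neg_examples & D_instances)
    return complete and consistent
```

A candidate definition is correct exactly when it passes both checks.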
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">The Solution Strategy</head><p>In Def. 1, we have considered a language of hypotheses DL_H that allows for the generation of concept definitions in any DL. These definitions can be organized according to the concept subsumption relationship ⊑. Since ⊑ induces a quasi-order (i.e., a reflexive and transitive relation) on DL_H <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b10">11]</ref>, the problem stated in Def. 1 can be cast as the search for a correct (i.e., complete and consistent) concept definition in (DL_H, ⊑) according to the generalization as search approach in Mitchell's vision. In such a setting, one can define suitable techniques (called refinement operators) to traverse (DL_H, ⊑) either top-down or bottom-up.</p></div>
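To make the idea of traversing (DL_H, ⊑) concrete, here is a toy hypothesis language (invented atom names; sets of atoms stand in for DL concepts) where a concept is a conjunction of atoms and adding a conjunct moves downward in the subsumption order:

```python
ATOMS = ('Person', 'Tall', 'Rich')   # hypothetical concept names

def subsumes(C, D):
    """C subsumes D (C ⊒ D) for atom conjunctions: D carries all of C's conjuncts."""
    return C <= D

def rho(C):
    """A toy downward refinement operator: every one-conjunct specialization of C."""
    return {C | {a} for a in ATOMS if a not in C}
```

Starting from the empty conjunction (playing the role of ⊤), repeated applications of `rho` walk top-down through the ordered space.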
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 2 (Refinement operator in DLs). Given a quasi-ordered search space (DL_H, ⊑)</head><p>a downward refinement operator is a mapping ρ :</p><formula xml:id="formula_1">DL_H → 2^{DL_H} such that ∀C ∈ DL_H : ρ(C) ⊆ {D ∈ DL_H | D ⊑ C} - an upward refinement operator is a mapping δ : DL_H → 2^{DL_H} such that ∀C ∈ DL_H : δ(C) ⊆ {D ∈ DL_H | C ⊑ D}</formula><p>Definition 3 (Refinement chain in DLs). Given a downward (resp., upward) refinement operator ρ (resp., δ) for a quasi-ordered search space (DL_H, ⊑), a refinement chain from C ∈ DL_H to D ∈ DL_H is a sequence</p><formula xml:id="formula_2">C = C_0, C_1, . . . , C_n = D such that C_i ∈ ρ(C_{i−1}) (resp., C_i ∈ δ(C_{i−1})) for every 1 ≤ i ≤ n.</formula><p>Note that, given (DL, ⊑), there is an infinite number of generalizations and specializations. Usually one tries to define refinement operators that can traverse the hypothesis space efficiently in pursuit of one of the correct definitions (w.r.t. the examples that have been provided). The corresponding notions for upward refinement operators are defined dually.</p><p>Designing a refinement operator requires decisions about which properties are most useful in practice for the underlying learning algorithm. Considering the properties reported in Def. 4, it has been shown that the most feasible property combination for Concept Learning in expressive DLs such as ALC is {weakly complete, complete, proper} <ref type="bibr" target="#b20">[21]</ref>. Ideal operators, i.e. operators that are complete, proper, and finite, exist only for less expressive DLs like EL <ref type="bibr" target="#b23">[24]</ref>.</p></div>
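A refinement chain in the sense of Def. 3 can be computed by iterating a downward operator. This sketch (ours, not from the paper) performs a breadth-first search from C to D, assuming `rho` returns a finite set of one-step refinements:

```python
from collections import deque

def refinement_chain(C, D, rho):
    """Return a chain C = C_0, ..., C_n = D with C_i in rho(C_{i-1}), or None."""
    queue, seen = deque([(C, [C])]), {C}
    while queue:
        current, path = queue.popleft()
        if current == D:
            return path
        for step in rho(current):
            if step not in seen:
                seen.add(step)
                queue.append((step, path + [step]))
    return None

# Toy operator over frozensets of atoms: specialize by adding one conjunct.
rho = lambda C: {C | {a} for a in ('A', 'B') if a not in C}
chain = refinement_chain(frozenset(), frozenset({'A', 'B'}), rho)
```

Breadth-first search yields a shortest chain, which matters in practice when the operator is redundant.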
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Concept Learning as Constructive Reasoning in DLs</head><p>In this section, we formally characterize Concept Learning in DLs by emphasizing the constructive nature of the inductive inference.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Second-order Concept Expressions</head><p>We start from the syntax of an arbitrary Description Logic DL, where N_c, N_r, and N_o are the alphabets of concept names, role names, and individual names, respectively. In order to write second-order formulas, we introduce a set N_x = {X_0, X_1, X_2, ...} of concept variables, which we can quantify over. We denote by DL_X the language of concept terms obtained from DL by adding N_x.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 5 (Concept term).</head><p>A concept term in DL_X is a concept formed according to the specific syntax rules of DL augmented with the additional rule C −→ X for X ∈ N_x.</p><p>Since we are not interested in second-order DLs in themselves, we restrict our language to the particular existential second-order formulas of interest for this paper. In particular, we allow formulas involving an ABox. By doing so, we can easily model the computation of, e.g., the MSC, which was left out as future work in Colucci et al.'s framework. This paves the way to the modeling of Concept Learning as shown in the next subsection.</p><formula xml:id="formula_3">Definition 6 (Concept expression). Let a_1, . . . , a_m be individuals in DL, and C_1, . . . , C_m, D_1, . . . , D_m ∈ DL_X be concept terms containing concept variables X_0, X_1, . . . , X_n. A concept expression Γ in DL_X is a conjunction (C_1 ⊑ D_1) ∧ . . . ∧ (C_l ⊑ D_l) ∧ (C_{l+1} ⋢ D_{l+1}) ∧ . . . ∧ (C_m ⋢ D_m) ∧ (a_1 : D_1) ∧ . . . ∧ (a_l : D_l) ∧ (a_{l+1} : ¬D_{l+1}) ∧ . . . ∧ (a_m : ¬D_m)<label>(1)</label></formula><p>of (possibly negated) concept subsumptions and concept assertions, with 1 ≤ l ≤ m.</p><p>We use General Semantics, also called Henkin semantics, for interpreting concept variables <ref type="bibr" target="#b14">[15]</ref>. In such a semantics, variables denoting unary predicates can be interpreted only by some of the subsets in the powerset 2^{∆^I} of the domain; in Standard Semantics, instead, a concept variable may be interpreted as any subset of ∆^I. Adapting General Semantics to our problem, the structure we consider is exactly the collection of sets interpreting concepts in DL. That is, the interpretation X^I of a concept variable X ∈ DL_X must coincide with the interpretation E^I of some concept E ∈ DL. The interpretations we refer to in the following definition are of this kind.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 7 (Satisfiability).</head><p>A concept expression Γ of the form (1) is satisfiable in DL iff there exist n + 1 concepts E_0, . . . , E_n ∈ DL such that, extending the semantics of DL for each interpretation I with (X_i)^I = (E_i)^I for i = 0, . . . , n, it holds that 1. for each j = 1, . . . , l and every interpretation I, (C_j)^I ⊆ (D_j)^I and (a_j)^I ∈ (D_j)^I, and 2. for each j = l + 1, . . . , m, there exists an interpretation I s.t.</p><formula xml:id="formula_4">(C_j)^I ⊈ (D_j)^I and (a_j)^I ∉ (D_j)^I</formula><p>Otherwise, Γ is said to be unsatisfiable in DL.</p><p>Definition 8 (Solution). If a concept expression Γ of the form (1) is satisfiable in DL, then E_0, . . . , E_n is a solution for Γ. Moreover, we say that the formula</p><formula xml:id="formula_5">∃X_0 • • • ∃X_n .Γ<label>(2)</label></formula><p>is true in DL if there exists at least one solution for Γ, otherwise it is false.</p></div>
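The shape of the check in Def. 7 can be mimicked over a single finite interpretation, with extensions as sets and subsumption as inclusion. This is a drastic simplification of our own making (the definition quantifies over all interpretations, and variables range Henkin-style over extensions of DL concepts), intended only to make the four kinds of conjuncts concrete:

```python
def satisfies(ext, pos_subs, neg_subs, pos_asserts, neg_asserts):
    """ext: concept/variable name -> set of individuals (one toy interpretation).
    pos_subs: pairs (C, D) for C ⊑ D;  neg_subs: pairs (C, D) for C ⋢ D;
    pos_asserts: pairs (a, D) for a : D;  neg_asserts: pairs (a, D) for a : ¬D."""
    return (all(ext[c] <= ext[d] for c, d in pos_subs)
            and all(not (ext[c] <= ext[d]) for c, d in neg_subs)
            and all(a in ext[d] for a, d in pos_asserts)
            and all(a not in ext[d] for a, d in neg_asserts))

ext = {'C': {'a'}, 'X': {'a', 'b'}}
ok = satisfies(ext, pos_subs=[('C', 'X')], neg_subs=[('X', 'C')],
               pos_asserts=[('b', 'X')], neg_asserts=[('c', 'X')])
```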
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Modeling Concept Learning with Second-Order DLs</head><p>It has been pointed out that constructive reasoning tasks can be divided into two main categories: tasks for which we just need to compute a concept (or a set of concepts), and those for which we need to find a concept (or a set of concepts) according to some minimality/maximality criteria <ref type="bibr" target="#b7">[8]</ref>. In the first case, we have a set of solutions, while in the second one we also have a set of suboptimal solutions to the main problem. E.g., the set of suboptimal solutions in LCS is represented by the common subsumers. Both MSC and Concept Learning belong to this second category of constructive reasoning tasks. We remind the reader that MSC can be easily reduced to LCS for DLs that admit the one-of concept constructor; however, this reduction is not trivial in the general case.</p><p>Hereafter, we first show how to model MSC in terms of formula (<ref type="formula" target="#formula_5">2</ref>); this step is functional to the subsequent modeling of Concept Learning.</p><p>Most Specific Concept Intuitively, the MSC of individuals described in an ABox is a concept description that represents all the properties of the individuals, including the concept assertions they occur in and their relationships to other individuals. Similarly to the LCS, the MSC is uniquely determined up to equivalence. More precisely, the set of most specific concepts of individuals a_1, . . . , a_k ∈ DL forms an equivalence class and, if S is defined to be the set of all concept descriptions that have a_1, . . . , a_k as their instances, then this class is the least element in [S] w.r.t. the partial ordering on equivalence classes induced by the quasi-ordering ⊑. Analogously to the LCS, we refer to one of its representatives by MSC(a_1, . . . , a_k). The MSC need not exist.
Three different phenomena may cause the non-existence of a least element in [S], and thus of an MSC. A concept E is not the MSC of a_1, . . . , a_k iff the following formula is true in DL:</p><formula xml:id="formula_6">∃X.(a_1 : X) ∧ . . . ∧ (a_k : X) ∧ (X ⊑ E) ∧ ¬(E ⊑ X)<label>(3)</label></formula><p>that is, E is not the MSC if there exists a concept X which has a_1, . . . , a_k among its instances and is strictly more specific than E.</p><p>Concept Learning Following Def. 1, we assume that</p><formula xml:id="formula_7">Ind⁺_C(A) = {a_1, . . . , a_m} and Ind⁻_C(A) = {b_1, . . . , b_n}. A concept D ∈ DL_H is a correct concept definition for the target concept name C w.r.t. Ind⁺_C(A) and Ind⁻_C(A)</formula><p>iff it is a solution for the following second-order concept expression:</p><formula xml:id="formula_8">(C ⊑ X) ∧ (X ⊑ C) ∧ (a_1 : X) ∧ . . . ∧ (a_m : X) ∧ (b_1 : ¬X) ∧ . . . ∧ (b_n : ¬X) (4)</formula><p>The CSP version of the task is therefore modeled with the following formula.</p></div>
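Formula (3) admits a simple extensional emulation (our toy reading, not the second-order semantics: sets stand in for concept extensions, inclusion for subsumption):

```python
def not_msc(E, individuals, candidates):
    """True iff some X among the candidate extensions has all the individuals
    as instances, is subsumed by E (X ⊑ E), and is strictly more specific
    than E (¬(E ⊑ X))."""
    return any(individuals <= X and X <= E and not (E <= X)
               for X in candidates)

E = {'a', 'b', 'c'}
```

If no candidate witnesses the formula, E survives as a most specific concept among the candidates considered.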
<div xmlns="http://www.tei-c.org/ns/1.0"><formula xml:id="formula_9">∃X.(C ⊑ X) ∧ (X ⊑ C) ∧ (a_1 : X) ∧ . . . ∧ (a_m : X) ∧ (b_1 : ¬X) ∧ . . . ∧ (b_n : ¬X) (5)</formula><p>A simple OP version of the task could be modeled with the formula:</p><formula xml:id="formula_10">∃X.(C ⊑ X) ∧ (X ⊑ C) ∧ (X ⊑ E) ∧ ¬(E ⊑ X) ∧ (a_1 : X) ∧ . . . ∧ (a_m : X) ∧ (b_1 : ¬X) ∧ . . . ∧ (b_n : ¬X)<label>(6)</label></formula><p>which asks for solutions that comply with a minimality criterion involving concept subsumption checks. Therefore, a concept E ∈ DL_H is not an optimal concept definition for C w.r.t. Ind⁺_C(A) and Ind⁻_C(A) if there exists a correct concept definition X which is strictly more specific than E.</p></div>
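The CSP version in formula (5) can likewise be emulated by enumerating candidate extensions for X drawn from a finite hypothesis language (invented toy data; an actual solver would instead combine Tableaux calculi with variable-substitution rules, as in Colucci et al.'s procedure):

```python
def csp_solutions(candidates, pos, neg):
    """candidates: name -> extension (a set) for each concept in the hypothesis
    language.  A candidate solves the example conjuncts of formula (5) if it
    contains every positive example and excludes every negative one."""
    return [name for name, ext in candidates.items()
            if pos <= ext and not (neg & ext)]

candidates = {'Tall': {'a', 'b'}, 'Rich': {'a', 'c'}, 'Person': {'a', 'b', 'c'}}
sols = csp_solutions(candidates, pos={'a', 'b'}, neg={'c'})
```

Adding the minimality conjuncts of formula (6) would amount to keeping only the solutions with no strictly smaller solution among the candidates.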
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusions</head><p>In this paper, we have provided a formal characterization of Concept Learning in DLs according to a declarative modeling language which abstracts from the specific algorithms used to solve the task. To this purpose, we have defined a fragment of second-order logic under General Semantics which allows us to express formulas involving concept assertions from an ABox. Such a fragment enables us to cover the general case of MSC as well. Also, as a minor contribution, we have suggested that the generalization as search approach to Concept Learning in Mitchell's vision is exactly the unifying framework needed to accompany the declarative modeling language proposed in this paper with a way of computing solutions to the problems modeled with it. More precisely, the computational method we refer to in this paper is based on the iterative application of suitable refinement operators. Since many refinement operators for DLs are already available in the literature, the method can be designed so that it can be instantiated with a refinement operator specifically defined for the DL at hand.</p><p>The preliminary results reported in this paper open a promising direction of research at the intersection of KR and ML. For this research we have taken inspiration from recent results in both areas. On the one hand, Colucci et al.'s work provides a procedure which combines Tableaux calculi for DLs with rules for the substitution of concept variables in second-order concept expressions <ref type="bibr" target="#b7">[8]</ref>. On the other hand, De Raedt et al.'s work shows that off-the-shelf constraint programming techniques can be applied to various ML problems, once these are reformulated as CSPs and OPs <ref type="bibr" target="#b8">[9]</ref>. Interestingly, both works pursue a unified view of the inferential problems of interest to the respective fields of research.
This convergence of research efforts in the two fields has motivated the work presented in this paper, which therefore moves a step towards bridging the gap between KR and ML in areas, such as the maintenance of KBs, where the two fields have already produced interesting results, though mostly independently of each other. New questions and challenges are raised by the cross-fertilization of these results. In the future, we intend to investigate how to express optimality criteria, such as the information gain function, within second-order concept expressions, and how the generalization as search approach can be effectively integrated with a second-order calculus.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>-</head><label></label><figDesc>- T and A are the TBox and the ABox, respectively, of a DL KB K
- Ind(A) is the set of all individuals occurring in A
- Retr_K(C) is the set of all individuals occurring in A that are instances of a given concept C w.r.t. T
- Ind⁺_C(A) = {a ∈ Ind(A) | C(a) ∈ A} ⊆ Retr_K(C)
- Ind⁻_C(A) = {b ∈ Ind(A) | ¬C(b) ∈ A} ⊆ Retr_K(¬C)
These sets can be easily computed by resorting to the retrieval inference services usually available in DL systems.
Definition 1 (Concept Learning). Let K = (T, A) be a DL KB. Given:
- a (new) target concept name C
- a set of positive and negative examples Ind⁺_C(A) ∪ Ind⁻_C(A) ⊆ Ind(A) for C
- a concept description language DL_H
the Concept Learning problem is to find a concept definition C ≡ D such that D ∈ DL_H satisfies the following conditions:
Completeness: K |= D(a) ∀a ∈ Ind⁺_C(A), and
Consistency: K |= ¬D(b) ∀b ∈ Ind⁻_C(A)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Definition 4 (</head><label>4</label><figDesc>Properties of refinement operators in DLs). A downward refinement operator ρ for a quasi-ordered search space (DL_H, ⊑) is
- (locally) finite iff ρ(C) is finite for all concepts C ∈ DL_H.
- redundant iff there exist a refinement chain from a concept C ∈ DL_H to a concept D ∈ DL_H which does not go through some concept E ∈ DL_H, and a refinement chain from C to a concept equal to D which does go through E.
- proper iff, for all concepts C, D ∈ DL_H, D ∈ ρ(C) implies C ≢ D.
- complete iff, for all concepts C, D ∈ DL_H with C ⊑ D, a concept E ∈ DL_H with E ≡ C can be reached from D by ρ.
- weakly complete iff, for all concepts C ∈ DL_H with C ⊑ ⊤, a concept E ∈ DL_H with E ≡ C can be reached from ⊤ by ρ.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>1 .</head><label>1</label><figDesc>1. [S] might be empty, or 2. [S] might contain different minimal elements, or 3. [S] might contain an infinite decreasing chain [D_1] ⊐ [D_2] ⊐ • • •.</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Least common subsumers and most specific concepts in a description logic with existential restrictions and terminological cycles</title>
		<author>
			<persName><forename type="first">F</forename><surname>Baader</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI&apos;03: Proceedings of the 18th International Joint Conference on Artificial intelligence</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Gottlob</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Walsh</surname></persName>
		</editor>
		<imprint>
			<publisher>Morgan Kaufmann Publishers</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="319" to="324" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A refinement operator for description logics</title>
		<author>
			<persName><forename type="first">L</forename><surname>Badea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nienhuys-Cheng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Inductive Logic Programming</title>
		<title level="s">Lecture Notes in Artificial Intelligence</title>
		<editor>
			<persName><forename type="first">J</forename><surname>Cussens</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Frisch</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="volume">1866</biblScope>
			<biblScope unit="page" from="40" to="59" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Computing least common subsumers in description logics</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">W</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Borgida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hirsh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 10th National Conf. on Artificial Intelligence</title>
				<meeting>of the 10th National Conf. on Artificial Intelligence</meeting>
		<imprint>
			<publisher>The AAAI Press / The MIT Press</publisher>
			<date type="published" when="1992">1992</date>
			<biblScope unit="page" from="754" to="760" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Learnability of description logics</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">W</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hirsh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 5th Annual ACM Conf. on Computational Learning Theory</title>
				<editor>
			<persName><forename type="first">D</forename><surname>Haussler</surname></persName>
		</editor>
		<meeting>of the 5th Annual ACM Conf. on Computational Learning Theory</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="1992">1992</date>
			<biblScope unit="page" from="116" to="127" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Learning the CLASSIC description logic: Theoretical and experimental results</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">W</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hirsh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 4th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR&apos;94)</title>
				<meeting>of the 4th Int. Conf. on Principles of Knowledge Representation and Reasoning (KR&apos;94)</meeting>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="121" to="133" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The learnability of description logics with equality constraints</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">W</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hirsh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">2-3</biblScope>
			<biblScope unit="page" from="169" to="199" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Second-order description logics: Semantics, motivation, and a calculus</title>
		<author>
			<persName><forename type="first">S</forename><surname>Colucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Di Noia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Di Sciascio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Donini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ragone</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">Proc. of the 23rd Int. Workshop on Description Logics (DL 2010)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">V</forename><surname>Haarslev</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Toman</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Weddell</surname></persName>
		</editor>
		<meeting>of the 23rd Int. Workshop on Description Logics (DL 2010)</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">573</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A unified framework for non-standard reasoning services in description logics</title>
		<author>
			<persName><forename type="first">S</forename><surname>Colucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Di Noia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Di Sciascio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Donini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ragone</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ECAI 2010 - 19th European Conference on Artificial Intelligence. Frontiers in Artificial Intelligence and Applications</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Coelho</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Studer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Wooldridge</surname></persName>
		</editor>
		<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">215</biblScope>
			<biblScope unit="page" from="479" to="484" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Constraint programming for data mining and machine learning</title>
		<author>
			<persName><forename type="first">L</forename><surname>De Raedt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Guns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nijssen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 24th AAAI Conference on Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Fox</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Poole</surname></persName>
		</editor>
		<meeting>of the 24th AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">An efficient method for hybrid deduction</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Donini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lenzerini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nardi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ECAI</title>
				<imprint>
			<date type="published" when="1990">1990</date>
			<biblScope unit="page" from="246" to="252" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Knowledge-intensive induction of terminologies from metadata</title>
		<author>
			<persName><forename type="first">F</forename><surname>Esposito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Iannone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Palmisano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Semeraro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web - ISWC 2004: Third International Semantic Web Conference</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>McIlraith</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Plexousakis</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Van Harmelen</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">3298</biblScope>
			<biblScope unit="page" from="441" to="455" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Concept formation in expressive description logics</title>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Iannone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Palmisano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Semeraro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning: ECML 2004</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Boulicaut</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Esposito</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">3201</biblScope>
			<biblScope unit="page" from="99" to="110" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">DL-FOIL concept learning in description logics</title>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>D&apos;Amato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Esposito</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Inductive Logic Programming</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">F</forename><surname>Zelezný</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Lavrač</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">5194</biblScope>
			<biblScope unit="page" from="107" to="121" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">CLASSIC learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Frazier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Pitt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">2-3</biblScope>
			<biblScope unit="page" from="151" to="193" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Completeness in the theory of types</title>
		<author>
			<persName><forename type="first">L</forename><surname>Henkin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Symbolic Logic</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="81" to="91" />
			<date type="published" when="1950">1950</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">An algorithm based on counterfactuals for concept learning in the semantic web</title>
		<author>
			<persName><forename type="first">L</forename><surname>Iannone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Palmisano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Intelligence</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="139" to="159" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Learnability of description logic programs</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kietz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Inductive Logic Programming</title>
		<title level="s">Lecture Notes in Artificial Intelligence</title>
		<editor>
			<persName><forename type="first">S</forename><surname>Matwin</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Sammut</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="volume">2583</biblScope>
			<biblScope unit="page" from="117" to="132" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A polynomial approach to the constructive induction of structural knowledge</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">U</forename><surname>Kietz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Morik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="193" to="217" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Non-Standard Inferences in Description Logics</title>
		<author>
			<persName><forename type="first">R</forename><surname>Küsters</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<biblScope unit="volume">2100</biblScope>
			<date type="published" when="2001">2001</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Approximating most specific concepts in description logics with existential restrictions</title>
		<author>
			<persName><forename type="first">R</forename><surname>Küsters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Molitor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI Communications</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="47" to="59" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Foundations of refinement operators for description logics</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Inductive Logic Programming</title>
		<title level="s">Lecture Notes in Artificial Intelligence</title>
		<editor>
			<persName><forename type="first">H</forename><surname>Blockeel</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Ramon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Shavlik</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Tadepalli</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">4894</biblScope>
			<biblScope unit="page" from="161" to="174" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Concept learning in description logics using refinement operators</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">78</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="203" to="250" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">DL-Learner: Learning concepts in description logics</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="2639" to="2642" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Ideal downward refinement in the EL description logic</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Haase</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Inductive Logic Programming</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">L</forename><surname>De Raedt</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">5989</biblScope>
			<biblScope unit="page" from="73" to="87" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Building rules on top of ontologies for the semantic web with inductive logic programming</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theory and Practice of Logic Programming</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="271" to="300" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Inductive logic programming in databases: From Datalog to DL+log</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theory and Practice of Logic Programming</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="331" to="359" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Usability issues in knowledge representation systems</title>
		<author>
			<persName><forename type="first">D</forename><surname>McGuinness</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Patel-Schneider</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 15th National Conf. on Artificial Intelligence and 10th Innovative Applications of Artificial Intelligence Conference</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Mostow</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Rich</surname></persName>
		</editor>
		<meeting>of the 15th National Conf. on Artificial Intelligence and 10th Innovative Applications of Artificial Intelligence Conference</meeting>
		<imprint>
			<publisher>AAAI Press / The MIT Press</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="page" from="608" to="614" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">A theory and methodology of inductive learning</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Michalski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning: an artificial intelligence approach</title>
				<editor>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Michalski</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Carbonell</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Mitchell</surname></persName>
		</editor>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="1983">1983</date>
			<biblScope unit="volume">I</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Generalization as search</title>
		<author>
			<persName><forename type="first">T</forename><surname>Mitchell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page" from="203" to="226" />
			<date type="published" when="1982">1982</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<title level="m" type="main">Machine Learning</title>
		<author>
			<persName><forename type="first">T</forename><surname>Mitchell</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1997">1997</date>
			<publisher>McGraw Hill</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Inductive logic programming</title>
		<author>
			<persName><forename type="first">S</forename><surname>Muggleton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 1st Conf. on Algorithmic Learning Theory</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Arikawa</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Goto</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Ohsuga</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Yokomori</surname></persName>
		</editor>
		<meeting>of the 1st Conf. on Algorithmic Learning Theory</meeting>
		<imprint>
			<publisher>Springer/Ohmsha</publisher>
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Reasoning and revision in hybrid representation systems</title>
		<author>
			<persName><forename type="first">B</forename><surname>Nebel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<biblScope unit="volume">422</biblScope>
			<date type="published" when="1990">1990</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Foundations of inductive logic programming</title>
		<author>
			<persName><forename type="first">S</forename><surname>Nienhuys-Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>De Wolf</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">1228</biblScope>
			<date type="published" when="1997">1997</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Learning logical definitions from relations</title>
		<author>
			<persName><forename type="first">J</forename><surname>Quinlan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="239" to="266" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Towards learning in CARIN-ALN</title>
		<author>
			<persName><forename type="first">C</forename><surname>Rouveirol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ventos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Inductive Logic Programming</title>
		<title level="s">Lecture Notes in Artificial Intelligence</title>
		<editor>
			<persName><forename type="first">J</forename><surname>Cussens</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Frisch</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="volume">1866</biblScope>
			<biblScope unit="page" from="191" to="208" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">A theory of the learnable</title>
		<author>
			<persName><forename type="first">L</forename><surname>Valiant</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="1134" to="1142" />
			<date type="published" when="1984">1984</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
