<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Dealing with Incompleteness and Vagueness in Inductive Logic Programming</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Francesca</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università degli Studi di Bari &quot;Aldo Moro&quot;</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Umberto</forename><surname>Straccia</surname></persName>
							<affiliation key="aff1">
<orgName type="institution">ISTI-CNR</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Dealing with Incompleteness and Vagueness in Inductive Logic Programming</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">7D45458FC935FD1266F3FFB8827D41BB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T23:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Incompleteness and vagueness are inherent properties of knowledge in several real world domains and are particularly pervading in those domains where entities could be better described in natural language. In order to deal with incomplete and vague structured knowledge, several fuzzy extensions of Description Logics (DLs) have been proposed in the literature. In this paper, we address the issues raised by incomplete and vague knowledge in Inductive Logic Programming (ILP). We present a novel ILP method for inducing fuzzy DL inclusion axioms from crisp DL knowledge bases and discuss the results obtained in comparison with related works.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Incompleteness and vagueness are inherent properties of knowledge in several real world domains and are particularly pervading in those domains where entities could be better described in natural language. The issues raised by incomplete and vague knowledge have been traditionally addressed in the field of Knowledge Representation (KR).</p><p>Incomplete knowledge. The Open World Assumption (OWA) is used in KR to codify the informal notion that in general no single agent or observer has complete knowledge. The OWA limits the kinds of inference and deductions an agent can make to those that follow from statements that are known to the agent to be true. In contrast, the Closed World Assumption (CWA) allows an agent to infer, from its lack of knowledge of a statement being true, anything that follows from that statement being false. Heuristically, the OWA applies when we represent knowledge within a system as we discover it, and where we cannot guarantee that we have discovered or will discover complete information. In the OWA, statements about knowledge that are not included in or inferred from the knowledge explicitly recorded in the system may be considered unknown, rather than wrong or false. Description Logics (DLs) are KR formalisms compliant with the OWA, thus turning out to be particularly suitable for representing incomplete knowledge <ref type="bibr" target="#b0">[1]</ref>.</p><p>Vague knowledge. It is well known that "classical" DLs are not appropriate to deal with vague knowledge <ref type="bibr" target="#b19">[20]</ref>. We recall for the inexpert reader that there has been a long-lasting misunderstanding in the literature of artificial intelligence and uncertainty modelling, regarding the role of probability/possibility theory and vague/fuzzy theory. A clarifying paper is <ref type="bibr" target="#b4">[5]</ref>. 
Specifically, under uncertainty theory fall all those approaches in which statements are true or false to some probability or possibility (for example, "it will rain tomorrow"). That is, a statement is true or false in any world/interpretation, but we are "uncertain" about which world to consider as the right one, and thus we speak about, e.g., a probability distribution or a possibility distribution over the worlds. On the other hand, under fuzzy theory fall all those approaches in which statements (for example, "the car is long") are true to some degree, which is taken from a truth space (usually [0, 1]). That is, an interpretation maps a statement to a truth degree, since we are unable to establish whether a statement is entirely true or false due to the involvement of vague concepts, such as "long car" (the degree to which the sentence is true depends on the length of the car). Here, we shall focus on fuzzy logic only.</p><p>Learning in fuzzy DLs. Although a relatively important amount of work has been carried out in the last years concerning the use of fuzzy DLs as ontology languages <ref type="bibr" target="#b19">[20]</ref> and the use of DLs as representation formalisms in Inductive Logic Programming (ILP) <ref type="bibr" target="#b12">[13]</ref>, the problem of automatically managing the evolution of fuzzy ontologies by applying ILP algorithms still remains relatively unaddressed. Konstantopoulos and Charalambidis <ref type="bibr" target="#b8">[9]</ref> propose an ad-hoc translation of fuzzy Lukasiewicz ALC DL constructs into LP in order to apply a conventional ILP method for rule learning. However, the method is not sound as it has been recently shown that the mapping from fuzzy DLs to LP is incomplete <ref type="bibr" target="#b16">[17]</ref> and entailment in Lukasiewicz ALC is undecidable <ref type="bibr" target="#b3">[4]</ref>. 
Iglesias and Lehmann <ref type="bibr" target="#b6">[7]</ref> propose an extension of DL-Learner <ref type="bibr" target="#b9">[10]</ref> with some of the most up-to-date fuzzy ontology tools, e.g. the fuzzyDL reasoner <ref type="bibr" target="#b1">[2]</ref>. Notably, the resulting system can learn fuzzy OWL DL<ref type="foot" target="#foot_0">3</ref> equivalence axioms from FuzzyOWL 2 ontologies. <ref type="foot" target="#foot_1">4</ref> However, it has been tested only on a toy problem with crisp training examples and does not automatically build fuzzy concrete domains. Lisi and Straccia <ref type="bibr" target="#b13">[14]</ref> present SoftFoil, a logic-based method for learning fuzzy EL inclusion axioms from fuzzy DL-Lite ontologies (note, however, that SoftFoil has not been implemented and tested).</p><p>Contribution of this paper. In this paper, we describe a novel method, named Foil-DL, for learning fuzzy EL(D) inclusion axioms from any crisp DL knowledge base. <ref type="foot" target="#foot_2">5</ref> Similarly to SoftFoil, it adapts the popular rule induction method Foil <ref type="bibr" target="#b17">[18]</ref>. However, Foil-DL differs from SoftFoil mainly in that the latter learns fuzzy EL inclusion axioms from fuzzy DL-Lite ontologies, while the former learns fuzzy EL(D) inclusion axioms from any crisp DL ontology.</p><p>Structure of the paper. The paper is structured as follows. For the sake of self-containment, Section 2 introduces some basic definitions we rely on. Section 3 describes the learning problem and the solution strategy of Foil-DL. Section 4 illustrates some results obtained in a comparative study between Foil-DL and DL-Learner.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Preliminaries</head><p>Mathematical Fuzzy Logic. Fuzzy Logic is the logic of fuzzy sets. A fuzzy set A over a countable crisp set X is a function A : X → [0, 1]. Let A and B be two fuzzy sets. The standard fuzzy set operations conform to (A ∩ B)(x) = min(A(x), B(x)), (A ∪ B)(x) = max(A(x), B(x)) and Ā(x) = 1 − A(x), while the inclusion degree between A and B is typically defined as</p><formula xml:id="formula_0">deg(A, B) = ∑_{x∈X} (A ∩ B)(x) / ∑_{x∈X} A(x)</formula><formula xml:id="formula_1"><label>1</label></formula><p>The trapezoidal, the triangular, the left-shoulder and the right-shoulder functions (Fig. <ref type="figure" target="#fig_0">1</ref>) are typical choices of membership functions. However, one easy and typically satisfactory method to define the membership functions is to uniformly partition the range of values into 5 or 7 fuzzy sets using either trapezoidal functions or triangular functions. The latter is the more frequently used, as it has fewer parameters, and is also the approach we adopt. For instance, the figure below illustrates salary values (bounded by a minimum and maximum value), partitioned uniformly into 5 fuzzy sets.</p><p>In Mathematical Fuzzy Logic <ref type="bibr" target="#b5">[6]</ref>, the convention prescribing that a statement is either true or false is changed: the truth of a statement is a matter of degree measured on an ordered scale that is no longer {0, 1}, but e.g. [0, 1]. This degree is called the degree of truth of the logical statement φ in the interpretation I.</p><formula xml:id="formula_2">[Table 1 — syntax and semantics of the ALC constructs:]
atomic concept A : A I ⊆ ∆ I
role R : R I ⊆ ∆ I × ∆ I
individual a : a I ∈ ∆ I
concept negation ¬C : ∆ I \ C I
concept intersection C1 ⊓ C2 : C1 I ∩ C2 I
concept union C1 ⊔ C2 : C1 I ∪ C2 I
value restriction ∀R.C : {x ∈ ∆ I | ∀y (x, y) ∈ R I → y ∈ C I }
existential restriction ∃R.C : {x ∈ ∆ I | ∃y (x, y) ∈ R I ∧ y ∈ C I }
general concept inclusion C1 ⊑ C2 : C1 I ⊆ C2 I
concept assertion a : C : a I ∈ C I
role assertion (a, b) : R : (a I , b I ) ∈ R I</formula><p>
For us, fuzzy statements have the form ⟨φ, α⟩, where α ∈ (0, 1] and φ is a statement, encoding that the degree of truth of φ is greater than or equal to α.</p><p>A fuzzy interpretation I maps each atomic statement p i into [0, 1] and is then extended inductively to all statements:</p><formula xml:id="formula_3">I(φ ∧ ψ) = I(φ) ⊗ I(ψ), I(φ ∨ ψ) = I(φ) ⊕ I(ψ), I(φ → ψ) = I(φ) ⇒ I(ψ), I(¬φ) = ⊖I(φ), I(∃x.φ(x)) = sup y∈∆ I I(φ(y)), I(∀x.φ(x)) = inf y∈∆ I I(φ(y))</formula><p>, where ∆ I is the domain of I, and ⊗, ⊕, ⇒, and ⊖ are so-called t-norms, t-conorms, implication functions, and negation functions, respectively, which extend the Boolean conjunction, disjunction, implication, and negation, respectively, to the fuzzy case. One usually distinguishes three different logics, namely Lukasiewicz, Gödel, and Product logics <ref type="bibr" target="#b5">[6]</ref>. Any other continuous t-norm can be obtained from them. The combination functions in Gödel logic are defined as follows:</p><formula xml:id="formula_4">a ⊗ b = min(a, b), a ⊕ b = max(a, b), a ⇒ b = 1 if a ≤ b, b otherwise, ⊖a = 1 if a = 0, 0 otherwise .<label>(2)</label></formula><p>The notions of satisfiability and logical consequence are defined in the standard way, where a fuzzy interpretation I satisfies a fuzzy statement ⟨φ, α⟩, or I is a model of ⟨φ, α⟩, denoted I |= ⟨φ, α⟩, iff I(φ) ≥ α.</p><p>Fuzzy Description Logics. Description Logics (DLs) are a family of decidable First Order Logic (FOL) fragments that allow for the specification of structured knowledge in terms of classes (concepts), instances (individuals), and binary relations between instances (roles) <ref type="bibr" target="#b0">[1]</ref>. Complex concepts (denoted with C) can be defined from atomic concepts (A) and roles (R) by means of the constructors available for the DL at hand. The set of constructors for the ALC DL is reported in Table <ref type="table" target="#tab_0">1</ref>. 
A DL Knowledge Base (KB) K = ⟨T , A⟩ is a pair where T is the so-called Terminological Box (TBox) and A is the so-called Assertional Box (ABox). The TBox is a finite set of General Concept Inclusion (GCI) axioms which represent is-a relations between concepts, whereas the ABox is a finite set of assertions (or facts) that represent instance-of relations between individuals (resp., pairs of individuals) and concepts (resp., roles). Thus, when a DL-based ontology language is adopted, an ontology is nothing else than a TBox, and a populated ontology corresponds to a whole KB (i.e., encompassing also an ABox).</p><p>The semantics of DLs can be defined directly with set-theoretic formalizations (as shown in Table <ref type="table" target="#tab_0">1</ref> for the case of ALC) or through a mapping to FOL (as shown in <ref type="bibr" target="#b2">[3]</ref>). An interpretation I = (∆ I , • I ) for a DL KB consists of a domain ∆ I and a mapping function • I . For instance, I maps a concept C into a set of individuals C I ⊆ ∆ I , i.e. I maps C into a function C I : ∆ I → {0, 1} (either an individual belongs to the extension of C or does not belong to it). Under the Unique Names Assumption (UNA) <ref type="bibr" target="#b18">[19]</ref>, individuals are mapped to elements of ∆ I such that a I ≠ b I if a ≠ b. However, the UNA does not hold by default in DLs. An interpretation I is a model of a KB K iff it satisfies all axioms and assertions in T and A. In DLs a KB represents many different interpretations, i.e. all its models. This is coherent with the OWA that holds in FOL semantics. A DL KB is satisfiable if it has at least one model.</p><p>The main reasoning task for a DL KB K is the consistency check, which tries to prove the satisfiability of K. Another well-known reasoning service in DLs is the instance check, i.e., the check of whether an ABox assertion is a logical implication of a DL KB. 
A more sophisticated version of instance check, called instance retrieval, retrieves, for a DL KB K, all (ABox) individuals that are instances of the given (possibly complex) concept expression C, i.e., all those individuals a such that K entails that a is an instance of C.</p><p>Concerning fuzzy DLs, several fuzzy extensions of DLs have been proposed (see the survey in <ref type="bibr" target="#b14">[15]</ref>). We recap here the fuzzy variant of the DL ALC(D) <ref type="bibr" target="#b20">[21]</ref>.</p><p>A fuzzy concrete domain or fuzzy datatype theory D = ⟨∆ D , • D ⟩ consists of a datatype domain ∆ D and a mapping • D that assigns to each data value an element of ∆ D , and to every n-ary datatype predicate d an n-ary fuzzy relation over ∆ D . We restrict to unary datatypes, as usual in fuzzy DLs. Therefore, • D indeed maps each datatype predicate into a function from ∆ D to [0, 1]. Typical examples of datatype predicates d are the well-known membership functions</p><formula xml:id="formula_5">d := ls(a, b) | rs(a, b) | tri(a, b, c) | trz(a, b, c, d) | ≥ v | ≤ v | = v ,</formula><p>where e.g. ls(a, b) is the left-shoulder membership function and ≥ v corresponds to the crisp set of data values that are greater than or equal to the value v.</p><p>In ALC(D), each role is either an object property (denoted with R) or a datatype property (denoted with T ). Complex concepts are built according to the following syntactic rules:</p><formula xml:id="formula_6">C → ⊤ | ⊥ | A | C1 ⊓ C2 | C1 ⊔ C2 | ¬C | C1 → C2 | ∃R.C | ∀R.C | ∃T.d | ∀T.d . (3)</formula><p>Axioms in a fuzzy ALC(D) KB K = ⟨T , A⟩ are graded, e.g. a GCI is of the form ⟨C 1 ⊑ C 2 , α⟩ (i.e. C 1 is a sub-concept of C 2 to degree at least α). We may omit the truth degree α of an axiom; in this case α = 1 is assumed.</p><p>Concerning the semantics, let us fix a fuzzy logic. In fuzzy DLs, I maps C into a function C I : ∆ I → [0, 1] and, thus, an individual belongs to the extension of C to some degree in [0, 1], i.e. 
C I is a fuzzy set. Specifically, a fuzzy interpretation is a pair I = (∆ I , • I ) consisting of a nonempty (crisp) set ∆ I (the domain) and of a fuzzy interpretation function • I that assigns: (i) to each atomic concept A a function A I :</p><formula xml:id="formula_7">∆ I → [0, 1]; (ii) to each object property R a function R I : ∆ I × ∆ I → [0, 1]; (iii) to each data type property T a function T I : ∆ I × ∆ D → [0, 1]; (iv) to each individual a an element a I ∈ ∆ I ; and (v) to each concrete value v an element v I ∈ ∆ D .</formula><p>Now, • I is extended to concepts as specified below (where x ∈ ∆ I ):</p><formula xml:id="formula_8">⊥ I (x) = 0, ⊤ I (x) = 1, (C ⊓ D) I (x) = C I (x) ⊗ D I (x), (C ⊔ D) I (x) = C I (x) ⊕ D I (x), (¬C) I (x) = ⊖C I (x), (C → D) I (x) = C I (x) ⇒ D I (x), (∀R.C) I (x) = inf y∈∆ I {R I (x, y) ⇒ C I (y)}, (∃R.C) I (x) = sup y∈∆ I {R I (x, y) ⊗ C I (y)}, (∀T.d) I (x) = inf y∈∆ D {T I (x, y) ⇒ d D (y)}, (∃T.d) I (x) = sup y∈∆ D {T I (x, y) ⊗ d D (y)} .</formula><p>Hence, for every concept C we get a function</p><formula xml:id="formula_9">C I : ∆ I → [0, 1].</formula><p>The satisfiability of axioms is then defined by the following conditions:</p><formula xml:id="formula_10">(i) I satisfies an axiom ⟨a:C, α⟩ if C I (a I ) ≥ α; (ii) I satisfies an axiom ⟨(a, b):R, α⟩ if R I (a I , b I ) ≥ α; (iii) I satisfies an axiom ⟨C ⊑ D, α⟩ if (C ⊑ D) I ≥ α where (C ⊑ D) I = inf x∈∆ I {C I (x) ⇒ D I (x)}. I is a model of K iff I satisfies each axiom in K. We say that K entails an axiom ⟨τ, α⟩, denoted K |= ⟨τ, α⟩, if every model of K satisfies ⟨τ, α⟩. The best entailment degree of τ w.r.t. K, denoted bed(K, τ ), is defined as bed(K, τ ) = sup{α | K |= ⟨τ, α⟩} .<label>(4)</label></formula><p>3 Learning fuzzy EL(D) axioms with Foil-DL</p></div>
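The Gödel connectives of Eq. (2), the membership functions ls, rs, tri and trz of Section 2, and the inclusion degree of Eq. (1) are simple enough to sketch in code. The following Python sketch is our own illustration (all names are assumptions, not part of any fuzzy DL reasoner):

```python
# Illustrative sketch: Goedel connectives (Eq. 2), the standard membership
# functions ls/rs/tri/trz, and the inclusion degree of Eq. (1).

def t_norm(a, b):          # Goedel t-norm: a (x) b = min(a, b)
    return min(a, b)

def t_conorm(a, b):        # Goedel t-conorm: a (+) b = max(a, b)
    return max(a, b)

def implication(a, b):     # Goedel implication: 1 if a <= b, else b
    return 1.0 if a <= b else b

def negation(a):           # Goedel negation: 1 if a == 0, else 0
    return 1.0 if a == 0 else 0.0

def ls(a, b):              # left-shoulder membership function
    return lambda x: 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def rs(a, b):              # right-shoulder membership function
    return lambda x: 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def tri(a, b, c):          # triangular membership function
    return lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def trz(a, b, c, d):       # trapezoidal membership function
    return lambda x: max(0.0, min((x - a) / (b - a), 1.0, (d - x) / (d - c)))

def deg(A, B, X):
    """Inclusion degree of Eq. (1): sum of (A and B)(x) over sum of A(x)."""
    num = sum(t_norm(A(x), B(x)) for x in X)
    den = sum(A(x) for x in X)
    return num / den if den > 0 else 0.0
```

For instance, `deg(A, A, X)` evaluates to 1 for any non-empty fuzzy set A, matching the intuition that every fuzzy set is fully included in itself.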
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">The problem statement</head><p>The problem considered in this paper concerns the automated induction of fuzzy EL(D) <ref type="foot" target="#foot_3">6</ref> GCI axioms providing a sufficient condition for a given atomic concept H. It can be cast as a rule learning problem, provided that positive and negative examples of H are available. This problem can be formalized as follows.</p><p>Given:</p><p>-a consistent crisp DL KB K = ⟨T , A⟩ (the background theory); -an atomic concept H (the target concept); -a set E = E + ∪ E − of crisp concept assertions labelled as either positive or negative examples for H (the training set); -a set L H of fuzzy EL(D) GCI axioms (the language of hypotheses) the goal is to find a set H ⊂ L H (a hypothesis) such that: ∀e ∈ E + , K ∪ H |= e (completeness), and ∀e ∈ E − , K ∪ H ⊭ e (consistency).</p><p>Here we assume that K ∩ E = ∅. Also, the language L H is given implicitly by means of syntactic restrictions over a given alphabet. In particular, the alphabet underlying L H is a subset of the alphabet for the language L K of the background theory. However, L H differs from L K as for the form of axioms. Please note that we do not make any specific assumption about the DL the background theory refers to. Two further restrictions hold naturally. One is that K ⊭ E + since, otherwise, H would not be necessary to explain E + . The other is that K ∪ H ⊭ ⊥, which means that K ∪ H is a consistent theory, i.e. has a model. An axiom φ ∈ L H covers an example e ∈ E iff K ∪ {φ} |= e.</p><p>The training examples. Given the target concept H, the training set E consists of concept assertions of the form a:H, where a is an individual occurring in K. Note that both K and E are crisp. Also, E is split into E + and E − . Note that, under OWA, E − consists of all those individuals which can be proved to be instances of ¬H. 
On the other hand, under CWA, E − is the collection of individuals that cannot be proved to be instances of H.</p><p>The language of hypotheses. Given the target concept H, the hypotheses to be induced are fuzzy GCIs of the form</p><formula xml:id="formula_12">B ⊑ H ,<label>(5)</label></formula><p>where the left-hand side is defined according to the following EL(D) syntax</p><formula xml:id="formula_13">B −→ ⊤ | A | ∃R.B | ∃T.d | B 1 ⊓ B 2 .<label>(6)</label></formula><p>The language L H generated by this syntax is potentially infinite due, e.g., to the nesting of existential restrictions yielding complex concept expressions such as ∃R 1 .(∃R 2 . . . . (∃R n .(C)) . . .). L H is made finite by imposing further restrictions on the generation process such as the maximal number of conjuncts and the depth of existential nesting allowed in the left-hand side. Also, note that the learnable GCIs do not have an explicit truth degree. However, as we shall see later on, once we have learned a fuzzy GCI of the form (<ref type="formula" target="#formula_12">5</ref>), we attach to it a confidence degree that is obtained by means of the cf function (see Eq. (<ref type="formula" target="#formula_16">8</ref>)). Finally, note that the syntactic restrictions of Eq. (<ref type="formula" target="#formula_13">6</ref>) w.r.t. Eq. (<ref type="formula">3</ref>) allow for a straightforward translation of the inducible axioms into rules of the kind "if x is a C 1 and . . . and x is a C n then x is an H", which corresponds to the usual pattern in fuzzy rule induction (in our case, B ⊑ H is seen as a rule "if B then H").</p></div>
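The rule reading of a learned GCI can be illustrated with a small sketch. The tuple encoding and the `render`/`as_rule` names below are our own assumptions, not Foil-DL data structures; the sketch only shows how a left-hand side built from the grammar of Eq. (6) maps to the "if x is a C1 and ... then x is an H" pattern:

```python
# Illustrative sketch: EL(D) concepts of Eq. (6) encoded as nested tuples,
# rendered in the "if ... then ..." rule reading described in the text.

def render(concept):
    """Render a concept: ('top',), ('atom', A), ('exists', R, C),
    ('exists_d', T, d), or ('and', C1, C2)."""
    kind = concept[0]
    if kind == "top":
        return "Thing"
    if kind == "atom":
        return concept[1]
    if kind == "exists":                     # existential role restriction
        return f"{concept[1]} some ({render(concept[2])})"
    if kind == "exists_d":                   # existential datatype restriction
        return f"{concept[1]} some ({concept[2]})"
    if kind == "and":                        # conjunction of two concepts
        return f"{render(concept[1])} and {render(concept[2])}"
    raise ValueError(f"unknown constructor: {kind}")

def as_rule(body, head):
    """Read the GCI 'body subclass-of head' as a fuzzy rule."""
    return f"if x is a {render(body)} then x is a {head}"
```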
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">The solution strategy</head><p>The solution proposed for the learning problem defined in Section 3.1 is inspired by Foil, a popular ILP algorithm for learning sets of rules which performs a greedy search in order to maximise a gain function <ref type="bibr" target="#b17">[18]</ref>. In Foil-DL, the learning strategy of Foil (i.e., the so-called sequential covering approach) is kept. The function Learn-Sets-of-Axioms (reported in Figure <ref type="figure">2</ref>) carries on inducing axioms until all positive examples are covered. When an axiom is induced (step 3.), the positive examples covered by the axiom (step 5.) are removed from E (step 6.). In order to induce an axiom, the function Learn-One-Axiom (reported in Figure <ref type="figure" target="#fig_3">3</ref>) starts with the most general axiom (i.e., ⊤ ⊑ H) and specializes it by applying the refinement rules implemented in the function Refine (step 7.). The iterated specialization of the axiom continues until the axiom does not cover any negative example and its confidence degree is greater than a fixed threshold (θ). The confidence degree of axioms being generated with Refine allows for evaluating the information gain obtained on each refinement step by calling the function Gain (step 9.).</p><p>Due to the peculiarities of the language of hypotheses in Foil-DL, necessary changes are made to Foil as concerns the functions Refine and Gain. Details about these novel features are provided in the next two subsections.</p></div>
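The sequential-covering loop of Learn-Sets-of-Axioms can be sketched in a few lines. This is our own schematic rendering, not the Foil-DL code: `learn_one_axiom` and `covers` are assumed callables standing in for the functions of Figures 2 and 3:

```python
# Illustrative sketch of the sequential covering strategy (Learn-Sets-of-Axioms):
# induce axioms until all positive examples are covered, removing the positives
# covered by each newly induced axiom.

def learn_sets_of_axioms(positives, negatives, learn_one_axiom, covers):
    """positives/negatives: example lists; learn_one_axiom(pos, neg) -> axiom
    or None; covers(axiom, e) -> bool."""
    axioms, uncovered = [], list(positives)
    while uncovered:
        axiom = learn_one_axiom(uncovered, negatives)       # induce one axiom
        if axiom is None:                                   # no further progress
            break
        axioms.append(axiom)
        covered = [e for e in uncovered if covers(axiom, e)]
        if not covered:                                     # avoid looping forever
            break
        uncovered = [e for e in uncovered if e not in covered]
    return axioms
```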
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The refinement operator. The function Refine implements a specialization operator with the following refinement rules:</head><p>Add A : adds an atomic concept A;
Add ∃R.⊤ : adds a complex concept ∃R.⊤ by existential role restriction;
Add ∃T.d : adds a complex concept ∃T.d by existential role restriction;
Subst A : replaces an atomic concept A with another atomic concept A′ s.t. A′ ⊑ A.</p><p>At each refinement step (i.e. at each call of Refine), the rules are applied first to the left-hand side of the axiom being specialized and then recursively to the range of all the conjuncts defined with existential role restriction. For example, let us consider that H is the target concept, that A, A′, B, R, R′, T are concepts and properties occurring in K, and that A′ ⊑ A holds in K. Under these assumptions, the axiom ∃R.B ⊑ H is specialized into axioms such as A ⊓ ∃R.B ⊑ H, ∃R′.⊤ ⊓ ∃R.B ⊑ H, ∃T.d ⊓ ∃R.B ⊑ H, ∃R.(B ⊓ A) ⊑ H, ∃R.(B ⊓ ∃R′.⊤) ⊑ H and ∃R.(B ⊓ ∃T.d) ⊑ H. Moreover, Refine exploits the knowledge encoded in K to reduce the number of candidate hypotheses. For example, if the concept B is the range of R′ in K, the function Refine adds the conjunct ∃R′.B instead of ∃R′.⊤. One such "informed" refinement operator is able to perform "cautious" big steps in the search space.</p><p>Note that a specialization operator reduces the number of examples covered by a GCI. More precisely, the aim of a refinement step is to reduce the number of covered negative examples, while still keeping some covered positive examples. Since learned GCIs cover only positive examples, K will remain consistent after the addition of a learned GCI.</p><p>The heuristic. 
The function Gain implements an information-theoretic criterion for selecting the best candidate at each refinement step according to the following formula:</p><formula xml:id="formula_14">Gain(φ′, φ) = p * (log 2 (cf (φ′)) − log 2 (cf (φ))) ,<label>(7)</label></formula><p>where p is the number of positive examples covered by the axiom φ that are still covered by φ′. Thus, the gain is positive iff φ′ is more informative in the sense of Shannon's information theory, i.e. iff the confidence degree (cf ) increases. If there are some refinements which increase the confidence degree, the function Gain tends to favour those that offer the best compromise between the confidence degree and the number of examples covered. Here, cf for an axiom φ of the form (<ref type="formula" target="#formula_12">5</ref>) is computed as a sort of fuzzy set inclusion degree (see Eq. (<ref type="formula" target="#formula_1">1</ref>)) between the fuzzy set represented by the concept B and the (crisp) set represented by the concept H. More formally:</p><formula xml:id="formula_16">cf (φ) = cf (B ⊑ H) = ∑_{a ∈ Ind^+_H(A)} bed(K, a:B) / |Ind_H(A)|<label>(8)</label></formula><p>where Ind^+_H(A) (resp., Ind_H(A)) is the set of individuals occurring in A and involved in E^+_φ (resp., E^+_φ ∪ E^−_φ) such that bed(K, a:B) &gt; 0. We remind the reader that bed(K, a:B) denotes the best entailment degree of the concept assertion a:B w.r.t. K as defined in Eq. (<ref type="formula" target="#formula_10">4</ref>). Note that for individuals a ∈ Ind^+_H(A), K |= a:H holds and, thus, bed(K, a:(B ⊓ H)) = bed(K, a:B). Also, note that, even if K is crisp, the possible occurrence of fuzzy concrete domains in expressions of the form ∃T.d in B may imply that both bed(K, B ⊑ H) ∉ {0, 1} and bed(K, a:B) ∉ {0, 1}.</p></div>
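Under the stated definitions, Eq. (7) and Eq. (8) amount to a few lines of arithmetic. The sketch below is illustrative only: it assumes the bed degrees and coverage counts have been computed elsewhere (e.g. by a DL reasoner), and the function names are our own:

```python
# Illustrative sketch of the heuristic: the gain of Eq. (7) and the
# confidence degree of Eq. (8), with degrees and counts assumed precomputed.
import math

def gain(cf_new, cf_old, p):
    """Eq. (7): p * (log2 cf(phi') - log2 cf(phi)).
    Positive iff the confidence degree increases (cf_new > cf_old)."""
    return p * (math.log2(cf_new) - math.log2(cf_old))

def cf(bed_degrees_pos, n_covered_total):
    """Eq. (8): sum of bed(K, a:B) over covered positives with bed > 0,
    divided by the number of covered individuals (positive and negative)."""
    return sum(bed_degrees_pos) / n_covered_total
```

For example, a refinement that doubles the confidence degree while keeping p = 4 positives covered yields a gain of 4, whereas halving the confidence yields −4.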
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">The implementation</head><p>A variant of Foil-DL has been implemented in the fuzzyDL-Learner<ref type="foot" target="#foot_4">7</ref> system and provided with two GUIs: one is a stand-alone Java application, the other is a tab widget plug-in for the ontology editor Protégé<ref type="foot" target="#foot_5">8</ref> (release 4.2).</p><p>Several implementation choices have been made. Notably, fuzzy GCIs in L H are interpreted under Gödel semantics (see Eq. (<ref type="formula" target="#formula_4">2</ref>)). However, since K and E are represented in crisp DLs, we have used a classical DL reasoner, together with specialised code, to compute the confidence degree of fuzzy GCIs. Therefore, the system relies on the services of DL reasoners to solve all the deductive inference problems necessary for Foil-DL to work, namely instance retrieval, instance check and subclasses retrieval. In particular, the sets Ind^+_H(A) and Ind_H(A) are computed by posing instance retrieval problems to the DL reasoner. Conversely, bed(K, a:∃T.d) can be computed from the derived T -fillers v of a by applying the fuzzy membership function of d to v. The examples covered by a GCI, and, thus, the entailment tests in Learn-Sets-of-Axioms and Learn-One-Axiom, have been determined in a similar way.</p><p>The implementation of Foil-DL features several optimizations w.r.t. the solution strategy presented in Section 3.2. Notably, the search in the hypothesis space can be optimized by enabling a backtracking mode. This option makes it possible to overcome one of the main limits of Foil, i.e. the sequential covering strategy. Because it performs a greedy search, formulating a sequence of rules without backtracking, Foil is not guaranteed to find the smallest or best set of rules that explains the training examples. Also, learning rules one by one could lead to less and less interesting rules. 
To reduce the risk of a suboptimal choice at any search step, the greedy search can be replaced in Foil-DL by a beam search which maintains a list of k best candidates at each step instead of a single best candidate. Additionally, to guarantee termination, we provide two parameters to limit the search space: namely, the maximal number of conjuncts and the maximal depth of existential nesting allowed in a fuzzy GCI. In fact, the computation may end without covering all positive examples. </p></div>
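The beam-search variant just described can be sketched generically. In the sketch below, which is our own illustration, `refine` and `score` stand in for Refine and the confidence-based heuristic, and `k` is the beam width; a bound on the number of steps plays the role of the search-space limits mentioned above:

```python
# Illustrative sketch: greedy search replaced by a beam search that keeps
# the k best candidates per refinement step instead of a single one.

def beam_search(start, refine, score, k=3, max_steps=10):
    """refine(c) -> list of refinements; score(c) -> number (higher is better)."""
    beam = [start]
    best = start
    for _ in range(max_steps):                 # bound guarantees termination
        candidates = [c for b in beam for c in refine(b)]
        if not candidates:                     # no refinement applicable
            break
        beam = sorted(candidates, key=score, reverse=True)[:k]
        if score(beam[0]) > score(best):       # track the best candidate so far
            best = beam[0]
    return best
```

With k = 1 this degenerates to the greedy search of plain Foil; larger k reduces the risk of committing to a suboptimal refinement.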
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Comparing Foil-DL and DL-Learner</head><p>In this section we report the results of a comparison between Foil-DL and DL-Learner on a very popular learning task in ILP proposed 20 years ago by Ryszard Michalski <ref type="bibr" target="#b15">[16]</ref> and illustrated in Figure <ref type="figure" target="#fig_4">4</ref>. Here, 10 trains are described, out of which 5 are eastbound and 5 are westbound. The aim of the learning problem is to find the discriminating features between these two classes.</p><p>For the purpose of this comparative study, we have considered two slightly different versions, trains2 and trains3, of an ontology encoding the original Trains data set. <ref type="foot" target="#foot_6">9</ref> The former has been adapted from the version distributed with DL-Learner in order to be compatible with Foil-DL. Notably, the target classes EastTrain and WestTrain have become part of the terminology and several class assertion axioms have been added for representing positive and negative examples. The metrics for trains2 are reported in Table <ref type="table" target="#tab_3">2</ref>. The ontology does not encompass any data property. Therefore, no fuzzy concept can be generated when learning GCIs from trains2 with Foil-DL. However, the ontology can be slightly modified in order to test the fuzzy concept generation feature of Foil-DL. Note that in trains2 cars can be classified according to the classes LongCar and ShortCar. Instead of such a crisp classification, we may want a fuzzy classification of cars. This is made possible by removing LongCar and ShortCar (together with the related class assertion axioms) from trains2 and introducing the data property hasLenght with domain Car and range double (together with several data property assertions). 
The resulting ontology, called trains3, presents the metrics reported in Table <ref type="table" target="#tab_3">2</ref>.</p><p>DL-Learner<ref type="foot" target="#foot_7">10</ref> features several algorithms. Among them, the closest to Foil-DL is ELTL since it implements a refinement operator for concept learning in EL <ref type="bibr" target="#b11">[12]</ref>. Conversely, CELOE learns class expressions in the more expressive OWL DL <ref type="bibr" target="#b10">[11]</ref>. Both work only under OWA and deal only with crisp DLs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Results on the ontology trains2</head><p>Trial with Foil-DL. The settings for this experiment allow for the generation of hypotheses with up to 5 conjuncts and 2 levels of existential nesting. The algorithm returns the same GCIs under both OWA and CWA. Note that an important difference between learning in DLs and standard ILP is that the former works under OWA whereas the latter works under CWA. In order to complete the Trains example we would have to introduce definitions and/or assertions to model the closed world. However, the CWA holds naturally in this example, because we have complete knowledge of the world, and thus the knowledge completion was not necessary. This explains the behaviour of Foil-DL, which correctly induces the same hypotheses in spite of the opposite semantic assumptions.</p><p>Trial with ELTL. For the target concept EastTrain, the class expression learned by ELTL is the following:</p><p>EXISTS hasCar.(ClosedCar AND ShortCar) (accuracy: 1.0)</p><p>whereas the following finding has been returned for the target concept WestTrain:</p><p>EXISTS hasCar.LongCar (accuracy: 0.8)</p><p>The latter is not fully satisfactory in terms of example coverage.</p><p>Trial with CELOE. For the target concept EastTrain, CELOE learns several class expressions of which the most accurate is:</p><p>hasCar some (ClosedCar and ShortCar) (accuracy: 1.0)</p><p>whereas, for the target concept WestTrain, the most accurate among the ones found is the following:</p><p>hasCar only (LongCar or OpenCar) (accuracy: 1.0)</p><p>Note that the former coincides with the corresponding result obtained with ELTL, while the latter is a more accurate variant of the corresponding class expression returned by ELTL. 
The increase in example coverage is due to the augmented expressive power of the DL supported in CELOE.</p><p>-hasLenght_low: hasLenght, triangular(23.0,32.0,41.0) -hasLenght_fair: hasLenght, triangular(32.0,41.0,50.0) -hasLenght_high: hasLenght, triangular(41.0,50.0,59.0) -hasLenght_veryhigh: hasLenght, rightShoulder(50.0,59.0) -hasLenght_verylow: hasLenght, leftShoulder(23.0,32.0) Fig. <ref type="figure">5</ref>. Fuzzy concepts derived by Foil-DL from the data property hasLenght.</p></div>
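The fuzzy concepts in Fig. 5 are built from the triangular and shoulder functions of Fig. 1. As an illustrative sketch (the Python function names tri, ls and rs mirror the notation of Fig. 1; the breakpoints are the ones Foil-DL derived for hasLenght), the membership degrees can be computed as:

```python
def tri(a, b, c):
    """Triangular membership tri(a, b, c): 0 outside [a, c], peak 1 at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

def ls(a, b):
    """Left-shoulder ls(a, b): 1 up to a, decreasing linearly to 0 at b."""
    def mu(x):
        if x <= a:
            return 1.0
        if x >= b:
            return 0.0
        return (b - x) / (b - a)
    return mu

def rs(a, b):
    """Right-shoulder rs(a, b): 0 up to a, increasing linearly to 1 at b."""
    def mu(x):
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)
    return mu

# The five fuzzy concepts of Fig. 5, derived from the data property hasLenght
hasLenght_verylow  = ls(23.0, 32.0)
hasLenght_low      = tri(23.0, 32.0, 41.0)
hasLenght_fair     = tri(32.0, 41.0, 50.0)
hasLenght_high     = tri(41.0, 50.0, 59.0)
hasLenght_veryhigh = rs(50.0, 59.0)
```

For instance, a car of length 36.5 belongs to hasLenght_low and hasLenght_fair to degree 0.5 each, illustrating the overlapping partition induced by the uniform fuzzification.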
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Results on the ontology trains3</head><p>Trial with Foil-DL. The outcomes for the target concepts EastTrain and WestTrain remain unchanged when Foil-DL is run on trains3 with the same configuration as in the first trial. Yet, fuzzy concepts are automatically generated by Foil-DL from the data property hasLenght (see Figure <ref type="figure">5</ref>). However, from the viewpoint of discriminant power, these concepts are weaker than the crisp concepts occurring in the ontology. In order to make the fuzzy concepts emerge during the generation of hypotheses, we have appropriately biased the language of hypotheses. In particular, by enabling only the use of object and data properties in L H , Foil-DL returns the following axiom for EastTrain:</p><p>Confidence Axiom 1,000 hasCar some (hasLenght_fair) and hasCar some (hasLenght_veryhigh) and hasCar some (hasLenght_verylow) subclass of EastTrain</p><p>Conversely, for WestTrain, a lighter bias is sufficient to make fuzzy concepts appear in the learned axioms. In particular, by disabling the class 2CarTrain in L H , Foil-DL returns the following axioms:</p><p>Confidence Axiom 1,000 hasCar some (2WheelsCar and 3LoadCar) and hasCar some (3LoadCar and CircleLoadCar) subclass of WestTrain 1,000 hasCar some (0LoadCar) subclass of WestTrain 1,000 hasCar some (JaggedCar) subclass of WestTrain 1,000 hasCar some (2LoadCar and hasLenght_high) subclass of WestTrain 1,000 hasCar some (ClosedCar and hasLenght_fair) subclass of WestTrain</p><p>Trial with ELTL. 
For the target class EastTrain, ELTL returns a class expression which leaves some positive examples uncovered (incomplete hypothesis):</p><p>(EXISTS hasCar.TriangleLoadCar AND EXISTS hasCar.ClosedCar) (accuracy: 0.9)</p><p>whereas, for the target concept WestTrain, it returns an overly general hypothesis which also covers negative examples (inconsistent hypothesis):</p><p>TOP (accuracy: 0.5)</p><p>This poor performance of ELTL on trains3 is due to the low expressivity of EL and to the fact that the classes LongCar and ShortCar, which appeared to be discriminant in the first trial, do not occur in trains3 and thus cannot be used any more for building hypotheses.</p><p>Trial with CELOE. The most accurate class expression found by CELOE for the target concept EastTrain is:</p><p>((not 2CarTrain) and hasCar some ClosedCar) (accuracy: 1.0)</p><p>However, interestingly, CELOE also learns the following class expressions containing classes obtained by numerical restriction from the data property hasLenght:</p><p>hasCar some (ClosedCar and hasLenght &lt;= 48.5) (accuracy: 1.0) hasCar some (ClosedCar and hasLenght &lt;= 40.5) (accuracy: 1.0) hasCar some (ClosedCar and hasLenght &lt;= 31.5) (accuracy: 1.0)</p><p>These "interval classes" stop just one step short of the fuzzification that, conversely, Foil-DL is able to perform. It is acknowledged that using fuzzy sets in place of "interval classes" improves the readability of the induced knowledge about the data. As for the target concept WestTrain, the most accurate class expression among the ones found by CELOE is:</p><p>(2CarTrain or hasCar some JaggedCar) (accuracy: 1.0)</p><p>Once again, the augmented expressivity increases the effectiveness of DL-Learner.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusions and future work</head><p>We have described a novel method, named Foil-DL, which addresses the problem of learning fuzzy EL(D) GCI axioms from crisp DL assertions. The method extends Foil in a twofold direction: from crisp to fuzzy and from rules to GCIs. Notably, vagueness is captured by the definition of confidence degree reported in <ref type="bibr" target="#b7">(8)</ref> and incompleteness is dealt with by adopting the OWA. Also, thanks to the variable-free syntax of DLs, the learnable GCIs are highly understandable by humans and translate easily into natural language sentences. In particular, Foil-DL presents the learned axioms according to the user-friendly presentation style of the Manchester OWL syntax <ref type="foot" target="#foot_8">11</ref> (the same used in Protégé). We would like to stress that Foil-DL provides a different solution from SoftFoil <ref type="bibr" target="#b13">[14]</ref> with respect to the KR framework, the refinement operator and the heuristic. Also, unlike SoftFoil, Foil-DL has been implemented and tested. The experimental results are quite promising and encourage the application of Foil-DL to more challenging real-world problems. Notably, in spite of the low expressivity of EL, Foil-DL has turned out to be robust, mainly due to the refinement operator and to the fuzzification facilities. Note that a fuzzy OWL 2 version of the trains problem (ontology fuzzytrains v1.5.owl) <ref type="foot" target="#foot_9">12</ref> has been developed by Iglesias for testing the fuzzy extension of CELOE proposed in <ref type="bibr" target="#b6">[7]</ref>. However, Foil-DL cannot handle fuzzy OWL 2 constructs such as fuzzy classes obtained by existential restriction of fuzzy datatypes, fuzzy concept assertions, and fuzzy role assertions. 
Therefore, it has been necessary to prepare an ad-hoc ontology (trains3) for comparing Foil-DL and DL-Learner.</p><p>For the future, we intend to conduct a more extensive empirical evaluation of Foil-DL, which could suggest directions for improving the method, e.g., more effective formulations of the information gain function and the refinement operator, as well as of the search strategy and the halting conditions employed in Learn-One-Axiom. It could also be interesting to analyse the impact of the different fuzzy logics on the learning process. Finally, we shall investigate learning fuzzy GCI axioms from FuzzyOWL 2 ontologies, by coupling the learning algorithm with the fuzzyDL reasoner, instead of learning from crisp OWL 2 data by using a classical DL reasoner.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. (a) Trapezoidal function trz (a, b, c, d), (b) triangular function tri(a, b, c), (c) left-shoulder function ls(a, b), and (d) right-shoulder function rs(a, b).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>The trapezoidal function (Fig.1 (a)), the triangular function (Fig.1 (b)), the left-shoulder function (Fig.1 (c)), and the right-shoulder function (Fig.1 (d)) are frequently used to specify membership functions of fuzzy sets. Although fuzzy sets have a greater expressive power than classical crisp sets, their usefulness depends critically on the capability to construct appropriate membership functions for various given concepts in different contexts. The problem of constructing meaningful membership functions is a difficult one (see, e.g., <ref type="bibr" target="#b7">[8,</ref> Chapter 10]).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>-</head><label></label><figDesc>A ∃R.B H, B ∃R.B H, A ∃R.B H; -∃R . ∃R.B H, ∃T.d ∃R.B H; -∃R.(B A) H, ∃R.(B A ) H; -∃R.(B ∃R. ) H, ∃R.(B ∃R . ) H, ∃R.(B ∃T.d) H. The application of the refinement rules is not blind. It takes the background theory into account in order to avoid the generation of redundant or useless refinements. function Learn-One-Axiom(K, H, E + , E − , LH): φ begin 1. B := ⊤; 2. φ := B ⊑ H; 3. E − φ := E − ; 4. while cf (φ) &lt; θ or E − φ ≠ ∅ do 5.</figDesc></figure>
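The visible steps of Learn-One-Axiom follow the standard Foil inner loop: start from the most general GCI ⊤ ⊑ H and specialize its body until the confidence threshold θ is reached and no negative example is covered. Below is a minimal sketch under that reading; refine, cf, covers, top and theta are hypothetical stand-ins for the paper's refinement operator, confidence degree (8), coverage test and threshold, and the greedy choice by cf simplifies the information-gain heuristic:

```python
def learn_one_axiom(refine, cf, covers, neg, theta, top, max_steps=100):
    """Greedily specialize the body of a GCI body-subclass-of-H, starting
    from the most general body (top), until confidence reaches theta and
    no negative example is covered (sketch of Fig. 3)."""
    body = top                       # 1-2. phi := TOP subclass of H
    covered_neg = set(neg)           # 3. E-_phi := E-
    for _ in range(max_steps):       # 4. while cf(phi) < theta or E-_phi != {}
        if cf(body) >= theta and not covered_neg:
            break
        candidates = refine(body)    # 5. generate refinements of the body
        if not candidates:           # no refinement left: give up
            break
        body = max(candidates, key=cf)  # greedy pick (stand-in for gain)
        covered_neg = {e for e in covered_neg if covers(body, e)}
    return body
```

Each iteration strictly specializes the body, so coverage of negative examples can only shrink, which is why the loop terminates on consistent data.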
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Foil-DL: Learning one GCI axiom.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Michalski's example of eastbound trains (left) and westbound trains (right) (illustration taken from [16]).</figDesc><graphic coords="11,186.82,115.83,238.88,90.83" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Syntax and semantics of constructs for the ALC DL.</figDesc><table><row><cell>bottom (resp. top) concept ⊥ (resp. ⊤) ∅ (resp. ∆ I ) atomic concept</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>function Learn-Sets-of-Axioms(K, H, E + , E − , LH): H</figDesc><table><row><cell>begin</cell></row><row><cell>1. H := ∅; 2. while E + ≠ ∅ do 3. φ := Learn-One-Axiom(K, H, E + , E − , LH); 4. H := H ∪ {φ}; 5. E + φ := {e ∈ E + | K ∪ {φ} |= e}; 6. E + := E + \ E + φ ; 7. endwhile</cell></row><row><cell>8. return H end</cell></row><row><cell>Fig. 2. Foil-DL: Learning a set of GCI axioms.</cell></row></table></figure>
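The covering loop of Learn-Sets-of-Axioms (Fig. 2) can be sketched as follows; here covers is a hypothetical stand-in for the entailment test K ∪ {φ} |= e, and learn_one_axiom abstracts the inner routine of Fig. 3:

```python
def learn_sets_of_axioms(learn_one_axiom, covers, pos, neg):
    """Sequential covering (sketch of Fig. 2): learn one GCI at a time and
    remove the positive examples it covers, until none remain uncovered."""
    hypotheses = []                                   # 1. H := {}
    pos = set(pos)
    while pos:                                        # 2. while E+ != {} do
        phi = learn_one_axiom(hypotheses, pos, neg)   # 3. learn one axiom
        hypotheses.append(phi)                        # 4. H := H u {phi}
        covered = {e for e in pos if covers(phi, e)}  # 5. E+_phi
        if not covered:                               # guard: axiom covers nothing
            break
        pos -= covered                                # 6. E+ := E+ \ E+_phi
    return hypotheses                                 # 8. return H
```

Note that the negative examples are never removed: every learned axiom must be consistent with all of E−, which is enforced inside Learn-One-Axiom.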
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2 .</head><label>2</label><figDesc>Ontology metrics for trains2.owl and trains3.owl according to Protégé.</figDesc><table><row><cell></cell><cell cols="6"># logical axioms # classes # object prop. # data prop. # individuals DL expressivity</cell></row><row><cell>trains2 trains3</cell><cell>345 343</cell><cell>32 30</cell><cell>5 5</cell><cell>0 1</cell><cell>50 50</cell><cell>ALCO ALCO(D)</cell></row><row><cell cols="7">these restrictions, the GCI axioms learned by Foil-DL for the target concept EastTrain are:</cell></row><row><cell cols="2">Confidence Axiom</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>1,000</cell><cell cols="5">3CarTrain and hasCar some (2LoadCar) subclass of EastTrain</cell><cell></cell></row><row><cell>1,000</cell><cell cols="5">3CarTrain and hasCar some (3WheelsCar) subclass of EastTrain</cell><cell></cell></row><row><cell>1,000</cell><cell cols="4">hasCar some (ElipseShapeCar) subclass of EastTrain</cell><cell></cell><cell></cell></row><row><cell>1,000</cell><cell cols="4">hasCar some (HexagonLoadCar) subclass of EastTrain</cell><cell></cell><cell></cell></row><row><cell cols="7">whereas the following GCI axioms are returned by Foil-DL for WestTrain:</cell></row><row><cell cols="2">Confidence Axiom</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>1,000</cell><cell cols="2">2CarTrain subclass of WestTrain</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>1,000</cell><cell cols="3">hasCar some (JaggedCar) subclass of WestTrain</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">http://www.w3.org/TR/2009/REC-owl2-overview-20091027/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_1">http://www.straccia.info/software/FuzzyOWL</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_2">DL stands for any DL.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_3">EL(D) is a fragment of ALC(D).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_4">http://straccia.info/software/FuzzyDL-Learner</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_5">http://protege.stanford.edu/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_6">http://archive.ics.uci.edu/ml/datasets/Trains</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_7">http://dl-learner.org/Projects/DLLearner</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="11" xml:id="foot_8">http://www.w3.org/TR/owl2-manchester-syntax/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="12" xml:id="foot_9">Available at http://wiki.aksw.org/Projects/DLLearner/fuzzyTrains.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m">The Description Logic Handbook: Theory, Implementation and Applications</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Baader</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Calvanese</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>McGuinness</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Nardi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Patel-Schneider</surname></persName>
		</editor>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
	<note>2nd ed.</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">fuzzyDL: An expressive fuzzy description logic reasoner</title>
		<author>
			<persName><forename type="first">F</forename><surname>Bobillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Straccia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int. Conf. on Fuzzy Systems</title>
				<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2008">2008. 2008</date>
			<biblScope unit="page" from="923" to="930" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">On the relative expressiveness of description logics and predicate logics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Borgida</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence Journal</title>
		<imprint>
			<biblScope unit="volume">82</biblScope>
			<biblScope unit="page" from="353" to="367" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">On the (un)decidability of fuzzy description logics under Lukasiewicz t-norm</title>
		<author>
			<persName><forename type="first">M</forename><surname>Cerami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Straccia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Sciences</title>
		<imprint>
			<biblScope unit="volume">227</biblScope>
			<biblScope unit="page" from="1" to="21" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Possibility theory, probability theory and multiple-valued logics: A clarification</title>
		<author>
			<persName><forename type="first">D</forename><surname>Dubois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Prade</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of Mathematics and Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">1-4</biblScope>
			<biblScope unit="page" from="35" to="66" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Metamathematics of Fuzzy Logic</title>
		<author>
			<persName><forename type="first">P</forename><surname>Hájek</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>Kluwer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Towards integrating fuzzy logic capabilities into an ontology-based inductive logic programming framework</title>
		<author>
			<persName><forename type="first">J</forename><surname>Iglesias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 11th Int. Conf. on Intelligent Systems Design and Applications</title>
				<meeting>of the 11th Int. Conf. on Intelligent Systems Design and Applications</meeting>
		<imprint>
			<publisher>IEEE Press</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Fuzzy sets and fuzzy logic: theory and applications</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">J</forename><surname>Klir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yuan</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1995">1995</date>
			<publisher>Prentice-Hall, Inc</publisher>
			<pubPlace>Upper Saddle River, NJ, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Formulating description logic learning as an inductive logic programming task</title>
		<author>
			<persName><forename type="first">S</forename><surname>Konstantopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Charalambidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 19th IEEE Int. Conf. on Fuzzy Systems</title>
				<meeting>of the 19th IEEE Int. Conf. on Fuzzy Systems</meeting>
		<imprint>
			<publisher>IEEE Press</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">DL-Learner: Learning Concepts in Description Logics</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="2639" to="2642" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Class expression learning for ontology engineering</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Auer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bühmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tramp</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Web Semantics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="71" to="81" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Ideal Downward Refinement in the EL Description Logic</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Haase</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ILP</title>
		<title level="s">Revised Papers. Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">L</forename><surname>De Raedt</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009. 2010</date>
			<biblScope unit="volume">5989</biblScope>
			<biblScope unit="page" from="73" to="87" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A formal characterization of concept learning in description logics</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">Proc. of the 2012 Int. Workshop on Description Logics. CEUR Workshop Proceedings</title>
				<editor>
			<persName><forename type="first">Y</forename><surname>Kazakov</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Lembo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Wolter</surname></persName>
		</editor>
		<meeting>of the 2012 Int. Workshop on Description Logics. CEUR Workshop eedings</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">846</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A logic-based computational method for the automated induction of fuzzy ontology axioms</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Lisi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Straccia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fundamenta Informaticae</title>
		<imprint>
			<biblScope unit="volume">124</biblScope>
			<biblScope unit="page" from="1" to="17" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Managing Uncertainty and Vagueness in Description Logics for the Semantic Web</title>
		<author>
			<persName><forename type="first">T</forename><surname>Lukasiewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Straccia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Web Semantics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="291" to="308" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Pattern recognition as a rule-guided inductive inference</title>
		<author>
			<persName><forename type="first">R</forename><surname>Michalski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="349" to="361" />
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A faithful integration of description logics with logic programming</title>
		<author>
			<persName><forename type="first">B</forename><surname>Motik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rosati</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 20th Int. Joint Conf. on Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Veloso</surname></persName>
		</editor>
		<meeting>of the 20th Int. Joint Conf. on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="477" to="482" />
		</imprint>
	</monogr>
	<note>IJCAI 2007</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Learning logical definitions from relations</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Quinlan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="239" to="266" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Equality and domain closure in first order databases</title>
		<author>
			<persName><forename type="first">R</forename><surname>Reiter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the ACM</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="235" to="249" />
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Reasoning within fuzzy description logics</title>
		<author>
			<persName><forename type="first">U</forename><surname>Straccia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Artificial Intelligence Research</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="137" to="166" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Description logics with fuzzy concrete domains</title>
		<author>
			<persName><forename type="first">U</forename><surname>Straccia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">21st Conf. on Uncertainty in Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Bacchus</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Jaakkola</surname></persName>
		</editor>
		<imprint>
			<publisher>AUAI Press</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="559" to="567" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
