<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Reasoning by Analogy in Description Logics through Instance-based Learning</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
<persName><forename type="first">Claudia</forename><surname>d'Amato</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università degli Studi di Bari Campus Universitario</orgName>
								<address>
									<addrLine>Via Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nicola</forename><surname>Fanizzi</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università degli Studi di Bari Campus Universitario</orgName>
								<address>
									<addrLine>Via Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Floriana</forename><surname>Esposito</surname></persName>
							<email>fanizzi|esposito@di.uniba.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università degli Studi di Bari Campus Universitario</orgName>
								<address>
									<addrLine>Via Orabona 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Reasoning by Analogy in Description Logics through Instance-based Learning</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">B43F8F656E2D282BC7AACC8B921A3529</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T02:55+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This work presents a method founded on instance-based learning for inductive (memory-based) reasoning on ABoxes. The method, which exploits a semantic dissimilarity measure between concepts and instances, can be employed both to answer class membership queries and to predict new assertions that may not be logically entailed by the knowledge base. In a preliminary experimentation, we show that the method is sound and that it is actually able to induce new assertions that might be added to the knowledge base.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>Most of the research on ontology reasoning has focused on methods based on deductive reasoning. However, important tasks that new generation knowledge-based systems are expected to provide, such as construction, revision, population and evolution, are likely to be supported also by inductive methods. This has led to an increasing interest in machine learning and knowledge discovery methods applied to ontological representations (see <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b1">[2]</ref> and, more recently, <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b5">[6]</ref>).</p><p>We propose an algorithm based on a notion of concept similarity for performing a form of lazy learning on typical ontological representations. Namely, by combining an instance-based (analogical) approach with a notion of semantic dissimilarity, this paper intends to demonstrate the applicability of inductive reasoning in this field, which can be considered another form of approximate reasoning (see the discussion in <ref type="bibr" target="#b6">[7]</ref>). In particular, we have adapted a general instance-based learning approach, namely the k-Nearest Neighbor method <ref type="bibr" target="#b7">[8]</ref>, to the specific multi-relational setting of ontology languages. Two technical problems had to be solved for this adaptation to ontology representations: 1) the Open World Assumption (OWA) that is made in this context; 2) in this multi-class problem setting, the disjointness of the classes cannot be assumed by default.</p><p>The standard ontology languages are founded on Description Logics (henceforth DLs), from which they borrow the language constructors for expressing complex concept definitions. Instance-based methods depend on a similarity measure defined on the instance space. 
In this perspective, a variety of measures for concept representations have been proposed (see <ref type="bibr" target="#b8">[9]</ref> for a survey). As pointed out in <ref type="bibr" target="#b9">[10]</ref>, most of these measures focus on the similarity of atomic concepts within hierarchies or simple ontologies based on a few relations. Thus, it becomes necessary to investigate similarity in more complex languages. It has been observed that, adopting richer representations, structural properties have less and less impact in assessing semantic similarity. Hence, a vision of similarity based only on a structural (graph-based) approach, such as in <ref type="bibr" target="#b10">[11]</ref>, <ref type="bibr" target="#b11">[12]</ref>, may fall short. We have proposed some dissimilarity measures for non-trivial DL languages, based on the semantics conveyed by the ABox assertions, which are suitable for use in instance-based methods <ref type="bibr" target="#b12">[13]</ref>, <ref type="bibr" target="#b13">[14]</ref>. These measures elicit the underlying semantics by querying the knowledge base to assess the concept extensions, estimated through their retrieval <ref type="bibr" target="#b14">[15]</ref>, as also hinted in <ref type="bibr" target="#b15">[16]</ref>. Besides, the overall similarity is also (partially) influenced by the concepts which are related through role restrictions. Moreover, in many other typical tasks (e.g. conceptual clustering or definition), it is necessary to assess the similarity between concepts (resp. individuals). 
By resorting to the notion of the most specific concept of an individual with respect to an ABox <ref type="bibr" target="#b14">[15]</ref>, as a representative of the individual at the concept level, the measures for concepts can be extended to such cases.</p><p>An analogical reasoning procedure like this may be employed to answer class membership queries through analogical rather than deductive reasoning, which may be more robust with respect to noise and is likely to suggest new knowledge (which was not logically derivable). Specifically, we also developed the method for an application to semantic web service discovery, where services are annotated in DLs.</p><p>Another application might regard supporting various tasks for the knowledge engineer, such as the acquisition of candidate assertions for enriching ontologies with partially populated ABoxes: the outcomes given by the procedure can be utilized as recommendations. Indeed, as we show in the experimentation, the newly induced assertions are quite accurate (commission errors, i.e. predicting a concept erroneously, were rarely observed). In turn, the outcomes of the procedure may also trigger other related tasks, such as the induction (revision) of (faulty) knowledge bases.</p><p>The paper is organized as follows. In the next section, the representation language is briefly presented. Two concept dissimilarity measures are then recalled and exploited in a modified version of the k-NN classification procedure. The results of a preliminary experimental evaluation of the method using these two measures are shown and, finally, possible developments of the method are examined.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. ALC KNOWLEDGE BASES</head><p>We recall the basics of ALC <ref type="bibr" target="#b16">[17]</ref>, a logic which adopts constructors supported by the standard ontology languages (see the DL handbook <ref type="bibr" target="#b14">[15]</ref> for a thorough reference).</p><p>In DLs, descriptions are inductively defined starting from a set N C of primitive concept names and a set N R of primitive roles. Complex descriptions are built using primitive concepts and roles and the constructors shown in the following. The semantics of the descriptions is defined by an interpretation I = (∆ I , • I ), where ∆ I is a non-empty set, the domain of the interpretation, and</p><formula xml:id="formula_0">• I is the interpretation function that maps each A ∈ N C to a set A I ⊆ ∆ I and each R ∈ N R to R I ⊆ ∆ I × ∆ I .</formula><p>The top concept ⊤ is interpreted as the whole domain of objects ∆ I , while the bottom concept ⊥ corresponds to ∅. Complex descriptions can be built in ALC using the following constructors. The language supports full negation: given any description C, its negation, denoted ¬C, amounts to</p><formula xml:id="formula_1">∆ I \ C I . Concept conjunction, denoted C 1 ⊓ C 2 , yields the extension C I 1 ∩ C I 2 ; dually, concept disjunction, denoted C 1 ⊔ C 2 , yields the union C I 1 ∪ C I 2 . The value restriction, denoted ∀R.C, is interpreted as {x ∈ ∆ I | ∀y ∈ ∆ I ((x, y) ∈ R I → y ∈ C I )}. Finally, the existential restriction, denoted ∃R.C, is interpreted as {x ∈ ∆ I | ∃y ∈ ∆ I ((x, y) ∈ R I ∧ y ∈ C I )}</formula><p>A related inference used in the following is instance checking, that is, deciding whether an individual is an instance of a concept <ref type="bibr" target="#b17">[18]</ref>, <ref type="bibr" target="#b14">[15]</ref>. Conversely, it may be necessary to find the concepts which an individual belongs to (realization problem), especially the most specific one (see Sect. III).</p><p>Semantically equivalent (yet syntactically different) descriptions can be given for the same concept. 
Nevertheless, equivalent concepts can be reduced to a normal form by means of rewriting rules that preserve their equivalence <ref type="bibr" target="#b18">[19]</ref>:</p><formula xml:id="formula_2">A description D is in ALC normal form iff D ≡ ⊥ (then D := ⊥) or D ≡ ⊤ (then D := ⊤) or D = D 1 ⊔ · · · ⊔ D n (∀i = 1, . . . , n : D i ≢ ⊥) with D i = ⊓ A∈prim(D i ) A ⊓ ⊓ R∈N R [ ∀R.val R (D i ) ⊓ ⊓ E∈ex R (D i ) ∃R.E ]</formula><p>where:</p><p>• prim(D i ) is the set of all (negated) primitive concepts occurring at the top level of D i ;</p><formula xml:id="formula_3">• val R (D i ) is the conjunction C i 1 ⊓ · · · ⊓ C i n in the value restriction of role R, if any (otherwise val R (D i ) = ⊤); • ex R (D i )</formula><p>is the set of concepts in the existential restrictions of the role R. For any R, every sub-description in ex R (D i ) and val R (D i ) is in normal form.</p><p>In the following, let L = ALC/ ≡ be the quotient set of ALC descriptions in normal form.</p></div>
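To fix ideas, the normal form above can be mirrored by a small recursive data structure over which dissimilarity measures can recurse. The Python classes and the concept names below (`Father`, `Male`, `hasChild`) are illustrative assumptions of this sketch, not part of the original formalization:

```python
from dataclasses import dataclass, field

@dataclass
class Conjunct:
    """One top-level disjunct D_i of a normal-form description: the set
    prim(D_i) of (negated) primitive names, plus, per role R, the value
    restriction val_R(D_i) and the existential restrictions ex_R(D_i)."""
    prim: frozenset = frozenset()             # e.g. {'Father', '¬Male'}
    val: dict = field(default_factory=dict)   # role name -> Description
    ex: dict = field(default_factory=dict)    # role name -> list of Descriptions

@dataclass
class Description:
    """An ALC description in normal form: a disjunction of conjuncts."""
    disjuncts: tuple = ()                     # tuple of Conjunct

# Father ⊓ ∀hasChild.Male, a description with a single disjunct
male = Description((Conjunct(prim=frozenset({'Male'})),))
father_of_males = Description((Conjunct(prim=frozenset({'Father'}),
                                        val={'hasChild': male}),))
```

Measures defined by structural recursion, such as those recalled in the next section, then follow the `disjuncts`/`prim`/`val`/`ex` shape of this structure.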
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. DISSIMILARITY MEASURES IN DESCRIPTION LOGICS</head><p>We recall two definitions of dissimilarity measures for ALC descriptions expressed in normal form <ref type="bibr" target="#b12">[13]</ref>, <ref type="bibr" target="#b13">[14]</ref>. These measures are based on both the structure and the semantics of the concept descriptions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Overlap Dissimilarity Measure</head><p>The first measure is derived from a measure of the overlap between concepts, which can be defined as follows:</p><p>Definition 1 (overlap function): Let I be the canonical interpretation of the ABox A. The overlap function f is defined<ref type="foot" target="#foot_0">1</ref> :</p><formula xml:id="formula_4">f : L × L → R + for descriptions C, D ∈ L, with C = C 1 ⊔ · · · ⊔ C n and D = D 1 ⊔ · · · ⊔ D m . disjunctive level: f (C, D) := ∞ if C ≡ D; 0 if C ⊓ D ≡ ⊥; max i,j f (C i , D j ) otherwise. conjunctive level: f (C i , D j ) := f P (prim(C i ), prim(D j )) + λ(f ∀ (C i , D j ) + f ∃ (C i , D j ))</formula><p>with λ ∈ [0, 1]. primitive concepts:</p><formula xml:id="formula_5">f P (P 1 , P 2 ) := |R(P 1 ) ∪ R(P 2 )| / |(R(P 1 ) ∪ R(P 2 )) \ (R(P 1 ) ∩ R(P 2 ))|</formula><p>where R(P ) = ∩ A∈P A I and f P (P 1 , P 2 ) = ∞ when R(P 1 ) = R(P 2 ).</p><p>value restrictions:</p><formula xml:id="formula_6">f ∀ (C i , D j ) := Σ R∈N R f (val R (C i ), val R (D j ))</formula><p>existential restrictions:</p><formula xml:id="formula_7">f ∃ (C i , D j ) := Σ R∈N R Σ N k=1 max p=1,...,M f (C k i , D p j )</formula><p>where</p><formula xml:id="formula_8">C k i ∈ ex R (C i ) and D p j ∈ ex R (D j ), and we suppose w.l.o.g. that N = |ex R (C i )| ≥ |ex R (D j )| = M ;</formula><p>otherwise the indices N and M , as well as C i and D j , are to be exchanged in the formula above.</p><p>The function f represents a measure of the overlap between two descriptions expressed in ALC normal form. It measures the similarity of the input concepts based on the similarity between their extensions (approximated with the retrieval) and also on the similarity of the concepts reached through the role restrictions. 
Namely, it is defined recursively, starting from the top level of the descriptions (the disjunctive level) down to the bottom level represented by (conjunctions of) primitive concepts.</p><p>Overlap at the disjunctive level is treated as the maximum overlap between the disjuncts of the input concepts. At conjunctive levels, instead of simply considering a similarity like Tversky's measure <ref type="bibr" target="#b19">[20]</ref> (as we do for the prim's), which could turn out to be null in case of disjoint retrieval sets for the input concepts, we distinguish the overlaps between the prim's from those between the concepts in the scope of the various role restrictions<ref type="foot" target="#foot_1">2</ref> ; the contribution of the overlap of concepts reached through role restrictions can be penalized by tweaking the parameter λ. The measure for primitive concepts resembles Tversky's measure and represents a semantic baseline, since it depends on the semantics of the knowledge base as conveyed by the ABox assertions. This is in line with the ideas in <ref type="bibr" target="#b15">[16]</ref>, <ref type="bibr" target="#b9">[10]</ref>, where semantics is elicited as a probability distribution over the domain of the interpretation.</p><p>As for the overlap of role restrictions, for the universal restrictions we simply add the overlap measures over the roles, while for the existential restrictions the measure is trickier: borrowing the idea of the existential mappings, we consider, per role, all possible matches between the concepts in the scope of the existential restrictions, and then take the maximal sum of overlaps resulting from all the matches. 
Now, it is possible to derive a dissimilarity measure based on f as follows:</p><p>Definition 2 (overlap dissimilarity measure): The overlap dissimilarity measure is a function</p><formula xml:id="formula_9">d : L × L → [0, 1] such that, given two concept descriptions C, D ∈ L, with C = C 1 ⊔ · · · ⊔ C n and D = D 1 ⊔ · · · ⊔ D m : d(C, D) := 1 if f (C, D) = 0; 0 if f (C, D) = ∞; 1/f (C, D) otherwise</formula><p>Function d simply measures the level of dissimilarity between two concepts as the inverse of the overlap function f . Particularly, if f (C, D) = 0, i.e. there is no overlap between the considered concepts, then d must indicate that the two concepts are totally different: indeed d(C, D) = 1, the maximum value of its range. If f (C, D) = ∞, the two concepts are totally overlapping and consequently d(C, D) = 0, meaning that the two concepts are indistinguishable: d assumes the minimum value of its range. If the considered concepts have a partial overlap, then their dissimilarity is inversely proportional to their overlap, since in this case f (C, D) &gt; 1 and consequently 0 &lt; d(C, D) &lt; 1.</p></div>
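As a concrete illustration of how d is derived from f, the following minimal Python sketch covers only the extensional (primitive) level, assuming the retrieval of each concept is available as a plain set of individual names; the recursive part over role restrictions is omitted, and all names are assumptions of this sketch:

```python
import math

def f_overlap(ext1, ext2):
    """Overlap between two concepts given their retrieval sets: 0 when
    the extensions are disjoint, infinite when they coincide, otherwise
    the ratio of the union to the symmetric difference (cf. f_P, Def. 1)."""
    inter = ext1 & ext2
    union = ext1 | ext2
    if not inter:
        return 0.0            # no overlap at all
    if inter == union:
        return math.inf       # total overlap
    return len(union) / len(union - inter)

def d_overlap(ext1, ext2):
    """Dissimilarity d in [0, 1] derived from f as in Def. 2."""
    f = f_overlap(ext1, ext2)
    if f == 0:
        return 1.0
    if math.isinf(f):
        return 0.0
    return 1.0 / f
```

For instance, two concepts whose retrievals share two of four individuals overall yield f = 4/2 = 2 and hence d = 0.5, matching the partial-overlap case discussed above.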
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. A Dissimilarity Measure Based on Information Content</head><p>As discussed in <ref type="bibr" target="#b11">[12]</ref>, a measure of concept (dis)similarity can be derived from the notion of Information Content (IC) which, in turn, depends on the probability of an individual belonging to a certain concept. Now, differently from other works, which assume that a probability distribution for the concepts in an ontology is known, we derive it from the knowledge base, from the distribution that can be estimated therein <ref type="bibr" target="#b15">[16]</ref>, <ref type="bibr" target="#b9">[10]</ref>.</p><p>In order to approximate this probability for a certain concept C, we resort to its extension w.r.t. the considered ABox in a fixed interpretation. Namely, we choose the canonical interpretation I A , which is the one adopting the set of individuals mentioned in the ABox as its domain and the identity as its interpretation function <ref type="bibr" target="#b14">[15]</ref>. Now, given a concept C, its probability is estimated by:</p><formula xml:id="formula_10">pr(C) = |C I A |/|∆ I A |</formula><p>Finally, we can compute the information content of a concept, employing this probability:</p><formula xml:id="formula_11">IC(C) = − log pr(C)</formula><p>A measure of the concept dissimilarity is now formally defined <ref type="bibr" target="#b13">[14]</ref>:</p><p>Definition 3 (IC dissimilarity): Let A be an ABox with canonical interpretation I. 
The information content dissimilarity measure is a function g : L × L → R + defined recursively for any two normal form concept descriptions C, D ∈ L, with</p><formula xml:id="formula_12">C = C 1 ⊔ · · · ⊔ C n and D = D 1 ⊔ · · · ⊔ D m . disjunctive level: g(C, D) := 0 if C ≡ D; ∞ if C ⊓ D ≡ ⊥; min 1≤i≤n, 1≤j≤m g(C i , D j ) otherwise. conjunctive level: g(C i , D j ) := g P (prim(C i ), prim(D j )) + λ(g ∀ (C i , D j ) + g ∃ (C i , D j )) with λ ∈ [0, 1].</formula><p>primitive concepts:</p><formula xml:id="formula_13">g P (P i , P j ) := ∞ if P i ⊓ P j ≡ ⊥; [IC(P i ⊓ P j ) + 1] / [IC(LCS(P i , P j )) + 1] otherwise</formula><p>value restrictions:</p><formula xml:id="formula_14">g ∀ (C i , D j ) := Σ R∈N R g(val R (C i ), val R (D j ))</formula><p>existential restrictions:</p><formula xml:id="formula_15">g ∃ (C i , D j ) := Σ R∈N R Σ N k=1 min p=1,...,M g(C k i , D p j )</formula><p>where</p><formula xml:id="formula_16">C k i ∈ ex R (C i ) and D p j ∈ ex R (D j ), and we suppose w.l.o.g. that N = |ex R (C i )| ≥ |ex R (D j )| = M ,</formula><p>otherwise the indices N and M , as well as C i and D j , are to be exchanged in the formula above. A normalized dissimilarity measure can then be derived from g as a function</p><formula xml:id="formula_17">d : L × L → [0, 1], such that, given the concept descriptions in ALC normal form C = C 1 ⊔ · · · ⊔ C n and D = D 1 ⊔ · · · ⊔ D m : d(C, D) := 0 if g(C, D) = 0; 1 if g(C, D) = ∞; 1 − 1/g(C, D) otherwise</formula><p>By resorting to the most specific concept (MSC) of an individual w.r.t. the ABox as its representative at the concept level, the measures can also be applied to individuals. In case of cyclic ABoxes expressed in a DL with existential restrictions, the MSC may not be expressed by a finite description <ref type="bibr" target="#b14">[15]</ref>, yet it may often be approximated.</p><p>On performing experiments related to another similarity measure exclusively based on concept extensions <ref type="bibr" target="#b20">[21]</ref>, we noticed that recurring to the MSC for lifting individuals to the concept level just falls short: indeed, the MSCs may be too specific and unable to include other (similar) individuals in their extensions. By comparing descriptions reduced to the normal form, we have given a more structural definition of dissimilarity. 
However, since MSCs are computed from the same ABox assertions, reflecting the current knowledge state, structurally similar representations will be obtained for semantically similar concepts.</p><p>Let us recall that, given the ABox, it is possible to compute the most specific concept of an individual a w.r.t. the ABox, MSC(a), or at least its approximation MSC k (a) up to a certain description depth k. In some cases these are equivalent concepts, but in general we have that MSC(a) ⊑ MSC k (a).</p><p>Given two individuals a and b in the ABox, we consider MSC k (a) and MSC k (b) (supposed to be in normal form). Now, in order to assess the dissimilarity between the individuals, the measure d can be applied to these descriptions:</p><formula xml:id="formula_18">d(a, b) := d(MSC k (a), MSC k (b))</formula><p>This may turn out to be handy in several tasks, namely both in inductive reasoning (construction, repairing of knowledge bases) and in information retrieval.</p></div>
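The extension-based estimates above can be sketched in a few lines of Python, again assuming concept extensions are given as sets over the canonical domain. Since computing LCS(P_i, P_j) requires a reasoner, its retrieval is crudely approximated here by the union of the two extensions, which is purely an assumption of this sketch:

```python
import math

def pr(ext, domain_size):
    """Estimated probability of a concept: |C^I_A| / |Δ^I_A|."""
    return len(ext) / domain_size

def ic(ext, domain_size):
    """Information content IC(C) = -log pr(C)."""
    return -math.log(pr(ext, domain_size))

def g_primitive(ext1, ext2, domain_size):
    """Primitive-level g_P: infinite for disjoint concepts, otherwise
    [IC(P_i ⊓ P_j) + 1] / [IC(LCS(P_i, P_j)) + 1]. The extension of the
    conjunction is the intersection of the retrievals; the LCS retrieval
    is approximated by their union (a crude stand-in for reasoning)."""
    inter = ext1 & ext2
    if not inter:
        return math.inf
    lcs_ext = ext1 | ext2
    return (ic(inter, domain_size) + 1) / (ic(lcs_ext, domain_size) + 1)

def d_ic(g_value):
    """Normalized dissimilarity in [0, 1] derived from g."""
    if g_value == 0:
        return 0.0
    if math.isinf(g_value):
        return 1.0
    return 1.0 - 1.0 / g_value
```

Note that identical extensions give g = 1 and hence d = 0, while a more specific shared part (smaller intersection) raises IC and pushes d toward 1.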
<div xmlns="http://www.tei-c.org/ns/1.0"><head>D. Discussion</head><p>We proved in <ref type="bibr" target="#b12">[13]</ref>, <ref type="bibr" target="#b13">[14]</ref> that these measures are indeed dissimilarity measures according to the formal definition <ref type="bibr" target="#b21">[22]</ref>, considering that their input (what is actually compared) is equivalence classes in L, i.e. ALC concept descriptions with the same normal form.</p><p>As previously mentioned, we have also attempted slightly different measure definitions (e.g. considering minima or products at the conjunctive level, which sound more intuitive), but experimentally they did not prove more effective than the simple (additive) definitions given above.</p><p>The computational complexity of the measures depends on the complexity of the reasoning services required for computing the concept retrievals. Namely, both subsumption and instance checking are PSPACE-complete for the ALC logic <ref type="bibr" target="#b17">[18]</ref>, yet such inferences can be computed once, preliminarily, before the measures are computed for the method we present in the following.</p><p>Obviously, when computing the dissimilarity measures for cases involving individuals, the extra cost of computing the MSCs (or their approximations) has to be added.</p><p>Nevertheless, in practical applications, these computations may be carried out efficiently by exploiting the statistics maintained by DBMS query optimizers. Besides, the counts necessary for computing the concept extensions could be estimated by means of the probability distribution over the domain.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. A NEAREST NEIGHBOR CLASSIFICATION PROCEDURE IN DESCRIPTION LOGICS</head><p>We briefly review the basics of the k-Nearest Neighbor method (k-NN) and propose how to exploit this classification procedure for inductive reasoning. In this lazy approach to learning, a notion of distance (or dissimilarity) measure on the instance space is employed to classify a new instance.</p><p>Let x q be the instance that requires a classification. Using a dissimilarity measure, the set of the k nearest pre-classified instances is selected. The objective is to learn a discrete-valued target function h : IS → V from a space of instances IS to a set of values V = {v 1 , . . . , v s }. In its simplest setting, the k-NN algorithm approximates h for new instances x q on the grounds of the value that h assumes in the neighborhood of x q , i.e. the k instances closest to the new instance in terms of the dissimilarity measure. Precisely, the value is assigned according to the vote of the majority of instances in the neighborhood. The classification function ĥ can be formally defined as follows:</p><formula xml:id="formula_19">ĥ(x q ) := argmax v∈V Σ k i=1 δ(v, h(x i ))</formula><p>where δ is a function that returns 1 in case of matching arguments and 0 otherwise. Note that the hypothesized function ĥ is defined only extensionally, that is, the k-NN method does not return an intensional classification model (e.g. a function or a concept definition); it merely gives an answer for new query instances to be classified, employing the procedure above. This simple formulation does not take into account the similarity among instances, except when selecting the instances to be included in the neighborhood. 
Therefore, a modified setting is generally adopted, weighting the votes on the grounds of the similarity of the instances:</p><formula xml:id="formula_20">ĥ(x q ) := argmax v∈V Σ k i=1 w i δ(v, h(x i ))<label>(1)</label></formula><p>where, usually, w i = 1/d(x i , x q ) or w i = 1/d(x i , x q ) 2 . Usually this method is employed to classify vectors of features in some n-dimensional instance space (e.g. often IS = R n ). Let us now turn to adapting the method to the more complex context of DL descriptions. Preliminarily, it should be observed that a strong assumption of this setting is that the value (e.g. a class) assigned to a query instance is drawn from a set of values that can be regarded as pairwise disjoint concepts/classes. This assumption is not always valid: in our case, indeed, an individual could be an instance of more than one concept.</p><p>Let us consider a new value set V = {C 1 , . . . , C s } of concepts C j that may be assigned to a query instance x q . If they were to be considered as disjoint (as in the standard machine learning setting), the decision procedure would adopt the hypothesis function defined in Eq. ( <ref type="formula" target="#formula_20">1</ref>), with the query instance assigned the single concept voted by the weighted majority of instances in the neighborhood.</p><p>In the general case considered in this paper, when the disjointness of the classes cannot be assumed (unless explicitly stated in the TBox), one can adopt another answering procedure, decomposing the multi-class problem into smaller binary classification problems (one per concept). Therefore, a simple binary value set (V = {−1, +1}) is to be employed. Then, for each single concept (say C j ), a hypothesis ĥj is computed on the fly during the classification phase:</p><formula xml:id="formula_21">ĥj (x q ) := argmax v∈V Σ k i=1 δ(v, h j (x i )) / d(x q , x i ) 2 ∀j ∈ {1, . . . 
, s}<label>(2)</label></formula><p>where each function h j simply indicates the occurrence (+1) or absence (−1) of the corresponding assertion in the ABox for the k training instances x i : C j (x i ) ∈ A. Alternatively<ref type="foot" target="#foot_2">3</ref> , h j may return +1 when C j (x i ) is logically entailed by the knowledge base K, and −1 otherwise. The problem with non-explicitly disjoint concepts is also related to the Closed World Assumption usually made in the context of Information Retrieval and Machine Learning. That is the reason for adapting the standard setting to cope both with the case of non-disjoint concepts and with the OWA, which is commonly made in the Semantic Web context. To deal with the OWA, the absence of information on whether a certain instance x belongs to the extension of concept C j should not be interpreted negatively; rather, it should count as neutral information. Thus, one can still adopt the decision procedure in Eq. ( <ref type="formula" target="#formula_21">2</ref>); however, another value set has to be adopted for the h j 's, namely V = {−1, 0, +1}, where the three values denote, respectively, positive occurrence, absence, and negative occurrence (positive occurrence for the concept negation). Formally:</p><formula xml:id="formula_22">h j (x) = +1 if C j (x) ∈ A; 0 if C j (x) ∉ A ∧ ¬C j (x) ∉ A; −1 if ¬C j (x) ∈ A</formula><p>Again, a more complex procedure may be devised by simply substituting the notion of occurrence (absence) of assertions in (from) the ABox with one based on logical entailment (denoted with ⊨) from the knowledge base, i.e. K ⊨ C j (x); K ⊭ C j (x) and K ⊭ ¬C j (x); and K ⊨ ¬C j (x), respectively. 
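The decision procedure of Eq. (2) under the OWA can be sketched in Python as follows. The toy numeric instances, the absolute-difference dissimilarity and the threshold labeling below are purely illustrative assumptions; in the actual method, the dissimilarity would be one of the measures of Sect. III applied to MSC approximations:

```python
import math
from collections import defaultdict

def knn_classify(query, training, k, dissim, h_j):
    """Distance-weighted k-NN vote (Eq. 2) for a single concept C_j under
    the OWA: h_j maps a training individual to +1 (asserted instance of
    C_j), -1 (asserted instance of ¬C_j) or 0 (membership unknown)."""
    # select the k nearest pre-classified individuals
    neighbours = sorted(training, key=lambda x: dissim(query, x))[:k]
    votes = defaultdict(float)
    for x in neighbours:
        dist = dissim(query, x)
        # a duplicate of the query (zero distance) dominates the vote
        weight = math.inf if dist == 0 else 1.0 / dist ** 2
        votes[h_j(x)] += weight
    # return the value in {-1, 0, +1} with the largest weighted vote
    return max(votes, key=votes.get)

# toy example: individuals below 5 are asserted instances of C_j (+1),
# the others instances of its negation (-1)
h_j = lambda x: 1 if x < 5 else -1
dissim = lambda a, b: abs(a - b)
label = knn_classify(3, [1, 2, 4, 8, 9], 3, dissim, h_j)
```

Note that, as in the text, the procedure is purely extensional: no intensional model is produced, and each query triggers a fresh vote over its neighborhood.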
Although this may approach the precision of deductive reasoning, it is also much more computationally expensive, since the simple lookup in the ABox must be replaced by instance checking.</p><p>From a computational viewpoint, this procedure could be implemented to provide an answer even more efficiently than with a standard deductive reasoner. Indeed, once the retrieval of the primitive concepts is computed, the dissimilarity measures can be easily computed by means of a dynamic programming algorithm. Besides, the various measures could be maintained in an ad hoc data structure allowing for an efficient retrieval of the nearest neighbors, such as kD-trees or ball trees <ref type="bibr" target="#b22">[23]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>V. EXPERIMENTS</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Experimental Setting</head><p>In order to assess the validity of the presented method with the dissimilarity measures proposed in Sect. III, we applied it to the instance classification problem with four different ontologies represented in OWL: FSM and SURFACE-WATER-MODEL from the Protégé library<ref type="foot" target="#foot_3">4</ref> , the FINANCIAL ontology<ref type="foot" target="#foot_4">5</ref> employed as a testbed for the PELLET reasoner, and a small FAMILY ontology written in our lab. Although they are represented in languages that are different from ALC, we simply discarded these details when constructing the MSC approximations, in order to be able to apply the presented measures.</p><p>FAMILY is an ALCF ontology describing kinship relationships. It is made up of 14 concepts (both primitive and defined), some of them declared to be disjoint, 5 object properties, and 39 distinct individual names. Most of the individuals are asserted to be instances of more than one concept and are involved in more than one role assertion. This ontology was written to provide a small yet more complex case with respect to the following ones. Indeed, while the other ontologies are more regular, i.e. only some concepts are employed in the assertions (the others are defined only intensionally), in the FAMILY ontology every concept has at least one asserted instance. The same happens for the role assertions; in particular, there are cases where role assertions constitute a chain from one individual to another, by means of other intermediate assertions.</p><p>The FSM ontology describes the domain of finite state machines using the SOF(D) language. It is made up of 20 concepts (both primitive and defined, some of them explicitly declared to be disjoint), 10 object properties, 7 datatype properties, and 37 distinct individual names. 
About half of the individuals are asserted as instances of a single concept and are not involved in any role (object property).</p><p>SURFACE-WATER-MODEL is an ALCOF(D) ontology describing the domain of surface water and water quality models. It is based on the Surface-water Models Information Clearinghouse (SMIC) of the USGS. Namely, it is an ontology of numerical models for surface water flow and water quality simulation. The application domain of these models comprises lakes, oceans, estuaries etc. The models are classified on the grounds of their availability, application domain, dimensions, partial differential equation solver, and characteristics types. The ontology is made up of 19 concepts (both primitive and defined) without any disjointness specification, 9 object properties, and 115 distinct individual names; each of them is an instance of a single class and only some of them are involved in object properties.</p><p>FINANCIAL is an ALCIF ontology that describes the domain of eBanking. It is made up of 60 concepts (both primitive and defined, some of them declared to be disjoint), 17 object properties, and no datatype properties. It contains 17941 distinct individual names; from the original ABox, we randomly extracted assertions for 652 individuals.</p><p>The classification method was applied to all the individuals in each ontology; namely, each individual was checked through the analogical method to assess whether it was an instance of the concepts in the ontology. The performance was evaluated by comparing the responses to those returned by a standard reasoner 6 as a baseline.</p><p>Specifically, for each individual in the ontology, the MSC is computed and added to the set of training (or test) examples. Each example is classified by applying the adapted k-NN method presented in the previous section. 
As the value of k we chose √|Ind(A)|, as advised in the instance-based learning literature.</p><p>The experiment was repeated twice, adopting a leave-one-out cross-validation procedure with each of the two dissimilarity measures defined in Section III.</p><p>For each concept in the ontology, we measured the following parameters for the evaluation:</p><p>• match rate: number of individuals that received exactly the same classification from both classifiers, with respect to the overall number of individuals; • omission error rate: number of individuals left unlabeled (our method could not determine whether they were instances or not) while they were to be classified as instances of that concept;</p><p>• commission error rate: number of individuals (analogically) labeled as instances of a concept, while they (logically) belong to the negation of that concept, or vice-versa; • induction rate: number of individuals that were found to belong to a concept or its negation, while this information is not logically derivable from the knowledge base.<note place="foot" n="6">We employed PELLET: http://pellet.owldl.com</note> We report the average rates obtained over all the concepts in each ontology, together with their standard deviation.</p></div>
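The four evaluation rates above can be sketched in code. This is a minimal illustration, not the authors' implementation: `evaluate_concept`, the individual names, and the ternary answer encoding (+1 instance, -1 instance of the negation, 0 unknown) are all hypothetical conventions chosen here for clarity.

```python
# Hedged sketch: per-concept comparison of an analogical classifier's answers
# against a reasoner's answers, using the four rates defined in the text.
# Answers are ternary: +1 (instance), -1 (instance of the negation), 0 (unknown).

def evaluate_concept(inductive, deductive):
    """Both arguments map individual -> answer in {+1, -1, 0}.
    Returns (match, commission, omission, induction) rates."""
    n = len(deductive)
    match = commission = omission = induction = 0
    for ind, truth in deductive.items():
        guess = inductive[ind]
        if guess == truth:
            match += 1          # same classification by both classifiers
        elif guess == 0:
            omission += 1       # classifier abstained on a derivable fact
        elif truth == 0:
            induction += 1      # classifier induced a non-derivable fact
        else:
            commission += 1     # classifier contradicted the reasoner
    return tuple(r / n for r in (match, commission, omission, induction))

# toy example with five hypothetical individuals
reasoner  = {'a': 1, 'b': -1, 'c': 0, 'd': 1, 'e': 0}
knn_guess = {'a': 1, 'b': -1, 'c': 1, 'd': 0, 'e': 0}
print(evaluate_concept(knn_guess, reasoner))  # (0.6, 0.0, 0.2, 0.2)
```

Note that, as in the paper, agreeing on "unknown" counts as a match, while only a contradiction between the two classifiers counts as a commission error.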
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Experiments Employing the Overlap Measure</head><p>Looking at Tab. I, which reports the experimental outcomes with the dissimilarity measure based on overlap (see Def. 2), it is first important to note that, for every ontology, the commission error was quite low. This means that the classifier did not make critical mistakes, i.e. cases where an individual is deemed an instance of a concept while it really is an instance of a disjoint one.</p><p>In particular, looking at the outcomes for the FAMILY ontology, it can be observed that the match rate is the lowest, while the highest rate of omission errors was reported. This may be due to two facts: 1) very few individuals were available w.r.t. the number of concepts<ref type="foot" target="#foot_5">7</ref>; 2) a sparse-data situation: instances are irregularly spread over the concepts, i.e. they may be instances of several, sometimes disjoint, concepts. Hence the computed MSC approximations also turned out to be very different from one another, which reduces the chance of matching significantly similar MSCs. This is a known drawback of Nearest-Neighbor methods. Nevertheless, as mentioned above, it is important to note that the algorithm did not make any commission error and was able to infer new knowledge (11%).</p><p>As regards the FSM ontology, we observed the maximum match rate with respect to the classification given by the logic reasoner. Moreover, differently from the other ontologies, both the omission error rate and the induction rate were null. A very limited percentage of incorrect classifications was observed. 
These outcomes were probably due to the fact that the individuals in this ontology are quite regularly partitioned by the assertions on concepts and roles; hence the computed MSCs are all very similar to each other, and the amount of information they convey is very low. A choice of a lower number k of neighbors could probably help to avoid the residual errors.</p><p>For the same reasons, quite a high rate of matching classifications was also reported for the SURFACE-WATER-MODEL ontology (yet less than with the previous one); moreover, some cases of omission error (6%) were observed. The induction rate was about 12%, meaning that for this ontology our classifier always assigned individuals to the correct concepts and, in some cases, could also induce new assertions. Since this rate accounts for assertions that were not logically deducible from the ontology and yet were inferred inductively by the analogical classifier, these figures are a positive outcome (provided this knowledge is deemed correct by an expert). In particular, the increase of the induction rate in this case was due to the presence of mutual disjointness assertions for some of the concepts.</p><p>Results are not different for the experiments with the FINANCIAL ontology, which largely exceeds the others in terms of numbers of concepts and individuals. Namely, the observed match rate is again above 80%, and most of the remaining cases fall under the induction rate (17%), leaving a limited margin for residual errors. This corroborates a known fact about NN learners, namely that they reach better and better performance in the limit, as new training instances become available. Indeed, performing a 10-fold cross validation, we obtained almost the same results.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Experiments with the Information Content Measure</head><p>The average results obtained by adopting the procedure with the measure based on information content (see Def. 4) are reported in Table <ref type="table" target="#tab_2">II</ref>.</p><p>By analyzing this table, it is possible to note that no sensible variation was observed with respect to the classifications performed using the first dissimilarity measure. In particular, with both measures the method correctly classified the individuals, without commission errors. The reason is that, in most cases, the individuals of these ontologies are instances of one concept only and are involved in few roles (object properties). Some of the figures are slightly lower than those observed in the previous experiment; this is confirmed by a higher variability.</p><p>Surprisingly, the results on the larger ontologies (S.-W.-M. and FINANCIAL) perfectly match those obtained with the other measure. This is probably due to the fact that we used a leave-one-out cross-validation mode, which yielded a high value for the number k of training instances in the neighborhood, and it is well known that the NN procedure becomes more and more precise as more instances can be considered. The price to be paid was a longer computation time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VI. CONCLUSIONS AND FUTURE WORK</head><p>In this work we have coupled a method for inductive inference on ABoxes with two composite concept dissimilarity measures. The method is based on classification in analogy with the majority of the neighboring training individuals. It can be naturally exploited for predicting or suggesting missing information about individuals, thus enabling a sort of inductive retrieval; for example, it may be employed when a query is issued to the KB. Specifically, we are also targeting the task of service discovery for semantically annotated web services. Moreover, the outcomes of our method may be decisive for enabling a series of other inductive tasks, such as clustering, case-based reasoning, etc. The proposed method is able to induce new assertions, in addition to the knowledge already derivable by means of a reasoner; thus it seems able to enhance standard instance-based classification. In particular, an increase in accuracy was observed when the instances increase in number and are homogeneously spread.</p><p>The presented measure can be refined by tweaking the weighting factor λ, to decrease the impact of the dissimilarity between nested sub-concepts on the determination of the overall value.</p><p>Recently, we have defined another measure for ALN <ref type="bibr" target="#b23">[24]</ref>. Hence a natural extension may concern the definition of dissimilarity measures for more expressive DL languages. For example, a normal form for ALCN can be obtained based on those for ALN and ALC. Then, by exploiting the notion of existential mappings <ref type="bibr" target="#b24">[25]</ref>, the measure presented in this paper may be extended to more expressive DLs. 
We are currently developing other kinds of semantic similarity based on the idea of hypothesis-driven distance.</p><p>Besides, as mentioned, this method could be extended with different (yet still computationally tractable) answering procedures grounded on statistical inference (non-parametric tests based on ranked distances), in order to accept answers as correct with a high degree of confidence. Furthermore, the k-NN method in its classical form is particularly suitable for the automated induction of missing values for (scalar or numeric) datatype properties of an individual, as an estimate derived from the datatype values of the surrounding individuals.</p><p>Kernels are another means to express a notion of similarity in some unknown feature space. We are working on the definition of kernel functions for DL representations <ref type="bibr" target="#b25">[26]</ref>, thus allowing the exploitation of the efficiency of kernel methods (e.g. support vector machines) in a multi-relational setting.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>and the value restriction ∀R.C has as its extension the set {x ∈ ∆ I | ∀y ∈ ∆ I ((x, y) ∈ R I → y ∈ C I )}. The main inference is subsumption between concepts based on their semantics: given two descriptions C and D, C subsumes D, denoted by C ⊒ D, iff for every interpretation I it holds that C I ⊇ D I . When C ⊒ D and D ⊒ C then they are equivalent, denoted with C ≡ D. A knowledge base K = ⟨T , A⟩ contains a TBox T and an ABox A: • T is the set of definitions C ≡ D, meaning that, for every interpretation I, C I = D I , where C is the concept name and D is its description constructed as above; • A contains concept and role assertions about the world state, e.g. C(a) and R(a, b), meaning that, for every I, a I ∈ C I and (a I , b I ) ∈ R I .</figDesc></figure>
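The datatype-property imputation idea mentioned above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' method: `knn_impute`, the individual names, and the use of a plain numeric dissimilarity are hypothetical stand-ins for any of the dissimilarity measures between individuals discussed in the paper.

```python
# Hedged sketch: estimating a missing numeric datatype property value for an
# individual from the values held by its k nearest neighbors, given any
# dissimilarity function between individuals.

def knn_impute(target, others, value, dissim, k=3):
    """others: individuals with a known value; value(ind) -> float;
    dissim(a, b) -> float. Returns the mean over the k nearest neighbors."""
    nearest = sorted(others, key=lambda ind: dissim(target, ind))[:k]
    return sum(value(ind) for ind in nearest) / len(nearest)

# toy example: individuals are points on a line, dissimilarity is distance;
# 'eve' lacks an age, which is estimated from her three closest individuals
ages = {'anna': 30.0, 'bob': 34.0, 'carl': 60.0, 'dora': 32.0}
pos  = {'anna': 1.0, 'bob': 2.0, 'carl': 9.0, 'dora': 1.5, 'eve': 1.2}

estimate = knn_impute('eve', list(ages), ages.get,
                      lambda a, b: abs(pos[a] - pos[b]), k=3)
print(estimate)  # 32.0 — mean age of anna, dora and bob; the outlier carl is ignored
```

A weighted variant (weights decreasing with dissimilarity) would be the natural next refinement, mirroring the distance-weighted k-NN used for classification.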
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>D) otherwise C. Measuring the Dissimilarity between Individuals The notion of Most Specific Concept is commonly exploited for lifting individuals to the concept level. Definition 5 (most specific concept): Given an ABox A and an individual a, the most specific concept of a w.r.t. A is the concept C, denoted MSC A (a), such that A |= C(a) and for any other concept D such that A |= D(a), it holds that C D.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>otherwise the indices N and M are to be exchanged in the formula above.The function g represents a measure of the dissimilarity between two descriptions expressed in ALC normal form. It is defined recursively beginning from the top level of the descriptions (a disjunctive level) up to the bottom level represented by (conjunctions of) primitive concepts. Now g has values in [0, ∞]. It may be useful to derive a normalized dissimilarity measure as shown in the following.Definition 4 (normalized IC dissimilarity): Let A be an ABox with canonical interpretation I. The normalized information content dissimilarity measure is the function d</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>TABLE I</head><label>I</label><figDesc>AVERAGE RESULTS OF THE EXPERIMENTS WITH THE METHOD EMPLOYING THE MEASURE BASED ON OVERLAP.</figDesc><table><row><cell></cell><cell>Match Rate</cell><cell>Commission Rate</cell><cell>Omission Rate</cell><cell>Induction Rate</cell></row><row><cell>FAMILY</cell><cell>.654±.174</cell><cell>.000±.000</cell><cell>.231±.173</cell><cell>.115±.107</cell></row><row><cell>FSM</cell><cell>.974±.044</cell><cell>.026±.044</cell><cell>.000±.000</cell><cell>.000±.000</cell></row><row><cell>S.-W.-M.</cell><cell>.820±.241</cell><cell>.000±.000</cell><cell>.064±.111</cell><cell>.116±.246</cell></row><row><cell>FINANCIAL</cell><cell>.807±.091</cell><cell>.024±.076</cell><cell>.000±.001</cell><cell>.169±.076</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>TABLE II</head><label>II</label><figDesc>AVERAGE RESULTS OF THE EXPERIMENTS WITH THE METHOD EMPLOYING THE MEASURE BASED ON INFORMATION CONTENT.</figDesc><table><row><cell></cell><cell>Match Rate</cell><cell>Commission Rate</cell><cell>Omission Rate</cell><cell>Induction Rate</cell></row><row><cell>FAMILY</cell><cell>.608±.230</cell><cell>.000±.000</cell><cell>.330±.216</cell><cell>.062±.217</cell></row><row><cell>FSM</cell><cell>.899±.178</cell><cell>.096±.179</cell><cell>.000±.000</cell><cell>.005±.024</cell></row><row><cell>S.-W.-M.</cell><cell>.820±.241</cell><cell>.000±.000</cell><cell>.064±.111</cell><cell>.116±.246</cell></row><row><cell>FINANCIAL</cell><cell>.807±.091</cell><cell>.024±.076</cell><cell>.000±.001</cell><cell>.169±.046</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The name A of the ABox is omitted for simplicity.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">We tried also other solutions, such as considering minima or products of the three overlap measures, yet experimentally this did not yield a better performance whereas the computation time was increased. For practical reasons, the maximal measure ∞ has been replaced with large numbers depending on the cardinality of the set of individuals in the ABox: |Ind(A)|.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">For the sake of simplicity and efficiency, this case will not be considered in the following.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">See the webpage: http://protege.stanford.edu/plugins/owl/owl-library</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">See the webpage: http://www.cs.put.poznan.pl/alawrynowicz/financial.owl</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_5">Instance-based methods make intensive use of the information about the individuals, and improve their performance as the number of instances considered increases.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>This work was partially supported by the regional interest projects DIPIS (DIstributed Production as Innovative System) and DDTA (Distretto Digitale Tessile Abbigliamento) in the context of the tasks related to the semantic web services discovery.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Learning the CLASSIC description logic</title>
		<author>
			<persName><forename type="first">W</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hirsh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 4th International Conference on the Principles of Knowledge Representation and Reasoning</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Torasso</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Doyle</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Sandewall</surname></persName>
		</editor>
		<meeting>the 4th International Conference on the Principles of Knowledge Representation and Reasoning</meeting>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="121" to="133" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A polynomial approach to the constructive induction of structural knowledge</title>
		<author>
			<persName><forename type="first">J.-U</forename><surname>Kietz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Morik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="193" to="218" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">ECAI2000 Workshop on Ontology Learning</title>
	</analytic>
	<monogr>
		<title level="s">ser. CEUR WS Proceedings</title>
		<editor>S. Staab, A. Maedche, C. Nedellec, and P. Wiemer-Hastings</editor>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m">IJCAI2001 Workshop on Ontology Learning</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Staab</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Maedche</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Nedellec</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Hovy</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Knowledge-intensive induction of terminologies from metadata</title>
		<author>
			<persName><forename type="first">F</forename><surname>Esposito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Iannone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Palmisano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Semeraro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ISWC2004, Proceedings of the 3rd International Semantic Web Conference, ser. LNCS</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Van Harmelen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Mcilraith</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Plexousakis</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">3298</biblScope>
			<biblScope unit="page" from="441" to="455" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Decentralized case-based reasoning for the Semantic Web</title>
		<author>
			<persName><forename type="first">M</forename><surname>D'aquin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lieber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Napoli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 4th International Semantic Web Conference, ISWC2005, ser. LNCS</title>
				<editor>
			<persName><forename type="first">Y</forename><surname>Gil</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Motta</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><forename type="middle">R</forename><surname>Benjamins</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Musen</surname></persName>
		</editor>
		<meeting>the 4th International Semantic Web Conference, ISWC2005</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">3729</biblScope>
			<biblScope unit="page" from="142" to="155" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Resolution-based approximate reasoning for OWL DL</title>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Vrandecić</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 4th International Semantic Web Conference, ISWC2005, ser. LNCS</title>
				<editor>
			<persName><forename type="first">Y</forename><surname>Gil</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Motta</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><forename type="middle">R</forename><surname>Benjamins</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Musen</surname></persName>
		</editor>
		<meeting>the 4th International Semantic Web Conference, ISWC2005</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">3729</biblScope>
			<biblScope unit="page" from="383" to="397" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Machine Learning</title>
		<author>
			<persName><forename type="first">T</forename><surname>Mitchell</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1997">1997</date>
			<publisher>McGraw-Hill</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Assessing semantic similarity between spatial entity classes</title>
		<author>
			<persName><forename type="first">M</forename><surname>Rodríguez</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1997">1997</date>
		</imprint>
		<respStmt>
			<orgName>University of Maine</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. dissertation</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Towards measuring similarity in description logics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Borgida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Walsh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hirsh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the International Description Logics Workshop, ser. CEUR Workshop Proceedings</title>
				<meeting><address><addrLine>Edinburgh, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Automated resolution of semantic heterogeneity in multidatabases</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Bright</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Hurson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Pakzad</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transaction on Database Systems</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="212" to="253" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language</title>
		<author>
			<persName><forename type="first">P</forename><surname>Resnik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Artificial Intelligence Research</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="95" to="130" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A dissimilarity measure for the ALC description logic</title>
		<author>
			<persName><forename type="first">C</forename><surname>D'Amato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Esposito</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Semantic Web Applications and Perspectives, 2nd Italian Semantic Web Workshop SWAP2005</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Bouquet</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Tummarello</surname></persName>
		</editor>
		<meeting><address><addrLine>Trento, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>CEUR</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">166</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A dissimilarity measure for ALC concept descriptions</title>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 21st Annual ACM Symposium on Applied Computing, SAC2006</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Haddad</surname></persName>
		</editor>
		<meeting>the 21st Annual ACM Symposium on Applied Computing<address><addrLine>Dijon, France</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="1695" to="1699" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m">The Description Logic Handbook</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Baader</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Calvanese</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Mcguinness</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Nardi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Patel-Schneider</surname></persName>
		</editor>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Lp, a logic for representing and reasoning with statistical knowledge</title>
		<author>
			<persName><forename type="first">F</forename><surname>Bacchus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Intelligence</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="209" to="231" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Attributive concept descriptions with complements</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schmidt-Schauß</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Smolka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="26" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Deduction in concept languages: From subsumption to instance checking</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Donini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lenzerini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schaerf</surname></persName>
		</author>
		<ptr target="citeseer.ist.psu.edu/donini94deduction.html" />
	</analytic>
	<monogr>
		<title level="j">Journal of Logic and Computation</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="423" to="452" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Approximation and difference in description logics</title>
		<author>
			<persName><forename type="first">S</forename><surname>Brandt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Küsters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-Y</forename><surname>Turhan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Knowledge Representation</title>
				<editor>
			<persName><forename type="first">D</forename><surname>Fensel</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Giunchiglia</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Mcguinness</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M.-A</forename><surname>Williams</surname></persName>
		</editor>
		<meeting>the International Conference on Knowledge Representation</meeting>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="203" to="214" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Features of similarity</title>
		<author>
			<persName><forename type="first">A</forename><surname>Tversky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Review</title>
		<imprint>
			<biblScope unit="volume">84</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="327" to="352" />
			<date type="published" when="1977">1977</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A semantic similarity measure for expressive description logics</title>
		<author>
			<persName><forename type="first">C</forename><surname>D'Amato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Esposito</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Convegno Italiano di Logica Computazionale, CILC05, A. Pettorossi</title>
				<meeting>Convegno Italiano di Logica Computazionale, CILC05, A. Pettorossi<address><addrLine>Rome, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Analysis of Symbolic Data: Exploratory Methods for Extracting Statistical Information from Complex Data</title>
		<author>
			<persName><forename type="first">H.-H</forename><surname>Bock</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Diday</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Springer-Verlag</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">H</forename><surname>Witten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Frank</surname></persName>
		</author>
		<title level="m">Data Mining: Practical Machine Learning Tools and Techniques</title>
				<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
	<note>2nd ed</note>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">A similarity measure for the ALN description logic</title>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>D'amato</surname></persName>
		</author>
		<ptr target="http://cilc2006.di.uniba.it/download/camera/15FanizziCILC06.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of Convegno Italiano di Logica Computazionale</title>
				<meeting>Convegno Italiano di Logica Computazionale<address><addrLine>CILC06, Bari, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Computing least common subsumers in ALEN</title>
		<author>
			<persName><forename type="first">R</forename><surname>Küsters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Molitor</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI 2001</title>
				<editor>
			<persName><forename type="first">B</forename><surname>Nebel</surname></persName>
		</editor>
		<meeting>the International Joint Conference on Artificial Intelligence, IJCAI 2001</meeting>
		<imprint>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="219" to="224" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">A declarative kernel for ALC concept descriptions</title>
		<author>
			<persName><forename type="first">N</forename><surname>Fanizzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>D'amato</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 16th International Symposium on Methodologies for Intelligent Systems, ISMIS 2006, ser. LNAI</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Esposito</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Z</forename><surname>Ras</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Malerba</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Semeraro</surname></persName>
		</editor>
		<meeting>the 16th International Symposium on Methodologies for Intelligent Systems, ISMIS 2006, ser. LNAI</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">4203</biblScope>
			<biblScope unit="page" from="322" to="331" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
