=Paper=
{{Paper
|id=Vol-329/paper-5
|storemode=property
|title=A Panoramic Approach to Integrated Evaluation of Ontologies in the Semantic Web
|pdfUrl=https://ceur-ws.org/Vol-329/paper04.pdf
|volume=Vol-329
|dblpUrl=https://dblp.org/rec/conf/eon/DasguptaDL07
}}
==A Panoramic Approach to Integrated Evaluation of Ontologies in the Semantic Web==
A Panoramic Approach to Integrated Evaluation of
Ontologies in the Semantic Web
Sourish Dasgupta, Deendayal Dinakarpandian, Yugyung Lee
School of Computing and Engineering
University of Missouri-Kansas City,
Missouri, USA
{sdwb7, dinakard, leeyu}@umkc.edu
Abstract. As the sheer volume of new knowledge increases, there is a need to find
effective ways to convey and correlate emerging knowledge in machine-readable form. The
success of the Semantic Web hinges on the ability to formalize distributed knowledge in terms
of a varied set of ontologies. We present Pan-Onto-Eval, a comprehensive approach to
evaluating an ontology by considering its structure, semantics, and domain. We provide formal
definitions of the individual metrics that constitute Pan-Onto-Eval, and synthesize them into an
integrated metric. We illustrate its effectiveness by presenting an example based on multiple
ontologies for a University.
Keywords: Ontologies, Semantic Web, Open Evaluation, Ontology Ranking
1. Introduction
An important goal of the Semantic Web [1] is to enable agents to discover
knowledge that is distributed across the Web. The distributed knowledge needs to be
formalized in the form of ontologies so that relevant subsets may be selected for
different purposes. As stated by Sabou et al [2], this necessitates an efficient way to
evaluate and rank ontologies. Ontology evaluation is also important for the related
problems of ontology discovery, reasoning and modularization [2].
Tartir et al [8] and Sabou et al [2] have compiled various metrics that can be used
to evaluate ontologies. Ding et al [3] and Patel et al [4] have proposed evaluation
metrics based on a popularity measure that is derived from Google’s Page Rank
algorithm [5]. A number of semantic search engines like Swoogle [3, 6], OntoKhoj
[4] and OntoSelect [7] are based mainly on the popularity measure. Ontology
evaluation and ranking can be used for selecting relevant knowledge resources [8] and
for determining their quality. Moreover, ontology evaluation can be an efficient basis
for comparing several ontologies, as shown in our previous work [9].
Ontology summarization is the extraction of a snapshot of an ontology that
contains the most important characteristics of the ontology (concepts and relations
that represent the thematic categories of the ontology). Zhang et al [10,11] have
introduced ontology summarization for better understanding and improved alignment
of similar ontologies. The primary idea underlying their work is the extraction of
relevant vocabularies from ontologies based on notions such as RDF (http://www.w3.org/RDF/) sentences and
RDF graphs. However, they have not applied these summaries to the evaluation of ontologies, and to our
knowledge there has been limited work on the use of ontology summaries for the
purpose of ontology evaluation. Another important aspect is scalability:
current evaluation methodologies do not scale well to large ontologies. An
intuitive way to handle this problem might be to modularize ontologies according to
usage patterns (Sabou et al [2] and Noy [12]). However, on-the-fly modularization of
ontologies based on queries is challenging due to the significant computation cost
required for ontology modularization per se. This motivated us to use summaries of
ontologies as the basis of our evaluation computation instead of dealing with the
entire ontology.
In this paper, we propose a novel way of evaluating ontologies based on our
ontology summarization technique [13] that focuses on multiple semantic dimensions
of ontologies. In view of the extensive diversity of ontologies, we need an integrated
approach to ontology evaluation that considers the domain of an ontology as well as its
structural and semantic perspectives.
2. Related Work
Several research efforts have tried to classify methods for evaluating
ontologies according to their objectives [14,15]. Some systems (Swoogle [3,6], OntoSelect
[7] and OntoKhoj [4]) focus on measuring the authoritativeness of an ontology
by exploiting cross-references to the ontology, ranking ontologies in a manner
similar to PageRank [5]. However, Alani et al [16] pointed out that cross-references
between ontologies might not always be available, and hence evaluation based solely
on this criterion might fail. Furthermore, even though an ontology might be well
connected with several other ontologies, the connected ontologies might cover topics
differently and have different semantic implications. Thus, the importance of an ontology
cannot be captured simply by calculating its degree of reference.
Structural richness is a measure of the topological aspect (depth and breadth) of an
ontology, which Tartir et al [8] have termed “inheritance richness.” This criterion
measures how the information is distributed over the entire ontology and determines
whether the ontology is domain-specific (the depth is greater than the width) or
generic (the width is greater than the depth). Another approach is to determine the
significance of a particular concept based on the number of super and sub concepts
[16-18]. In [16], two very important metrics have been considered: density measure
and centrality measure [18]. Density is determined based on the number of super and
sub concepts of the given concept. Centrality is a measure of how far a concept is
from the root of its hierarchy, relative to the length of the longest path from
the root to a leaf that passes through the concept. It is assumed that concepts in the center
of an ontology are the most representative. This kind of evaluation relies largely on
the structural aspect of concepts in ontologies.
Relational richness is a measure that captures how a concept is related to other
concepts. According to Tartir et al [8], relational richness of an ontology is defined as
the ratio of the number of non-IS-A relations to the total number of relations in the
ontology. This definition, however, is somewhat simplistic because it does not take
into account the roles that concepts play, as domain (subject) or range (object), in
a given relation. A similar concern about relational richness applies to Sabou et al
[2], where no formal model has been defined. These measures take all relations into
account regardless of the fact that there may be more than one concept hierarchy in a
single ontology. It is important that the set of relations pertaining to one hierarchy
be treated separately from those in other hierarchies; otherwise the thematic differences
between the hierarchies cannot be captured correctly, and the measure cannot properly
reflect the perspective of an ontology. Existing studies are thus limited in measuring the
semantics of relations in an ontology. In our model, we take into consideration the roles
of the concepts involved in relations, as well as additional categories of relations, for
ontology evaluation.
3. Proposed Model – Fundamental Concepts
We now present our ontology evaluation model, called Pan-Onto-Eval, which builds
on our previous work on ontology summarization [13]. Ontology summarization aims
to extract a snapshot of an ontology that contains the most important characteristics of
the ontology (concepts and relations that represent the thematic categories of the
ontology). Our measurement represents a comprehensive perspective on the following
four important issues: a) Triple Centricity, b) Theme Centricity, c) Structure
Centricity and d) Domain Centricity. We hypothesize that all these features are highly
related to each other so that an integrated model can serve efficiently as the basis of
evaluation metrics. We elaborate on these fundamental concepts below.
a) Triple Centricity: This is the central feature of our model. In an ontology O, the
set of relations (denoted by R) is partitioned into IS-A relations (denoted by RS) and
non-IS-A relations (denoted by RN): RS ⊂ R, RN ⊂ R and RN ∩ RS = ∅. Given any
non-IS-A relation, a concept participating in it is either a domain concept (DC) or a
range concept (RC), depending upon its role in the relation.
From the triple-centric perspective, we say that an ontology is meaningful
when it contains many diverse relationships, i.e., domain concepts associated with other
concepts through diverse relations. Hence we analyze the roles that concepts play in
relations (i.e., whether they are domain or range concepts) and their importance, as
measured in our work on ontology summarization [13].
Furthermore, we analyze how the range concepts are associated within these domains,
since the range concepts act as the information source for the domain concepts. In this
way, we evaluate an ontology from a triple-centric perspective that is distinct from
other works [8, 16-18].
b) Theme Centricity: This refers to the classification of non-IS-A relations in an
ontology. It is a measure that reflects the importance of non-IS-A
relations in the evaluation of an ontology in terms of relational richness. Tartir et al
[8] stated “An ontology that contains many relations other than class-subclass
relations is richer than a taxonomy with only class-subclass relationships”. Sabou et al
[2] considered relations as a primary criterion for the summary extraction of
ontologies. However, they concentrated on a quantitative aspect such as the
percentage of non-IS-A vs. IS-A relations [8] and did not take into account how these
non-IS-A relations are distributed over an ontology.
In our work, seven broad thematic categories for classification of non-IS-A
relations inspired by UMLS [19] have been defined as follows: Compositional,
Attributive, Spatial, Functional, Temporal, Comparative and Conceptual. It is evident
from the justification provided for the triple centric approach that the relations
between domain and range concepts carry different semantic ‘senses’. This
classification thus provides for better understanding of the thematic categories of the
ontology so that it may facilitate effective ontology evaluation and querying. This is
because it allows one to map relations existing in query triples to those contained in
the ontology.
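To make the classification step concrete, the sketch below shows a naive, keyword-based tagger for non-IS-A relation names. It is purely illustrative: the keyword lists and the fallback to Conceptual are our own assumptions, and the paper's actual classification is inspired by UMLS semantic relations rather than string matching.

```python
# Hypothetical keyword lists; a real classifier would draw on UMLS semantic
# relations rather than substring matching on relation names.
CATEGORY_KEYWORDS = {
    "Compositional": ["partof", "haspart", "memberof", "contains"],
    "Attributive":   ["hasname", "hastitle", "hascolor", "property"],
    "Spatial":       ["locatedin", "near", "adjacentto"],
    "Functional":    ["worksfor", "teaches", "produces", "uses"],
    "Temporal":      ["before", "after", "during", "startdate"],
    "Comparative":   ["largerthan", "similarto", "greaterthan"],
    "Conceptual":    ["isabout", "relatedto", "denotes"],
}

def classify_relation(relation_name: str) -> str:
    """Tag a non-IS-A relation name with one of the seven thematic categories."""
    key = relation_name.replace("-", "").replace("_", "").lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in key for k in keywords):
            return category
    return "Conceptual"  # fallback when no keyword matches

print(classify_relation("worksFor"))   # -> Functional
print(classify_relation("hasTitle"))   # -> Attributive
```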
c) Structure Centricity: This measure describes the topology (i.e., shape and size)
of concept hierarchies of an ontology. Consider two topologies [8, 9]: the top-shaped
hierarchy, in which the breadth of class hierarchies decreases as the depth increases,
indicates a more generalized thematic category, whereas the pyramidal hierarchy, in which
the breadth increases as the depth increases, indicates a more domain-specific one.
In reality, however, ontologies have more irregular shapes in terms of the breadth-depth
ratio. Previous works [8, 9] only consider the average number of sub-classes in a given
hierarchy, so this measure is not appropriate for evaluating diverse structural aspects of
an ontology. From a structural perspective, we may also want to analyze the distribution
of non-IS-A relations: if a relation appears near the top of a hierarchy, it might be too
abstract; if it appears near the leaves, it might be too specific.
d) Domain Centricity: An ontology may consist of more than one IS-A hierarchy.
Each of these hierarchies may have a different thematic category (or semantic
implication). In other words, each hierarchy contributes differently to the
semantics of the ontology as a whole. Each hierarchy consists of some domain
concepts typed under their own root; the specific perspective of these hierarchies may
be characterized by their relations and range concepts. That is why we analyze the
semantic richness of a hierarchy based on the comprehensiveness criterion (in Section
4) and incorporate the measure into an ontology evaluation score. We assume that this
approach is more appropriate than taking the ontology as a whole because it considers
the semantics and distribution of information across the ontology.
4. Pan-Onto-Eval Metrics
We now formalize the evaluation metrics of Pan-Onto-Eval. The
evaluation metric is defined by considering the following five qualitative aspects of an
ontology: (1) Information Content, (2) Relational Richness, (3) Inheritance Richness,
(4) Dimensional Richness, and (5) Domain Importance. For a given ontology, Pan-Onto-Eval
analyzes each hierarchy that exists under the root of the ontology independently and then
combines the information from the multiple hierarchies into information representing the
ontology as a whole.
We define the parameters that will be used in the formulas (a compact data-structure sketch follows the list):
M: Number of range concepts in H
Mi: Number of selected range concepts with the thematic category i in the summary
N: Number of domain concepts in H
Ni: Number of selected domain concepts with the thematic category i in the summary
Q: Number of thematic categories of relations in H
Q': Total number of thematic categories (seven in our model)
R: Number of non-IS-A relations in H
Rt(RC): Number of relations classified under the thematic category t for a range concept RC
R(t): Number of relations selected in the thematic category t in the summary
R(DC): Number of relations associated with the domain concept DC
S(DCi): Number of direct sub-concepts (children) under the domain concept DCi in H
α: Normalization function (a sigmoid function is used in the analysis)
K: Number of hierarchies in the ontology
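For readability, the sketch below bundles these per-hierarchy quantities into a single Python record. The field names are our own shorthand for the parameters above; how the values are extracted from an ontology summary is not shown here.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class HierarchySummary:
    """Per-hierarchy quantities used by the Pan-Onto-Eval metric sketches."""
    num_relations: int                       # R: non-IS-A relations in H
    num_range_concepts: int                  # M
    num_domain_concepts: int                 # N
    rc_relation_counts: List[int] = field(default_factory=list)              # sum_t Rt(RC), one entry per RC
    relations_per_category: Dict[str, int] = field(default_factory=dict)     # R(t)
    selected_dc_per_category: Dict[str, int] = field(default_factory=dict)   # Ni
    selected_rc_per_category: Dict[str, int] = field(default_factory=dict)   # Mi
    subconcepts_per_dc: List[int] = field(default_factory=list)              # S(DCi)
    relations_per_dc: List[int] = field(default_factory=list)                # R(DCi)
    total_categories: int = 7                # Q'
```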
1) Information Content (IC) measures how well information involving relations R is
distributed over an IS-A hierarchy H in an Ontology O. Our hypothesis with regard to
IC is that a well spread distribution of important relations with respect to domain
concepts DC in H indicates richness of information. For this purpose, we borrow the
basic formula for information entropy [20] to determine the degree of information content
of ontologies. We measure the number of relations in terms of the number of range
concepts RC that are associated with the hierarchy H.
Information Addition (IA) measures how important a Range Concept (RC) is as
compared to other RCs associated with a hierarchy. This can be represented as the
ratio of the number of observed relations associated with a thematically categorized
RC to the maximum number of possible relations of the RC. The maximum number of
possible relations of a RC is defined using the pigeonhole principle
(http://zimmer.csufresno.edu/~larryc/proofs/proofs.pigeonhole.html) as follows:

IA(RC) = \frac{\sum_{t=1}^{Q} R_t(RC)}{R - M + 1}    (1)
Entropy of the Hierarchy E(H) is the amount of uncertainty associated with the
relational association of the RC to the hierarchy H. In other words, the overall
uncertainty of associated RCs can be measured as below.
E(H) = -\sum_{i=1}^{M} IA(RC_i) \cdot \log_2 IA(RC_i)    (2)
We now formally define Information Content (IC) of an IS-A hierarchy H as:
IC(H) = R \cdot \alpha \cdot \frac{1}{E(H)}    (3)
A high value for IC implies that the information content of the hierarchy H in an
ontology is rich due to rich relationships defined in H.
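A minimal sketch of Equations (1)-(3) follows. The paper only states that α is a sigmoid normalization function; here we assume a logistic sigmoid applied to R, which is our own choice.

```python
import math

def information_content(rc_relation_counts, num_relations, num_range_concepts):
    """Compute IC(H) from Equations (1)-(3).

    rc_relation_counts : for each range concept RC in H, the count
                         sum_t Rt(RC) (the numerator of Equation 1)
    num_relations      : R, the number of non-IS-A relations in H
    num_range_concepts : M, the number of range concepts in H
    """
    denom = num_relations - num_range_concepts + 1            # pigeonhole bound
    ia = [c / denom for c in rc_relation_counts]              # Equation (1)
    entropy = -sum(p * math.log2(p) for p in ia if p > 0)     # Equation (2)
    if entropy == 0:
        return 0.0                                            # degenerate hierarchy
    alpha = 1.0 / (1.0 + math.exp(-num_relations))            # assumed sigmoid for alpha
    return num_relations * alpha / entropy                    # Equation (3)
```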
2) Relational Richness (RR): This metric measures the degree of important relations
in a particular hierarchy of an ontology. We define RR for the hierarchy H as follows:
RR(H) = \frac{1}{Q} \sum_{t=1}^{Q} R(t)    (4)
This metric equation captures the important relations associated with the range
concepts that are scanned while generating the summary.
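A sketch of Equation (4), assuming R(t) is supplied as a per-category count of the relations selected in the summary:

```python
def relational_richness(relations_per_category):
    """Equation (4): average number of selected relations per thematic
    category present in the hierarchy summary (R(t) averaged over Q)."""
    q = len(relations_per_category)          # Q: categories present in H
    if q == 0:
        return 0.0
    return sum(relations_per_category.values()) / q
```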
3) Inheritance Richness (IR) captures whether the hierarchical (IS-A) relations are
rich both structurally and in their information content. This is important because
a concept may have a rich set of sub-concepts without carrying much information
per se; such cases have been ignored in the metric definitions of previous works [8].
We define IR of a particular hierarchy H as:
IR(H) = \frac{1}{N} \sum_{i=1}^{N} S(DC_i) \cdot R(DC_i)    (5)
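A sketch of Equation (5); the two lists are assumed to be aligned so that the i-th entries refer to the same domain concept DCi:

```python
def inheritance_richness(subconcepts_per_dc, relations_per_dc):
    """Equation (5): weight each domain concept's direct sub-concept count
    S(DCi) by its relation count R(DCi), averaged over the N domain concepts."""
    n = len(subconcepts_per_dc)              # N: domain concepts in H
    if n == 0:
        return 0.0
    return sum(s * r for s, r in zip(subconcepts_per_dc, relations_per_dc)) / n
```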
4) Dimensional Richness (DR) measures the richness of the thematic categories of
relations in a hierarchy of an ontology. It reflects the different ways in which an
ontology hierarchy can satisfy queries based on its summary content. We formally define DR
of an IS-A hierarchy H as:
DR(H) = \frac{Q}{Q'} \sum_{i=1}^{Q} N_i \cdot M_i    (6)
The first factor of Equation 6 indicates the relative coverage of thematic categories for
an ontology. The second factor indicates the richness of all of these categories in
terms of the number of important (selected) range concepts and their domain
concepts. A high DR value suggests that the corresponding ontology carries rich
semantic dimensionality, with a high ratio of identified categories to the total
number of defined categories. It also indicates a high density of selected range
concepts and/or of their corresponding domain concepts in the ontology summary.
This means that the ontology is rich in certain thematic categories, and queries
based on those categories can be served best.
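A sketch of Equation (6), assuming Ni and Mi are given as per-category counts taken from the summary:

```python
def dimensional_richness(selected_dc_per_category, selected_rc_per_category,
                         total_categories=7):
    """Equation (6): category coverage Q/Q' times the summed per-category
    product of selected domain concepts (Ni) and range concepts (Mi)."""
    categories = set(selected_dc_per_category) | set(selected_rc_per_category)
    q = len(categories)                          # Q: categories found in H
    coverage = q / total_categories              # first factor of Equation (6)
    richness = sum(selected_dc_per_category.get(c, 0) *
                   selected_rc_per_category.get(c, 0) for c in categories)
    return coverage * richness
```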
5) Domain Importance (DMI): This metric provides insight into the richness of the
core domain(s) of interest that a particular hierarchy Hk contains compared to the
other hierarchies of the same ontology. It is a compound metric built from the four
preceding metrics. We define Domain Factor (DMF) and Domain Importance
(DMI) as follows:
DMF(H_k) = IC(H_k) + IR(H_k) + DR(H_k) + RR(H_k)    (7)

DMI(H_k) = \frac{DMF(H_k)}{\max_{i=1}^{K} DMF(H_i)}    (8)
The closer DMI is to its maximum possible value of 1, the more important the domain
represented by the hierarchy is compared to the other hierarchies.
Ontology Evaluation Score ( ρ ): For a given ontology O, we analyze the richness of
each hierarchy within O separately and according to respective criteria. We can now
combine them together into a single model that can effectively evaluate ontologies. In
order to combine the individual analysis of hierarchies, we compute it as the product
of the average of DMI and the maximum DMF (the best one). We formalize the
ontology evaluation score (denoted by ρ ) as follows:
\rho(O) = \max_{i=1}^{K} DMF(H_i) \cdot \frac{1}{K} \sum_{i=1}^{K} DMI(H_i)    (9)
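The sketch below combines Equations (7)-(9): it sums the four per-hierarchy metrics into DMF, normalizes by the best hierarchy to obtain DMI, and multiplies the maximum DMF by the average DMI to obtain ρ. The input format (one tuple of IC, IR, DR, RR per hierarchy) is our own convention.

```python
def evaluation_score(hierarchy_metrics):
    """Equations (7)-(9): hierarchy_metrics holds one (IC, IR, DR, RR) tuple
    per hierarchy H_k; returns the DMF list, the DMI list and the score rho."""
    dmf = [ic + ir + dr + rr for ic, ir, dr, rr in hierarchy_metrics]   # Eq. (7)
    best = max(dmf)
    if best == 0:
        return dmf, [0.0] * len(dmf), 0.0
    dmi = [f / best for f in dmf]                                       # Eq. (8)
    rho = best * sum(dmi) / len(dmi)                                    # Eq. (9)
    return dmf, dmi, rho

# Hypothetical usage with made-up per-hierarchy metric values:
# dmf, dmi, rho = evaluation_score([(2.0, 0.0, 0.5, 1.0), (4.0, 3.0, 1.7, 1.3)])
```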
5. Experimental Results
We analyze three related university ontologies (O1: http://www.ksl.stanford.edu/projects/DAML/ksl-daml-desc.daml,
O2: http://www.ksl.stanford.edu/projects/DAML/ksl-daml-instances.daml,
O3: http://www.cs.umd.edu/projects/plus/DAML/onts/univ1.0.daml) and evaluate them
according to the proposed model. As preprocessing, we convert the DAML files to
OWL using a converting tool (http://www.mindswap.org/2002/owl.shtml) and generate
summaries. The application is implemented using the Protégé OWL 3.3 beta API on a
Windows machine. Table 1 shows the analysis of the University-I ontology. We analyze
9 of the 11 hierarchies (denoted as Hi) in the ontology, excluding two hierarchies
that consist of a single concept with no relations. Hierarchy H6 has the highest
number of associated non-IS-A relations (12) and the highest number of range concepts
(9), while H5 has the maximum number of domain concepts (5) and the maximum number of levels.
It is interesting to note that although H6 and H7 are structurally and relationally
richer than the others, they have a low Information Content (IC). This is because the
relations are not distributed evenly throughout the hierarchy, and most of the domain
concepts in these hierarchies are only weakly associated with range concepts in terms of
information distribution. Hierarchy H5 has the highest Domain Importance (DMI)
value and is thus considered the best hierarchy of this ontology. This is due to its
high Inheritance Richness (IR) and Dimensional Richness (DR) scores compared to the
other hierarchies, and it shows how important it is to have high-weight relations
associated with the concepts (and sub-concepts) of a hierarchy. The contributing
factor is the dimensional variety of the summary, which reflects the rich categorical
coverage of the hierarchy as a whole. This hierarchy is rooted at the domain concept
‘Document’ and covers the attributive, functional and temporal aspects evenly. The
next best hierarchy is H7, rooted at the concept ‘Organization’, with the majority of
relations falling under the conceptual and attributive categories. Close to it is H6,
rooted at ‘Organism’. The remaining hierarchies have considerably lower DMI values.
The evaluation score of University-I (ρ) is 6.109.
Table 2 indicates that the University-II ontology is an instantiation of
University-I. It is interesting to see that the new hierarchy (having the single concept
‘Chimaera-Export-Enable’) adds no richness to the ontology. An important
observation is that the best hierarchy in this ontology is H6, whereas in its parent
ontology the best hierarchy is H5. This is because of the partial use of the
University-I ontology, which lowers the DR and RR values of H5. The evaluation score
of the ontology (ρ) is 3.909.
Table 1. Evaluation of University – I
| Metric | H1 | H2 | H3 | H4 | H5 | H6 | H7 | H8 | H9 |
|---|---|---|---|---|---|---|---|---|---|
| Number of relations (R) | 2 | 1 | 3 | 3 | 4 | 12 | 11 | 1 | 3 |
| Number of range concepts (M) | 2 | 1 | 3 | 3 | 4 | 9 | 7 | 1 | 3 |
| Number of domain concepts (N) | 1 | 1 | 1 | 1 | 5 | 4 | 2 | 1 | 1 |
| Information content (IC) | 2 | 1 | 3 | 3 | 4 | 3 | 3.52 | 1 | 3 |
| Inheritance richness (IR) | 0 | 0 | 0 | 0 | 4 | 3 | 1 | 0 | 0 |
| Dimensional richness (DR) | 0.57 | 0.14 | 1.28 | 1.28 | 1.7 | 1.4 | 3.4 | 0.14 | 0.57 |
| Relational richness (RR) | 1 | 1 | 1 | 1 | 1.33 | 2.4 | 2.75 | 1 | 1.5 |
| Domain factor (DMF) | 2.57 | 2.14 | 3.28 | 3.28 | 8.03 | 7.05 | 7.15 | 2.14 | 3.07 |
| Domain importance (DMI) | 0.29 | 0.27 | 0.38 | 0.38 | 1 | 0.87 | 0.89 | 0.27 | 0.37 |
Table 2. Evaluation of University - II
| Metric | H1 | H2 | H3 | H4 | H5 | H6 | H7 | H8 |
|---|---|---|---|---|---|---|---|---|
| Number of relations (R) | 0 | 1 | 3 | 0 | 2 | 6 | 5 | 2 |
| Number of range concepts (M) | 0 | 1 | 3 | 0 | 2 | 6 | 3 | 2 |
| Number of domain concepts (N) | 1 | 1 | 1 | 1 | 5 | 4 | 2 | 1 |
| Information content (IC) | 0 | 1 | 3 | 0 | 2 | 6 | 2.9 | 2 |
| Inheritance richness (IR) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Dimensional richness (DR) | 0 | 0.14 | 1.28 | 0 | 0.57 | 1.71 | 2.85 | 0.57 |
| Relational richness (RR) | 0 | 1 | 1 | 0 | 1 | 2 | 1.25 | 1 |
| Domain factor (DMF) | 0 | 2.14 | 3.28 | 0 | 2.57 | 4.71 | 4.68 | 2.57 |
| Domain importance (DMI) | 0 | 0.454 | 0.696 | 0 | 0.546 | 1 | 0.99 | 0.546 |
Table 3. Evaluation of University - III
| Metric | H1 | H2 | H3 | H4 | H5 | H6 | H7 |
|---|---|---|---|---|---|---|---|
| Number of relations (R) | 1 | 3 | 1 | 6 | 2 | 0 | 0 |
| Number of range concepts (M) | 1 | 2 | 1 | 4 | 2 | 0 | 0 |
| Number of domain concepts (N) | 1 | 16 | 2 | 4 | 7 | 2 | 3 |
| Information content (IC) | 1 | 1.95 | 1 | 3.3 | 2 | 0 | 0 |
| Inheritance richness (IR) | 0 | 7 | 0 | 8 | 0 | 0 | 0 |
| Dimensional richness (DR) | 0.14 | 0.57 | 0.14 | 1.28 | 0.57 | 0 | 0 |
| Relational richness (RR) | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
| Domain factor (DMF) | 2.14 | 9.22 | 2.14 | 10.83 | 2.57 | 0 | 0 |
| Domain importance (DMI) | 0.198 | 0.851 | 0.198 | 1 | 0.237 | 0 | 0 |
The third ontology, University-III, is analyzed in Table 3. This ontology differs
semantically from the previous two, although there are common concepts among them,
because the associated relations (and hence the semantic categories) are quite
different. H4 is rooted at ‘Person’ and has 4 DCs, 4 RCs and 6 relations; it is also
structurally the best among the seven hierarchies of the ontology. If we compare H4
with H2 (rooted at ‘Employee’), we see that the numbers of RCs and relations in H2 are
smaller than in H4. Although the number of DCs in H2 is 16 (four times that of H4),
its IR value (7) is lower than that of H4 (8). This is because most of the
inheritances in H2 are relationally void (3 relations and 2 RCs); they have no
semantic importance even though they are structurally very rich. The second most
structurally rich hierarchy is H5 (7 DCs), but it has a low DMI due to low dimensional
richness, in spite of its IC being high. The other important factor behind such a low
DMI is that its relations are associated with the leaf concepts of the hierarchy, so
the IR value is 0 (compared to 8 for H4 and 7 for H2). The evaluation score of
University-III (ρ) is 4.567.
Figure 1 gives a comparative analysis of the three ontologies, showing the breakdown
of the average contribution of each metric to the final evaluation score.
[Bar chart comparing Avg. IC, Avg. IR, Avg. DR, Avg. RR and the evaluation score (E-Score) for the three ontologies.]
Fig. 1. Comparison of the three ontologies (IC, IR, DR, RR are scaled by factor 10)
6. Conclusion
This paper has presented Pan-Onto-Eval, a comprehensive approach to evaluating
an ontology by considering various aspects like structure, semantics, and domain. The
main contribution of this paper is a formal treatment of the model for an automated
and integrated evaluation of ontologies. The experimental results of the university
ontologies demonstrate the essence and benefits of the proposed model. This work is
limited by a lack of rigorous evaluation by experts. The summarization technique on
which the model is based could be explored more fully, and the thematic categories
could be expanded further for real-world applications. Overall, the model has great
potential for the evaluation and selection of distributed knowledge in the Semantic Web.
References
1. Berners-Lee, T., J. Hendler, and O. Lassila, The Semantic Web. Scientific
American, 2001. 284(5): p. 34-43.
2. M. Sabou, et al. Ontology Selection: Ontology Evaluation on the Real
Semantic Web. in the Evaluation of Ontologies for the Web (EON). 2006.
3. L. Ding, et al. Finding and Ranking Knowledge on the Semantic Web. in the
4th International Semantic Web Conference. 2005.
4. C. Patel, et al. OntoKhoj: A Semantic Web Portal for Ontology Searching,
Ranking and Classification. in the 5th International ACM Workshop on Web
Information and Data Management. 2003.
5. L. Page, et al., The PageRank Citation Ranking: Bringing Order to the Web.
1998, Stanford University.
6. L. Ding, et al. Swoogle: A Search and Metadata Engine for the Semantic
Web. in 13th ACM International Conference on Information and Knowledge
Management. 2004.
7. P. Buitelaar, T. Eigner, and T. Declerck. Ontoselect: A Dynamic Ontology
Library with Support for Ontology Selection. in the International Semantic
Web Conference. 2004. Hiroshima, Japan.
8. S. Tartir, et al. OntoQA: Metric-Based Ontology Quality Analysis. in IEEE
Workshop on Knowledge Acquisition from Distributed, Autonomous,
Semantically Heterogeneous Data and Knowledge Sources. 2005.
9. K. Supekar, C. Patel, and Y. Lee. Characterizing Quality of Knowledge on
the Semantic Web. in the AAAI Florida AI Research Symposium. 2004.
10. X. Zhang, G. Cheng, and Y. Qu. Ontology Summarization Based on RDF
Sentence Graph. in 16th International World Wide Web Conference. 2007.
11. X. Zhang, H. Li, and Y. Qu. Finding Important Vocabulary within Ontology.
in 1st Asian Semantic Web Conference (ASWC). 2006.
12. Noy, N.F., Evaluation by Ontology Consumers. IEEE Intelligent Systems,
2004. 19(4): p. 74-81.
13. S. Dasgupta and Y. Lee, Relation Oriented Ontology Summarization. 2007,
University of Missouri-Kansas City.
14. J. Brank, M. Grobelnik, and D. Mladenic. A Survey of Ontology Evaluation
Techniques. in Conference on Data Mining and Data Warehouses, SiKDD.
2005. Ljubljana, Slovenia.
15. J. Hartmann, et al. Methods for ontology evaluation. in Knowledge Web
Deliverable D1.2.3. 2005.
16. H. Alani, C. Brewster, and N. Shadbolt. Ranking Ontologies with
AKtiveRank. in 5th International Semantic Web Conference. 2006.
17. H. Alani and C. Brewster. Metrics for Ranking Ontologies. in 15th
International Conference on World Wide Web. 2006. Edinburgh, UK.
18. H. Alani and C. Brewster. Ontology Ranking based on the Analysis of
Concept Structures. in International Conference on Knowledge Capture. 2005.
19. V. Lopez, M. Pasin, and E. Motta. AquaLog: An Ontology Portable Question
Answering System for the Semantic Web. in European Semantic Web
Conference (ESWC). 2005.
20. Shannon, C.E., A Mathematical Theory of Communication. Bell System
Technical Journal, 1948. 27: p. 379-423, 623-656.