<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Improving Ontology Recommendation and Reuse in WebCORE by Collaborative Assessments</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Iván Cantador</string-name>
          <email>ivan.cantador@uam.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Miriam Fernández</string-name>
          <email>miriam.fernandez@uam.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pablo Castells</string-name>
          <email>pablo.castells@uam.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Escuela Politécnica Superior, Universidad Autónoma de Madrid, Campus de Cantoblanco</institution>
          ,
          <addr-line>28049, Madrid</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this work, we present an extension of CORE [8], a tool for Collaborative Ontology Reuse and Evaluation. The system receives an informal description of a specific semantic domain and determines which ontologies from a repository are the most appropriate to describe the given domain. For this task, the environment is divided into three modules. The first component receives the problem description as a set of terms, and allows the user to refine and enlarge it using WordNet. The second module applies multiple automatic criteria to evaluate the ontologies of the repository, and determines which ones best fit the problem description. A ranked list of ontologies is returned for each criterion, and the lists are combined by means of rank fusion techniques. Finally, the third component uses manual user evaluations in order to incorporate a human, collaborative assessment of the ontologies. The new version of the system incorporates several novelties, such as its implementation as a web application; the incorporation of an NLP module to manage the problem definitions; modifications to the automatic ontology retrieval strategies; and a collaborative framework to find potentially relevant terms according to previous user queries. Finally, we present some early experiments on ontology retrieval and evaluation, showing the benefits of our system.</p>
      </abstract>
      <kwd-group>
        <kwd>Ontology evaluation</kwd>
        <kwd>ontology reuse</kwd>
        <kwd>rank fusion</kwd>
        <kwd>collaborative filtering</kwd>
        <kwd>WordNet</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Categories and Subject Descriptors</title>
      <p>H.3.3 [Information Storage and Retrieval]: Information Search
and Retrieval – information filtering, retrieval models, selection
process.</p>
      <p>General Terms: Algorithms, Measurement, Human Factors</p>
    </sec>
    <sec id="sec-2">
      <title>1. INTRODUCTION</title>
      <p>The Web can be considered as a live entity that grows and evolves
fast over time. The amount of content stored and shared on the
web is increasing quickly and continuously. The global body of
multimedia resources on the Internet is undergoing a significant
growth, reaching a presence comparable to that of traditional text
contents. The consequences of this enlargement result in well
known difficulties and problems, such as finding and properly
managing all the existing amount of sparse information.
To overcome these limitations the so-called “Semantic Web”
trend has emerged with the aim of helping machines process
information, enabling browsers or other software agents to
automatically find, share and combine information in consistent
ways. As put by Tim Berners-Lee in 1999, “I have a dream for
the Web in which computers become capable of analyzing all the
data on the Web – the content, links, and transactions between
people and computers. A ‘Semantic Web’, which should make this
possible, has yet to emerge, but when it does, the day-to-day
mechanisms of trade, bureaucracy and our daily lives will be
handled by machines talking to machines. The ‘intelligent agents’
people have touted for ages will finally materialize”.</p>
      <p>At the core of these new technologies, ontologies are envisioned
as key elements to represent knowledge that can be understood,
used and shared among distributed applications and machines.
However, ontological knowledge mining and development are
difficult and costly tasks that require major engineering efforts.
Developing an ontology from scratch requires the expertise of at
least two different individuals: an ontology engineer that ensures
the correctness during the ontology design and development, and
a domain expert, responsible for capturing the semantics of a
specific field into the ontology. In this context, ontology reuse
becomes an essential need in order to exploit past and current
efforts and achievements.</p>
      <p>
        In this scenario, it is also important to emphasize that ontologies,
as well as content, do not stop evolving and growing within the
Web. They are part of its wave of growth and evolution, and they
need to be managed and kept up to date in distributed
environments. In this perspective, the initial efforts to collect
ontologies in libraries [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] are not sufficient, and novel
technologies are necessary to successfully retrieve this special
kind of content.
      </p>
      <p>
        Novel tools have been recently developed, such as ontology
search engines [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], which represent an important first step towards
automatically assessing and retrieving ontologies which satisfy
user queries and requests. However, ontology reuse demands
additional efforts to address special needs and requirements from
ontology engineers and practitioners. It is necessary to evaluate
and measure specific ontology features, such as lexical
vocabulary, relations [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], restrictions, consistency, correctness,
etc., before making an adequate selection. Some of these features
can be measured automatically, but some, like the correctness or
the level of formality, require a human judgment to be assessed.
In this context, Web 2.0 is emerging as a new trend where people
collaborate and share their knowledge to successfully achieve
their goals. New search engines like Technorati
(http://technorati.com/) exploit blogs
with the aim of finding not only the information that the user is
looking for, but also the experts that might best answer the
user’s requirements. As put by David Sifry, one of the founders of
Technorati, in an interview for a Spanish newspaper, “Internet has
been transformed from the great library to the great
conversation”.
      </p>
      <p>
        Following this aspiration, the work presented here aims to
enhance ontology retrieval and recommendation, combining
automatic evaluation techniques with explicit users’ opinions and
experiences. This work follows a previous approach for
Collaborative Ontology Reuse and Evaluation over controlled
repositories, named CORE [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. For the work reported in this
paper, the tool has been enhanced and adapted to the Web. Novel
technologies, such as AJAX, have been incorporated into the
system for the design and implementation of the user interface. It
has also been modified and improved to overcome previous
limitations, such as handling large numbers of ontologies. The
collaborative capabilities have also been extended within two
different frameworks. Firstly, during the problem definition phase,
the system helps users to express their needs and requirements by
showing other problem descriptions previously given by different
users. Secondly, during the ontology retrieval phase, the system
helps users to enhance the automatic system recommendations by
using other user evaluations and comments.
      </p>
      <p>Following Leonardo Da Vinci’s words, “Wisdom is the daughter
of experience”, our tool aims to take a step forward in helping
users be wise by exploiting other people’s experience and
expertise.</p>
      <p>The rest of the paper has been organized as follows. Section 2
summarizes some relevant work related to our system. Its
architecture is described in Section 3. Section 4 contains empirical
results obtained from early experiments done with a prototype of
the system. Finally, several conclusions and future research lines
are given in Section 5.</p>
    </sec>
    <sec id="sec-3">
      <title>2. RELATED WORK</title>
    </sec>
    <sec id="sec-4">
      <title>2.1 Ontology Evaluation</title>
      <p>Two well-known scenarios for ontology reuse have been
identified in the Semantic Web area. The first one addresses the
common problem of finding the most adequate ontologies for a
specific domain. The second scenario envisions the not so
common but real situation in which Semantic Web applications
need to automatically and dynamically find an ontology. In this
work, we focus our attention on the first scenario, where users are
the ones who express their information needs. In this scenario,
ontology reuse involves several areas such as ontology evaluation,
selection, search and ranking.</p>
      <p>
        Several ontology libraries and search engines have been
developed in the last few years to address the problem of ontology
search and retrieval. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] presents a complete study of ontology
libraries (WebOnto, Ontolingua, SHOE, etc.), where their
functionalities are evaluated attending to different criteria such as
ontology management, ontology adaptation and ontology
standardization. Although ontology libraries are a good temporary
solution for ontology retrieval, they suffer from the current
limitation of not being open to the Web. In that sense, Swoogle
[
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] constitutes one of the biggest efforts carried out to crawl,
index and search for ontologies distributed across the Web.
To obtain the most appropriate ontology and fulfil ontology
engineers’ requirements, search engines and libraries should be
complemented with evaluation methodologies.
      </p>
      <p>Ontology evaluation can be defined as assessing the quality and
the adequacy of an ontology for being used in a specific context,
for a specific goal. From our perspective, ontology evaluation
constitutes the cornerstone of ontology reuse because it faces the
complex task of evaluating, and consequently selecting, the most
appropriate ontology in each situation.</p>
      <p>
        An overview of ontology evaluation approaches is presented in
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], where four different categories are identified: those that
evaluate an ontology by comparing it to a Golden Standard [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ];
those that evaluate the ontologies by plugging them in an
application and measuring the quality of the results that the
application returns [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]; those that evaluate ontologies by
comparing them to unstructured or informal data (e.g. text
documents) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and those based on human interaction to measure
ontology features not recognizable by machines [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In each of
the above approaches several evaluation levels are identified:
lexical, taxonomical, syntactic, semantic, contextual, and
structural, among others. Table 1 summarizes these ideas.
Once the ontologies have been searched, retrieved and evaluated,
the next step is to select the most appropriate one to fulfil user
or application goals. Some approaches for ontology selection have
been addressed in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and complemented in [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], where a
complete study is presented to determine the connections between
ontology selection and evaluation.
      </p>
      <p>
        When the user and not the application is the one that demands an
ontology, the selection task should be less categorical, returning
not only one but the set of the most suitable results. To sort these
results according to the evaluation criteria, several ontology
ranking measures have been proposed in the literature. Some of
them are presented in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Both works aim to take a step
beyond the approaches based on the PageRank algorithm [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ],
where ontologies are ranked considering the number of links
between them, because this ranking methodology does not work
for ontologies with poor connectivity and lack of referrals from
other ontologies.
      </p>
      <p>
        As shown above, current ontology reuse approaches
take advantage of ontology evaluation, search, retrieval, selection
and ranking methodologies. All these areas contribute different
advantages to the process of ontology evaluation and reuse, but
they do not exploit techniques from the well-known field of
Recommender Systems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]; is it helpful to know other users’
opinions to evaluate and select the most suitable ontology?
The collaboration between users has been addressed in the area of
ontology design and construction [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], the necessity of
mechanisms for ontology maintenance is presented under
scenarios like “ontology-development in collaborative
environments”. Moreover, works as [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], present tools and
services to support the process of achieving consensus on
common shared ontologies by geographically distributed groups.
However, despite all these common scenarios where the user’s
collaboration is required for ontology design and construction, the
use of collaborative tools for ontology evaluation is still a novel
and incipient approach in the literature [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>2.2 Recommender Systems</title>
      <p>Collaborative filtering strategies make automatic predictions
(filter) about the interests of a user by collecting taste information
from many users (collaborating). This approach usually consists
of two steps: a) look for users that have a similar rating pattern to
that of the active user (the user for whom the prediction is done),
and b) use the ratings of users found in the previous step to
compute the predictions for the active user. These predictions are
specific to the user, unlike those given by simpler
approaches that provide average scores for each item of interest,
for example based on its number of votes.</p>
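      <p>As an illustration of the two steps above, the following sketch (our own minimal example, not part of WebCORE; the users, ratings and item names are invented) predicts the active user's rating for an unseen item from the similarity-weighted ratings of the other users:</p>

```python
# Step a) find users with rating patterns similar to the active user;
# step b) predict from their ratings, weighted by that similarity.
from math import sqrt

def cosine_sim(a, b):
    """Cosine similarity between two users' rating dicts over common items."""
    common = set(a).intersection(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def predict(active, others, item):
    """Similarity-weighted average of the other users' ratings of `item`."""
    num = den = 0.0
    for ratings in others:
        if item in ratings:
            s = cosine_sim(active, ratings)
            num += s * ratings[item]
            den += abs(s)
    return num / den if den else None

# Invented ratings of ontologies on a 1-5 scale.
ratings_u1 = {"onto_a": 5, "onto_b": 3}
ratings_u2 = {"onto_a": 4, "onto_b": 3, "onto_c": 4}
ratings_u3 = {"onto_a": 1, "onto_c": 2}
print(predict(ratings_u1, [ratings_u2, ratings_u3], "onto_c"))
```

      <p>Because u2's rating pattern is closer to u1's than u3's, the prediction for "onto_c" is pulled toward u2's rating.</p>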
      <p>
        Collaborative filtering is a widely explored field. Three main
aspects typically distinguish the different techniques reported in
the literature [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]: user profile representation and management,
filtering method, and matching method.
      </p>
      <p>User profile representation and management can be divided
into five different tasks:
• Profile representation. Accurate profiles are vital for the
content-based component (to ensure recommendations are
appropriate) and the collaborative component (to ensure that
users with similar profiles are in fact similar). The type of
profile chosen in this work is the user-item ratings matrix
(ontology evaluations based on specific criteria).
• Initial profile generation. The user is not usually willing to
spend too much time in defining her/his interests to create a
personal profile. Moreover, user interests may change
dynamically over time. The type of initial profile generation
chosen in this work is a manual selection of values for only
five specific evaluation criteria.
• Profile learning. User profiles can be learned or updated
using different sources of information that are potentially
representative of user interests. In our work, profile learning
techniques are not used.
• Source of user input and feedback. User interests are inferred
from information used to update user profiles, which can be
obtained in two different ways: using information explicitly
provided by the user, and using information implicitly
observed in the user’s interaction. Our system uses no
feedback to update the user profiles.
• Profile adaptation. Techniques are needed to adapt the user
profiles to new interests and forget old ones as user interests
evolve with time. Again, in our approach profile adaptation
is done manually (manual update of ontology evaluations).
Filtering method. Items or actions are recommended to a user
taking into account the available information (item content
descriptions and user profiles). There are three main information
filtering approaches for making recommendations:
• Demographic filtering: Descriptions of people (e.g. age,
gender, etc) are used to learn the relationship between a
single item and the type of people who like it.
• Content-based filtering: The user is recommended items
based on the descriptions of items previously evaluated by
other users. Content-based filtering is the approach chosen in
our work (the system recommends ontologies using previous
evaluations of those ontologies).
• Collaborative filtering: People with similar interests are
matched and then recommendations are made.</p>
      <p>Matching method. It defines how user interests and item
characteristics are compared. Two main approaches can be
identified:
• User profile matching: people with similar interests are
matched before making recommendations.
• User profile-item matching: a direct comparison is made
between the user profile and the items. The degree of
appropriateness of the ontologies is computed by taking into
account previous evaluations of those ontologies.</p>
      <p>In WebCORE, a new ontology evaluation measure based on
collaborative filtering is proposed, considering users’ interests and
previous assessments of the ontologies.</p>
    </sec>
    <sec id="sec-6">
      <title>3. SYSTEM ARCHITECTURE</title>
      <p>As mentioned before, WebCORE is a web application for
Collaborative Ontology Reuse and Evaluation. A user logs into
the system via a web browser and, thanks to AJAX technology
and the Google Web Toolkit (http://code.google.com/webtoolkit/),
dynamically describes a problem
domain, searches for ontologies related to this domain, obtains
relevant ontologies ranked by several lexical, taxonomic and
collaborative criteria, and optionally evaluates by himself those
ontologies that he likes or dislikes most.</p>
      <p>
        In this section, we describe the server-side architecture of
WebCORE. Figure 1 shows an overview of the system. We
distinguish three different modules. The first one, the left module,
receives the problem description (Golden Standard) as a full text
or as a set of initial terms. In the first case, the system uses a NLP
module to obtain the most relevant terms of the given text. The
initial set of terms can also be modified and extended by the user
using WordNet [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The second one, represented in the centre of
the figure, allows the user to select a set of ontology evaluation
techniques provided by the system to recover the ontologies
closest to the given Golden Standard. Finally, the third one, on the
right of the figure, is a collaborative module that re-ranks the list
of recovered ontologies, taking into consideration previous
feedback and evaluations of the users.
      </p>
    </sec>
    <sec id="sec-7">
      <title>3.1 Golden Standard Definition</title>
      <p>
        The first phase of our ontology recommender system is the
Golden Standard definition. As done in the first version of CORE
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], the user describes a domain of interest specifying a set of
relevant terms that will be searched through the concepts (classes
or instances) of the ontologies stored in the system.
      </p>
      <p>As an improvement, WebCORE includes an internal NLP
component that automatically retrieves the most informative terms
from a given text. Moreover, we have added a new collaborative
component that continuously offers the user a ranked list of
the terms that have been used in previous problem
descriptions in which a given term appears.</p>
      <sec id="sec-7-1">
        <title>3.1.1 Term-based Problem Description</title>
        <p>In our system, the Golden Standard is described by an initial
set of terms. These terms can be obtained automatically by the
internal Natural Language Processing (NLP) module, which uses
a repository of documents related to the specific domain in which
the user is interested. This NLP module accesses the
repository of documents, and returns a list of pairs (lexical entry,
part of speech) that roughly represents the domain of the problem.
Alternatively, the list of initial (root) terms can be manually
specified.</p>
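        <p>For illustration, the kind of output the NLP module produces can be approximated with a plain frequency count over a document repository. WebCORE's actual module is internal and more sophisticated; the following is only a sketch with invented documents, a toy stopword list, and a fixed "NOUN" tag in place of real part-of-speech tagging:</p>

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "and", "to", "is", "for"}

def extract_terms(documents, top_n=5):
    """Return (lexical entry, part of speech) pairs for the most frequent
    content words in the repository (POS fixed to NOUN for illustration)."""
    counts = Counter()
    for doc in documents:
        for token in re.findall(r"[a-z]+", doc.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return [(word, "NOUN") for word, _ in counts.most_common(top_n)]

docs = ["Genetics is the study of genes and heredity.",
        "Genes encode proteins; heredity transfers genes."]
print(extract_terms(docs, top_n=3))
```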
        <p>
          The module also allows the user to expand the root terms using
WordNet [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] and some of the relations it provides: hypernym,
hyponym and synonym. The new terms added to the Golden
Standard using these relations might also be extended again, and
new terms can iteratively be added to the problem definition.
        </p>
        <p>The final representation of the Golden Standard is defined as a
set of terms T (LG, POS, LGP, R, Z) where:
• LG is the set of lexical entries defined for the Golden
Standard.
• POS corresponds to the different Parts Of Speech considered
by WordNet: noun, adjective, verb and adverb.
• LGP is the set of lexical entries of the Golden Standard that
have been extended.
• R is the set of relations between terms of the Golden
Standard: synonym, hypernym, hyponym and root (if a term
has not been obtained by expansion, but is one of the initial
terms).
• Z is an integer number that represents the depth or distance
of a term to the root term from which it has been derived.</p>
        <sec id="sec-7-1-2">
          <title>Examples:</title>
          <p>T1 = (“genetics”, NOUN, “”, ROOT, 0). T1 is one of the root
terms of the Golden Standard. The lexical entry that it represents
is “genetics”, its part of speech is “noun”, it has not been
expanded from any other term so its lexical parent is the empty
string, its relation is “root”, and its depth is 0.</p>
          <p>T2 = (“biology”, NOUN, “genetics”, HYPERNYM, 1). T2 is a
term expanded from “genetics” (T1). The lexical entry it
represents is “biology”, its part of speech is “noun”, the lexical
entry of its parent is “genetics”, it has been expanded by the
“hypernym” relation, and the number of relations that separate it
from the root term T1 is 1.</p>
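          <p>The term tuples T1 and T2 above can be modelled directly as a small data structure; this is an illustrative representation of the tuple (LG, POS, LGP, R, Z), not WebCORE's actual implementation:</p>

```python
from dataclasses import dataclass

@dataclass
class Term:
    lexical_entry: str   # LG: the lexical entry itself
    pos: str             # POS: noun, adjective, verb or adverb
    parent: str          # LGP: lexical entry it was expanded from ("" for roots)
    relation: str        # R: ROOT, SYNONYM, HYPERNYM or HYPONYM
    depth: int           # Z: number of expansion steps from the root term

t1 = Term("genetics", "NOUN", "", "ROOT", 0)
t2 = Term("biology", "NOUN", "genetics", "HYPERNYM", 1)
golden_standard = [t1, t2]
print(t2.depth)  # 1: one relation separates it from its root term
```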
          <p>Figure 2 shows the interface of the Golden Standard Definition
phase. On the left side of the screen, the current list of root terms is
shown. The user can manually insert new root terms into this list,
giving their lexical entries and selecting their parts of speech. The
correctness of these new insertions is controlled by verifying that all
the considered lexical entries belong to the WordNet repository.
When new terms are added, the final Golden Standard definition is
immediately updated: the final list of (root and expanded) terms that
represent the domain of the problem is shown at the bottom of the
figure. The user can also perform term expansion using WordNet. He
selects one of the terms from the Golden Standard definition and the
system shows him all its meanings contained in WordNet (top of the
figure). After he has chosen one of them, the system presents him
three different lists with the synonyms, hyponyms and hypernyms
of the term. The user can then select one or more elements of these
lists and add them to the expanded term list. For each expansion, the
depth of the new term is increased by one unit. This will be used
later to measure the importance of the term within the Golden
Standard: the greater the depth of the derived term with respect to its
root term, the lower its relevance will be.</p>
        </sec>
      </sec>
      <sec id="sec-7-2">
        <title>3.1.2 Collaborative Problem Description</title>
        <p>In the problem definition phase a collaborative component has
been added to the system (right side of Figure 2). This component
reads the term currently selected by the user, and searches for all
the stored problem definitions that contain it. For each of these
problem definitions, the rest of their terms and the number of
problems in which they appear are retrieved and shown in the web
browser.</p>
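        <p>The suggestion mechanism just described can be sketched as follows (the stored problem definitions are invented for illustration): for the term currently selected, every stored problem definition containing it is retrieved, and the co-occurring terms are ranked by the number of problems in which they appear.</p>

```python
from collections import Counter

# Invented repository of previous problem definitions (sets of terms).
stored_problems = [
    {"genetics", "gene", "protein"},
    {"genetics", "heredity", "gene"},
    {"restaurant", "dish"},
]

def suggest_terms(selected, problems):
    """Rank the other terms of all problem definitions containing
    `selected` by the number of problems in which they appear."""
    counts = Counter()
    for terms in problems:
        if selected in terms:
            for t in terms:
                if t != selected:
                    counts[t] += 1
    return counts.most_common()

print(suggest_terms("genetics", stored_problems))
```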
        <p>
          With this simple strategy, the user is suggested the most popular
terms, which may help him better describe the domain in
which he is interested. It is very often the case that a person has
very specific goals or interests, but does not know how to
describe them correctly, or how to effectively find
solutions for them. With the retrieved terms, the user might
discover new ways to describe the problem domain and obtain
better solutions in the ontology recommendation phase.
This somehow follows the ideas of the well-known folksonomies.
The term “folksonomy” is a combination of “folk” and
“taxonomy”, and was first used by Thomas Vander Wal [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] in a discussion on a mailing list about the systems of organization
developed in Delicious (http://del.icio.us/) and Flickr
(http://www.flickr.com/). It is associated with those
information retrieval methodologies consisting of collaboratively
generated, open-ended labels that categorize content
(Mathes, A. (2004). Folksonomies: Cooperative Classification
and Communication through Shared Metadata,
http://www.adammathes.com/academic/computer-mediated-communication/folksonomies.html).
        </p>
        <p>Although folksonomies suffer from problems of imprecision and
ambiguity, techniques employing free-form tagging encourage
users to organize information in their own ways and to actively
interact with the system.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>3.2 Automatic Ontology Recommendation</title>
      <p>
        Once the user has selected the most appropriate set of terms to
describe the problem domain, the tool performs the processes of
ontology retrieval and ranking. These processes play a key role
within the system, since they provide the first level of information
to the user. To enhance the previous approaches of CORE, an
adaptation of traditional Information Retrieval techniques has
been integrated into the system. Our novel strategy for ontology
retrieval can be seen as an evolution of classic keyword-based
retrieval techniques [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], where textual documents are replaced by
ontologies.
      </p>
      <sec id="sec-8-1">
        <title>3.2.1 Query encoding and ontology retrieval</title>
        <p>
          The queries supported by our model are expressed using the terms
selected during the Golden Standard definition phase.
In classic keyword-based vector-space models for information
retrieval [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], each of the query keywords is assigned a weight
that represents the importance of the keyword in the information
need expressed by the query, or its discriminating power for
discerning relevant from irrelevant documents.
        </p>
        <p>Analogously, in our model, the terms included in the Golden
Standard can be weighted to indicate the relative interest of the
user for each of the terms to be explicitly mentioned in the
ontologies. In our system, these weights are automatically
assigned considering the depth measure of each of the terms
included in the Golden Standard.</p>
        <p>
          Let T be the set of all terms defined in the Golden Standard
definition phase. Let di be the depth measure associated with each
term ti ∈ T. Let q be the query vector extracted from the Golden
Standard definition, and let wi be the weight associated with each of
these terms, where for each ti ∈ T, wi ∈ [0,1]. Then, the weight wi
is calculated as:
wi = 1 / (di + 1)
This measure gives more relevance to the terms explicitly
expressed by the user, and less importance to those extended
or derived from previously selected terms. An interesting future
work could be to enhance and refine the query, e.g. based on term
popularity, or on more complex strategies such as term frequency
analysis.
        </p>
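        <p>The depth-based weighting wi = 1 / (di + 1) can be illustrated as follows (term names and depths are invented): root terms keep weight 1.0, and each expansion step reduces the contribution of the derived term.</p>

```python
def term_weight(depth):
    """Weight of a Golden Standard term as a function of its depth."""
    return 1.0 / (depth + 1)

# Invented terms: a root term and two successive expansions.
depths = {"genetics": 0, "biology": 1, "science": 2}
query_vector = {term: term_weight(d) for term, d in depths.items()}
print(query_vector)  # weights 1.0, 0.5, 0.333...
```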
        <p>To carry out the process of ontology retrieval, the approach is
focused on the lexical level, retrieving those ontologies that
contain a subset of the terms expressed by the user during the
Golden Standard definition. To compute the matching, two
different options are available within the tool: search for exact
matches and search for matches based on the Levenshtein distance
between two terms.</p>
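        <p>Both matching options can be sketched as follows; the maximum Levenshtein distance used as a threshold here is our own assumption, since the concrete cutoff is not specified in the text:</p>

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[len(b)]

def term_matches(term, entity, mode="exact", max_dist=1):
    """Exact matching, or approximate matching within a Levenshtein
    distance of max_dist (the threshold value is an assumption)."""
    if mode == "exact":
        return term == entity
    return not levenshtein(term, entity) > max_dist

print(term_matches("colour", "color", mode="levenshtein"))  # True (distance 1)
```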
        <p>In both cases, the query execution returns a set of ontologies that
satisfy the user’s requirements. Considering that not all the retrieved
ontologies provide the same level of satisfaction, it is the system’s task
to sort them and present the ranked list to the user.</p>
      </sec>
      <sec id="sec-8-2">
        <title>3.2.2 Ontology ranking</title>
        <p>Once the list of ontologies is formed, the ontology-search engine
computes a semantic similarity value between the query and each
ontology as follows. We represent each ontology in the search
space as an ontology vector oj ∈ O, where oji is the mean of the
term ti similarities with all the matched entities in the ontology if
any matching exists, and zero otherwise.</p>
        <p>The components oji are calculated as:
oji = ( Σ mji ∈ Mji w(mji) ) / ( Σ mi ∈ Mi w(mi) )
where Mji is the set of matches of the term ti in the ontology
oj, w(mji) represents the similarity between the term ti and the
entities of the ontology oj that match it, Mi is the set of
matches of the term ti within all the ontologies, and w(mi)
represents the weight of each of these matches.</p>
        <p>For example, if we define in the Golden Standard a term “acid”,
this term may return several matches in the same ontology with
different entities, such as “acid”, “amino acid”, etc. In order to
establish the appropriate weight in the ontology vector, oji, the
goal is to compute the number of matches of one term in the
whole repository of ontologies and give more relevance to those
ontologies that have matched that specific term more times.
Due to the way in which the vector oj is constructed, each
component oji contains specific information about the similarity
between the ontology and the corresponding term ti. To compute
the final similarity between the query vector q and the ontology
vector oj, the vectorial model calculates the cosine measure
between both vectors. However, if we followed the traditional
vectorial model, we would only be considering the difference
between the query and the ontology vectors according to the angle
they form, without taking their magnitudes into account. Thus, to
overcome this limitation, the cosine measure used in the
vectorial model has been replaced by the simple dot product.
Hence, the similarity measure between an ontology oj and the
query q is simply computed as follows:</p>
<p>sim(q, oj) = q ⋅ oj</p>
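<p>The component computation and the dot-product similarity can be sketched as follows. The function names and the representation of match weights as plain lists are assumptions for illustration; the per-match weights w(m) themselves come from the matching step.</p>

```python
def ontology_component(weights_in_ontology, weights_in_repository):
    """o_ji: total weight of term t_i's matches in ontology o_j, normalised
    by the total weight of t_i's matches across the whole repository."""
    if not weights_in_ontology:
        return 0.0  # the term has no match in this ontology
    return sum(weights_in_ontology) / sum(weights_in_repository)

def similarity(query_vector, ontology_vector):
    """Dot product instead of cosine, so that vector magnitudes still matter."""
    return sum(q * o for q, o in zip(query_vector, ontology_vector))
```

<p>Using the dot product keeps an ontology with many strong matches ahead of one with few weak matches even when the two vectors point in the same direction.</p>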
      </sec>
      <sec id="sec-8-3">
        <title>3.2.3 Combination with Knowledge Base Retrieval</title>
<p>
          If the knowledge in the ontology is incomplete, the ontology
ranking algorithm performs poorly: queries return fewer
results than expected, and relevant ontologies are not retrieved,
or get a much lower similarity value than they should. For
instance, if there are ontologies about “restaurants”, and “dishes”
are expressed as instances in the corresponding Knowledge Base
(KB), a user searching for ontologies in this domain may also be
interested in the instances and literals contained in the KB. To
cope with this issue, our ranking model combines the similarity
obtained from the terms that belong to the ontology with the
similarity obtained from the terms that belong to the KB, using the
adaptation of the vector space model explained before.
        </p>
        <p>
          On the other hand, the combination of the outputs of several search
engines has been a widely addressed research topic in the
Information Retrieval field [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. After testing several approaches,
we have selected the so-called CombMNZ strategy. This
technique has been shown in prior work to be one of the simplest
and most effective rank aggregation techniques, and consists of
computing a combined ranking score as a linear combination of
the input scores with additional factors that measure the relevance
of each score in the final ranking. In our case, the relevances of
the scores, i.e., the relevances of the similarity computed
within the ontology and within the knowledge base, are given by
the user, who can select a value vi ∈ [1, 5] for each kind of search;
this value is then mapped to a corresponding value si using
the following normalization.
        </p>
<p>si = vi / 5</p>
        <p>Following this idea, the final score is computed as:</p>
        <p>sO × sim(q, o) + sKB × sim(q, kb)</p>
        <p>As future work, we are considering setting si using statistical
information about the knowledge contained in the ontologies, the
knowledge contained in the KBs, and the information requested by
the user during the Golden Standard definition phase.</p>
        <p>Figure 3 shows the system recommendation interface. On the left
side, the user can select the matching methodology (fuzzy or
exact), the search spaces (ontology entities and knowledge base
entities), and the weight or importance given to each of the
previously selected search spaces. On the right side, the user can
visualize the ontology and navigate across it. Finally, the middle
of the interface presents the list of ontologies selected for the user
to be evaluated during the collaborative evaluation phase.</p>
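<p>The weighted combination with the v/5 normalization can be sketched as follows; the function name is an assumption, and this implements the paper's user-weighted linear combination rather than the generic CombMNZ formula.</p>

```python
def combined_score(sim_ontology, sim_kb, v_ontology, v_kb):
    """Weighted combination of the ontology-level and KB-level similarities.
    v_* are the user-chosen relevance values in [1, 5], each normalised
    to a weight s = v / 5."""
    s_o = v_ontology / 5.0
    s_kb = v_kb / 5.0
    return s_o * sim_ontology + s_kb * sim_kb
```

<p>With both relevance values at 5 the two similarities are simply summed; lowering one value proportionally discounts that search space.</p>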
      </sec>
    </sec>
    <sec id="sec-9">
      <title>3.3 Collaborative Ontology Evaluation</title>
<p>
        The third and last phase of the system consists of a novel
ontology recommendation algorithm that exploits the advantages
of Collaborative Filtering [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], exploring the manual evaluations
stored in the system to rank the set of ontologies that best fulfil
the user’s interests.
      </p>
      <p>
        In WebCORE, user evaluations are represented as a set of five
different criteria [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and their respective values, manually
determined by the users who made the evaluations.
      </p>
      <p>• Correctness: specifies whether the information stored in the
ontology is true, independently of the domain of interest.
• Readability: indicates the non-ambiguous interpretation of
the meaning of the concept names.
• Flexibility: points out the adaptability or capability of the
ontology to change.
• Level of formality: highly informal, semi-informal,
semi-formal, or rigorously formal.
• Type of model: upper-level (for ontologies describing
general, domain-independent concepts), core-ontologies (for
ontologies that contain the most important concepts on a
specific domain), domain-ontologies (for ontologies that
broadly describe a domain), task-ontologies (for ontologies
focused on generic types of tasks or activities) and
application-ontologies (for ontologies describing a domain
in an application-dependent manner).</p>
<p>The above criteria can take discrete numeric or non-numeric
values. The user’s interests are expressed as a subset of these
criteria and their respective values, which represent thresholds or
restrictions to be satisfied by the user evaluations. Thus, a numeric
criterion is satisfied if an evaluation value is equal to or greater
than its interest threshold, while a non-numeric
criterion is satisfied only when the evaluation exactly equals the
given threshold (i.e., in a Boolean, yes/no manner).</p>
<p>According to these two types of user evaluation and interest criteria,
numeric and Boolean, the recommendation algorithm
measures the degree to which each user restriction is satisfied by
the evaluations, and recommends a ranked ontology list
according to similarity measures between the thresholds and the
collaborative evaluations. To create the final ranked ontology list,
the recommender module follows two phases. In the first, it
calculates the similarity degrees between all the user evaluations
and the specified user interest thresholds. In the second,
it combines the similarity measures of the evaluations,
generating the overall rankings of the ontologies.</p>
      <p>Figure 4 shows all the previous definitions and ideas, locating
them in the graphical interface of the system. On the left side of
the screen, the user introduces the thresholds for the
recommendations and obtains the final collaborative ontology
ranking. On the right side, the user adds new evaluations for the
ontologies and checks evaluations given by the rest of the users.</p>
      <sec id="sec-9-1">
        <title>3.3.1 Collaborative Evaluation Measures</title>
<p>
          As mentioned before, a user evaluates an ontology considering
five different criteria, which can be divided into two groups:
a) numeric criteria (‘correctness’, ‘readability’ and ‘flexibility’),
which take discrete numeric values in [1, 5], where 1 means
the ontology does not fulfil the criterion and 5 means the
ontology completely satisfies the criterion; and b) Boolean
criteria (‘level of formality’ and ‘type of model’), which are
represented by specific non-numeric values that are either satisfied
by the ontology or not.
        </p>
        <p>[Tables 1, 3 and 4 of the example. Table 1 gives the Boolean thresholds, “A” for C1 and “B” for C2. Table 3 gives the Boolean similarity values per evaluation: E1 (2, 0), E2 (0, 0), E3 (2, 2), E4 (0, 0), E5 (2, 0), E6 (0, 0). Table 4 gives the numeric evaluation values for C3, C4 and C5: E1 (3, 0, 5), E2 (4, 1, 5), E3 (5, 4, 5), E4 (5, 5, 5), E5 (2, 0, 4), E6 (0, 0, 0).]</p>
        <p>Taking into account the previous definitions, user interests will be
a subset of the above criteria and their respective values
representing the set of thresholds that should be reached by the
ontologies. Given a set of user interests, the system will size up all
the stored evaluations, and will calculate their similarity measures.
To explain these similarities we shall use a simple example of six
different evaluations (E1, E2, E3, E4, E5 and E6) of a given
ontology. In the explanation we shall distinguish between the
numeric and the Boolean criteria. We start with the Boolean ones,
assuming two different criteria, C1 and C2, with three possible
values: “A”, “B” and “C”. In Table 1 we show the threshold
values established by a user for these two criteria, “A” for C1 and
“B” for C2, and the six evaluations stored in the system.
In this case, since the threshold of a criterion n is either satisfied or
not by a certain evaluation m, the corresponding similarity
measure is simply 2 if the evaluation and the threshold have the same value, and 0 otherwise.</p>
<p>similaritybool(criterionmn) = 0, if evaluationmn ≠ thresholdmn; 2, if evaluationmn = thresholdmn</p>
        <p>The similarity results for the Boolean criteria of the example are
shown in Table 3.</p>
<p>similaritynum(criterionmn) = 1 + similarity*num(criterionmn) ⋅ penaltynum(thresholdmn) ∈ [0, 2]</p>
        <p>
          This measure returns values between 0 and 2. The idea of
returning a similarity value between 0 and 2 is inspired by other
collaborative matching measures [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], so as to avoid managing negative
numbers and to facilitate, as we shall show in the next subsection, a
coherent calculation of the final ontology rankings.
        </p>
<p>The similarity assessment is based on the distance between the
value of the criterion n in the evaluation m, and the threshold
indicated in the user’s interests for that criterion. The more the
value of the criterion n in evaluation m exceeds the threshold,
the greater the similarity value shall be.</p>
<p>Specifically, following the expression below, if the difference dif
= (evaluation − threshold) is equal to or greater than 0, we assign a
positive similarity in (0, 1] that depends on the maximum
difference maxDif = (maxValue − threshold) achievable for
the given threshold; otherwise, if the difference dif is lower than 0,
we assign a negative similarity in [−1, 0), penalizing the distance
between the value and the threshold.</p>
        <p>similarity*num(criterionmn) = (1 + dif) / (1 + maxDif), if dif ≥ 0; dif / threshold, otherwise</p>
<p>[Table 5. Similarity values similarity*num for the numeric criteria C3, C4 and C5 of the example: E1 (1/4, 1/6, 1), E2 (2/4, 2/6, 1), E3 (3/4, 5/6, 1), E4 (3/4, 1, 1), E5 (−1/3, 1/6, −1/5), E6 (−1, 1/6, −1).]</p>
        <p>Comparing the evaluation values of Table 4 with the similarity
values of Table 5, the reader may notice several important facts:
1. Evaluation E4 satisfies criteria C4 and C5 with evaluations of 5.</p>
<p>Applying the above expression, these criteria receive the same
similarity of 1. However, criterion C4 has a threshold of 0, and
C5 has a threshold of 5. As it is more difficult to satisfy
the restriction imposed on C5, this one should have a greater
influence on the final ranking.
2. Evaluation E6 gives an evaluation of 0 to criteria C3 and C5,
satisfying neither of them and generating the same similarity
value of −1. Again, because of their different thresholds, we
should distinguish their corresponding relevance degrees in
the rankings.</p>
<p>For these reasons, a threshold penalty is applied, reflecting how
difficult it is to overcome the given thresholds. The more difficult
it is to surpass a threshold, the lower the penalty value shall be.</p>
        <p>penaltynum(threshold) = (1 + threshold) / (1 + maxValue) ∈ (0, 1]</p>
        <p>Table 6 shows the threshold penalty values for the three numeric
criteria of the example: 4/6 for C3, 1/6 for C4, and 1 for C5.</p>
        <p>Summing up, for the numeric criteria the evaluations can overcome the
thresholds to different degrees (Table 4 shows the thresholds
established for criteria C3, C4 and C5, and their six available
evaluations; note that E1, E2, E3 and E4 satisfy all the criteria, while
E5 and E6 do not reach some of the corresponding thresholds).
In this case, the similarity measure has to take into account two
different issues: the degree of satisfaction of the threshold, and the
difficulty of achieving its value. Thus, the similarity between the
value of criterion n in evaluation m and the threshold of interest
is divided into two factors: 1) a similarity factor that considers
to what extent the threshold is surpassed, and 2) a penalty factor
that penalizes those thresholds that are easier to satisfy.</p>
        <p>As a preliminary approach, we calculate the similarity between an
ontology evaluation and the user’s requirements as the average of
its N criterion similarities:</p>
        <p>similarity(evaluationm) = (1/N) ∑ n=1..N similarity(criterionmn)</p>
        <p>A weighted average could be even more appropriate, and might
make the collaborative recommender module more sophisticated
and adjustable to user needs. This will be considered as a
possible enhancement of the system in the continuation of our
research.</p>
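<p>The two criterion-level similarity measures can be sketched as follows. This sketch assumes maxValue = 5 and reads the negative-branch denominator (dif / threshold) off the worked example, since the original expression is only partially legible; the function names are illustrative.</p>

```python
MAX_VALUE = 5  # numeric criteria take values up to 5

def similarity_bool(evaluation, threshold):
    """Boolean criteria: 2 when the evaluation meets the threshold exactly,
    0 otherwise."""
    return 2.0 if evaluation == threshold else 0.0

def similarity_num(evaluation, threshold):
    """Numeric criteria: raw similarity in [-1, 1], scaled by the threshold
    penalty and shifted into [0, 2]."""
    dif = evaluation - threshold
    if dif >= 0:
        raw = (1 + dif) / (1 + (MAX_VALUE - threshold))
    else:
        # Denominator inferred from the worked example (Table 5);
        # threshold > 0 whenever an evaluation can fall short of it.
        raw = dif / threshold
    penalty = (1 + threshold) / (1 + MAX_VALUE)
    return 1 + raw * penalty
```

<p>Note how the penalty realises the intent of the example: fully satisfying the hard threshold 5 yields the maximum score 2, whereas fully satisfying the trivial threshold 0 yields only 1 + 1/6.</p>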
      </sec>
      <sec id="sec-9-2">
        <title>3.3.2 Collaborative Ontology Ranking</title>
        <p>Once the similarities are calculated taking into account the user’s
interests and the evaluations stored in the system, a ranking is
assigned to the ontologies.</p>
<p>The ranking of a specific ontology is measured as the average of
its M evaluation similarities. Again, we do not consider different
priorities among the evaluations of different users. We plan to
include in the system personalized user appreciations of the
opinions of the rest of the users; thus, for a certain user, some
evaluations will have more relevance than others, according to the
users who made them.</p>
<p>ranking(ontology) = (1/M) ∑ m=1..M similarity(evaluationm) = (1/(MN)) ∑ m=1..M ∑ n=1..N similarity(criterionmn)</p>
        <p>Finally, in case of ties, the collaborative ranking mechanism sorts
the ontologies taking into account not only the average similarity
between the ontologies and the evaluations stored in the system,
but also the total number of evaluations of each ontology,
thus giving more relevance to those ontologies that have been
rated more times.</p>
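<p>The two-level average above can be sketched as follows; the list-of-lists representation (one inner list of criterion similarities per stored evaluation) is an assumption for illustration.</p>

```python
def ranking(criterion_similarities_per_evaluation):
    """Rank score of an ontology: the mean over its M evaluations of the
    mean criterion similarity within each evaluation."""
    per_evaluation = [sum(sims) / len(sims)
                      for sims in criterion_similarities_per_evaluation]
    return sum(per_evaluation) / len(per_evaluation)
```
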
<p>ranking'(ontology) = (M / Mtotal) × ranking(ontology)</p>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>4. EXPERIMENTS</title>
<p>In this section, we present some early experiments that attempt to
measure: a) the gain in efficiency and effectiveness, and b) the
increase in user satisfaction obtained with the use of our
system when searching for ontologies within a specific domain.
The scenario of the experiments was the following. A repository
of thirty ontologies was considered, and eighteen subjects
participated in the evaluations. They were Computer Science
Ph.D. students of our department, all of them with some expertise
in the modeling and exploitation of ontologies. They were asked to
search and evaluate ontologies with WebCORE in three different
tasks. For each task and each student, one of the following
problem domains was selected:
• Family. Search for ontologies including family members:
mother, father, daughter, son, etc.
• Genetics. Search for ontologies containing specific
vocabulary of Genetics: genes, proteins, amino acids, etc.
• Restaurant. Search for ontologies with vocabulary related
to restaurants: food, drinks, waiters, etc.</p>
<p>In the repository, there were six different ontologies related to
each of the above domains, and twelve ontologies describing other,
unrelated knowledge areas. No information about the domains
or the existing ontologies was given to the students.</p>
<p>Tasks 1 and 2 were performed first without the help of the
collaborative modules of the system, i.e., the term recommender
of the problem definition phase and the collaborative ranking of
the user evaluation phase. After all users finished the previous
ontology searches and evaluations, task 3 was done with the
collaborative components activated. For each task and each
student, we measured the time spent, and the number of
ontologies retrieved and selected (‘reused’). We also asked the
users about their satisfaction (on a 1-5 rating scale) with each of
the selected ontologies and with the collaborative modules.
Tables 8 and 9 contain a summary of the obtained results. Note
that the measures of task 1 are not shown. We decided not to
consider them for evaluation purposes because we regard the first
task as a learning stage in the use of the tool, and its
execution times and number of selected ontologies as skewed,
non-objective measures.</p>
<p>To evaluate the enhancements in terms of efficiency and
effectiveness, we present in Table 8 the average number of reused
ontologies and the average execution times for tasks 2 and 3. The
results show a significant improvement when the collaborative
modules of the system were activated. In all cases, the
students made use of the terms and evaluations suggested by
others, accelerating the processes of problem definition and
relevant ontology retrieval.
On the other hand, Table 9 shows the average degrees of
satisfaction reported by the users regarding the retrieved ontologies
and the collaborative modules. Again, the results evidence the
positive effect of our approach.</p>
<p>[Tables 8 and 9: average results for tasks 2 and 3 and their percentage improvements, for the initial term recommendation and the final ontology ranking.]</p>
    </sec>
    <sec id="sec-11">
      <title>5. CONCLUSIONS AND FUTURE WORK</title>
      <p>In this paper, a web application for ontology evaluation and reuse
has been presented. The novel aspects of our proposal include the
use of WordNet to help users to define the Golden Standard; a
new ontology retrieval technique based on traditional Information
Retrieval models; rank fusion techniques to combine different
ontology evaluation measures; and two collaborative modules:
one that suggests the most popular terms for a given domain, and
one that recommends lists of ontologies with a multi-criteria
strategy that takes into account user opinions about ontology
features that can only be assessed by humans.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Adomavicius</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tuzhilin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Toward the Next Generation of Recommender Systems: A Survey of the Stateof-the-Art and Possible Extensions</article-title>
          .
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>17</volume>
          (
          <issue>6</issue>
          ):
          <fpage>734</fpage>
          -
          <lpage>749</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Alani</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Brewster</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Metrics for Ranking Ontologies</article-title>
          .
          <source>Proceedings of the 4th Int. Workshop on Evaluation of Ontologies for the Web (EON'06)</source>
          ,
          <source>at the 15th Int. World Wide Web Conference (WWW'06)</source>
          . Edinburgh, UK,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Alani</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brewster</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Shadbolt</surname>
          </string-name>
          , N.:
          <article-title>Ranking Ontologies with AKTiveRank</article-title>
          .
<source>Proc. of the 5th Int. Semantic Web Conference (ISWC'06)</source>
          . Athens, Georgia, USA,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Brank</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grobelnik</surname>
            <given-names>M.</given-names>
          </string-name>
, and Mladenić, D.:
          <article-title>A Survey of Ontology Evaluation Techniques</article-title>
          .
          <source>Proceedings of the 4th Conference on Data Mining and Data Warehouses (SiKDD'05)</source>
          ,
          <source>at the 7th Int. Multi-conference on Information Society (IS'05)</source>
          . Ljubljana, Slovenia,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Brewster</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alani</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dasmahapatra</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Wilks</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <article-title>Data driven ontology evaluation</article-title>
          .
          <source>Proc. of the 4th Int. Conf. on Language Resources and Evaluation (LREC04)</source>
          . Lisbon, Portugal, 2004.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Ding</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Fensel</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Ontology Library Systems: The key to successful Ontology Reuse</article-title>
          .
          <source>Proc. of the 1st Semantic Web Working Symposium (SWWS'01)</source>
          . Stanford, CA, USA,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Farquhar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fikes</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Rice</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>The Ontolingua server: A tool for collaborative ontology construction</article-title>
          .
          <source>Technical report, Stanford KSL 96-26</source>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Fernández</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cantador</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Castells</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <article-title>CORE: A Tool for Collaborative Ontology Reuse and Evaluation</article-title>
          .
          <source>Proceedings of the 4th Int. Workshop on Evaluation of Ontologies for the Web (EON'06)</source>
          ,
          <source>at the 15th Int. World Wide Web Conference (WWW'06)</source>
          . Edinburgh, UK,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J. H.</given-names>
          </string-name>
          :
          <article-title>Analysis of multiple evidence combination</article-title>
          .
          <source>Proceedings of the 20th ACM Int. Conference on Research and Development in IR (SIGIR'97)</source>
          . New York,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Lozano-Tello</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Gómez-Pérez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Ontometric: A method to choose the appropriate ontology</article-title>
          .
          <source>Journal of Database Management</source>
          ,
          <volume>15</volume>
          (
          <issue>2</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Maedche</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Staab</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Measuring similarity between ontologies</article-title>
          .
          <source>Proceedings of the 13th European Conference on Knowledge Acquisition and Management (EKAW</source>
          <year>2002</year>
          ). Madrid, Spain,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>G. A.</given-names>
          </string-name>
          :
<article-title>WordNet: A Lexical Database for English</article-title>
          .
          <source>Communications of the Association for Computing Machinery</source>
          ,
          <volume>38</volume>
          (
          <issue>11</issue>
          ):
          <fpage>39</fpage>
          -
          <lpage>41</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Montaner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>López</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>De la Rosa</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          :
<article-title>A Taxonomy of Recommender Agents on the Internet</article-title>
          .
          <source>Artificial Intelligence Review</source>
          <volume>19</volume>
          :
          <fpage>285</fpage>
          -
          <lpage>330</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Noy</surname>
            ,
            <given-names>N. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chugh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Musen</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          :
          <article-title>A Framework for Ontology Evolution in Collaborative Environments</article-title>
          .
          <source>Proceedings of the 5th Int. Semantic Web Conference (ISWC'06)</source>
          . Athens, Georgia, USA,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Paslaru</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Using Context Information to Improve Ontology Reuse</article-title>
          .
          <source>Doctoral Workshop at the 17th Conference on Advanced Information Systems Engineering (CAiSE'05)</source>
          . Porto, Portugal,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Porzel</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Malaka</surname>
          </string-name>
          , R.:
          <article-title>A task-based approach for ontology evaluation</article-title>
          .
          <source>Proc. of the 16th European Conference on Artificial Intelligence (ECAI'04)</source>
          . Valencia, Spain,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
Protégé OWL Ontology Repository. http://protege.stanford.edu/download/ontologies.html
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Resnick</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iacovou</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suchak</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bergstrom</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>GroupLens: An Open Architecture for Collaborative Filtering of Netnews</article-title>
          .
          <source>Internal Research Report</source>
          , MIT Center for Coordination Science,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Sabou</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>López</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Motta</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Uren</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Ontology Evaluation on the Real Semantic Web</article-title>
          .
          <source>Proceedings of the 4th Int. Workshop on Evaluation of Ontologies for the Web (EON'06)</source>
          ,
          <source>at the 15th Int. World Wide Web Conference (WWW'06)</source>
          . Edinburgh, UK,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Sabou</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>López</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Motta</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Uren</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Ontology Selection for the Real Semantic Web: How to cover the Queen's Birthday Dinner?</article-title>
          .
          <source>Proc. of the 15th International Conference on Knowledge Engineering and Knowledge Management (EKAW'06)</source>
          . Podebrady, Czech Republic,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Salton</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>McGill</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Introduction to Modern Information Retrieval</article-title>
          .
          <publisher-name>McGraw-Hill</publisher-name>
          , New York,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Atomiq: Folksonomy: Social Classification</article-title>
          .
          <year>2004</year>
          . http://atomiq.org/archives/2004/08/folksonomy_social_classification.html
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Sure</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Erdmann</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Angele</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Staab</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Studer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Wenke</surname>
          </string-name>
          , D.:
          <article-title>OntoEdit: Collaborative Ontology Development for the Semantic Web</article-title>
          .
          <source>Proceedings of the 1st International Semantic Web Conference (ISWC '02)</source>
          , Sardinia, Italy,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <source>Swoogle - Semantic Web Search Engine</source>
          . http://swoogle.umbc.edu
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>