<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Automatic Ontology Creation from Text for National Intelligence Priorities Framework (NIPF)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mithun Balakrishna</string-name>
          <xref ref-type="aff" rid="aff0"/>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Munirathnam Srikanth</string-name>
          <xref ref-type="aff" rid="aff0"/>
        </contrib>
        <aff id="aff0">
          <institution>Lymba Corporation</institution>
          , Richardson, TX
          <addr-line>75080</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Analysts are constantly overwhelmed with large amounts of unstructured data. This holds especially true for intelligence analysts tasked with extracting useful information from large data sources. To alleviate this problem, domain-specific and general-purpose ontologies/knowledge-bases have been proposed to help automate methods for organizing data and provide access to useful information. However, problems in ontology creation and maintenance have resulted in expensive procedures for expanding/maintaining the ontology library available to support the growing and evolving needs of the Intelligence Community (IC). In this paper, we present the semi-automatic development of an ontology library for the National Intelligence Priorities Framework (NIPF) topics. We use Jaguar-KAT, a state-of-the-art tool for knowledge acquisition and domain understanding, with minimized manual intervention to create NIPF ontologies loaded with rich semantic content. We also present evaluation results for the NIPF ontologies created using our methodology.</p>
      </abstract>
      <kwd-group>
        <kwd>ontology generation</kwd>
        <kwd>National Intelligence Priorities Framework (NIPF)</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        Analysts are constantly overwhelmed by large
amounts of unstructured and semi-structured data required for
extracting useful information [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Over the past decade,
ontologies and knowledge bases have gained popularity for their
high potential benefits in a number of applications including
data/knowledge organization and search applications [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The
data processing burden on intelligence analysts has been
relieved with the integration of ontologies to help automate
methods for organizing data and provide access to useful
information [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Though a number of applications can benefit and have benefited
from their integration with domain-specific and
general-purpose ontologies/knowledge-bases, it is well known
that ontology creation (popularly referred to as the knowledge
acquisition bottleneck [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]) is an expensive process [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
The modeling of ontologies for non-trivial domains/topics is
difficult and time- and resource-consuming. The knowledge
acquisition bottleneck problems in ontology creation and maintenance
have resulted in expensive procedures for maintaining and
expanding the ontology library available to support the growing
and evolving needs of the Intelligence Community (IC).
      </p>
      <p>
        In this paper, we present a semi-automatic development of
an ontology library for the 33 topics defined in the National
Intelligence Priorities Framework (NIPF). NIPF is the Director
of National Intelligence’s (DNI’s) guidance to the Intelligence
Community on the national intelligence priorities approved by
the President of the United States of America [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Lymba’s Jaguar-KAT [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] is a state-of-the-art tool for
knowledge acquisition and domain understanding. We use
Jaguar to create rich NIPF ontologies by extracting deep
semantic content from NIPF topic-specific document collections
while keeping the manual intervention to a minimum. In this
paper, we discuss the technical contributions of automatic
concept and semantic relation extraction, automatic ontology
construction, and the metrics to evaluate ontology quality.
      </p>
    </sec>
    <sec id="sec-2">
      <title>II. AUTOMATIC ONTOLOGY GENERATION</title>
      <p>Jaguar automatically builds domain-specific ontologies from
text. The text input to Jaguar can come from a variety
of document sources, including Text, MS Word, PDF and
HTML web pages, etc. The ontology/knowledge-base created
by Jaguar includes the following constituents:
• Ontological Concepts: basic building blocks of an
ontology
• Hierarchy: structure imposed on certain ontological
concepts via transitive relations that generally hold to be
universally true (e.g. ISA, Part-Whole, Locative, etc.)
• Contextual Knowledge Base: semantic contexts that
encapsulate knowledge of events via semantic relations
• Axioms on Demand: assertions about concepts of interest
generated from the available knowledge; this is useful for
reasoning on text
</p>
      <p>[Figure 1: Jaguar ontology constituents: the ontology concept set, the hierarchy (e.g., anthrax ISA biological weapon), and the contextual knowledge base.]</p>
      <p>The
input to Jaguar includes a document collection (Text, MS
Word, PDF and HTML web pages, etc.) and a seeds file
containing the concepts/keywords of interest in the domain.
Jaguar’s ontology creation involves complex text processing
using advanced Natural Language Processing (NLP) tools, and
an advanced knowledge classification/management algorithm.
A single run of Jaguar can be divided into the following two
major phases:
• Text Processing
• Classification/Hierarchy Formation</p>
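      <p>As a rough illustration only (not Lymba’s actual implementation), the ontology constituents listed above can be modeled with a few simple data structures; the class and method names below are hypothetical:</p>
      <preformat>
```python
from dataclasses import dataclass, field

@dataclass
class Ontology:
    """Hypothetical container mirroring Jaguar's output constituents."""
    concepts: set = field(default_factory=set)    # ontological concepts
    hierarchy: set = field(default_factory=set)   # (child, relation, parent) triples, e.g. ISA, Part-Whole
    contexts: list = field(default_factory=list)  # semantic contexts: relation groups around a central concept

    def add_hierarchy_link(self, child, relation, parent):
        # A hierarchy link also registers both endpoints as concepts.
        self.concepts.update({child, parent})
        self.hierarchy.add((child, relation, parent))

onto = Ontology()
onto.add_hierarchy_link("anthrax", "ISA", "biological weapon")
print(sorted(onto.concepts))  # ['anthrax', 'biological weapon']
```
      </preformat>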
      <p>
        In Text Processing, the first step is to extract textual content
from the input document collection. The text files then go
through a set of NLP processing tools: named-entity
recognition, part-of-speech tagging, syntactic parsing, word-sense
disambiguation, coreference resolution, and semantic parsing
(or semantic relation discovery) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The concept discovery
module then extracts the concepts of interest using the input
seeds set as a starting point and growing it based on the
extracted NLP information [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        The classification module forms a hierarchical structure
within the set of identified domain concepts via transitive
relations that generally hold to be universally true (e.g. ISA,
Part-Whole, Locative, etc.). Jaguar uses well-formed procedures [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
to impose a hierarchical structure on the discovered concepts
set using the semantic relations discovered by Polaris [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and
with WordNet [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] as the upper ontology.
      </p>
      <p>A. Automatically Building NIPF Ontologies</p>
      <p>In this paper, we use Jaguar to create an ontology library
for the 33 topics defined in NIPF. For each NIPF topic, we
collected 500 documents from the web (the Weapons topic
was an exception and its collection had only 50 Wikipedia
documents) and manually verified their relevance to the
corresponding topic. We then use Jaguar to create an ontology
for each identified NIPF topic. Jaguar builds each ontology
with rich semantic content extracted from the corresponding
NIPF topic document collection while keeping the manual
intervention to a minimum. These ontologies are fine-tuned
to contain the level of detail desired by an analyst.</p>
      <p>1) Extracting Textual Content: We first extract text from
the input NIPF document collections and then filter/clean-up
the extracted text. The NIPF text input to Jaguar comes from
all possible document types, including MS Word, PDF and
HTML web pages, and is therefore prone to having many
irregularities, such as incomplete, strangely formatted sentences,
headings, and tabular information. The text extraction and
filtering mechanism of Jaguar is a crucial step that makes the
input acceptable for subsequent NLP tools to process it. The
extraction/filtering rules include conversion/removal of
non-ASCII characters, verbalization of Wikipedia infoboxes and
tables, and conversion of punctuation symbols, among others.</p>
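      <p>A minimal sketch of this kind of clean-up step, assuming simple string and regex rules (Jaguar’s actual extraction/filtering rules are more elaborate):</p>
      <preformat>
```python
import re
import unicodedata

def clean_extracted_text(text: str) -> str:
    """Normalize extracted text before NLP processing (illustrative rules only)."""
    # Convert typographic punctuation to plain ASCII equivalents.
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    text = text.replace("\u2013", "-").replace("\u2014", "-")
    # Decompose accented characters, then drop remaining non-ASCII bytes.
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    # Collapse whitespace runs left over from tables and headings.
    return re.sub(r"\s+", " ", text).strip()

print(clean_extracted_text("Caf\u00e9  \u201cbiological\u2019  weapons\u201d"))
# prints: Cafe "biological' weapons"
```
      </preformat>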
      <p>2) Initial Seed Set Selection: For each NIPF topic, Jaguar
is provided with an initial seed set containing on average
51 concepts of interest. The seed set is used to determine
the set of text sentences of interest in a topic’s document
collection. The initial seed set selection for the NIPF topic
was performed manually based on the concepts found in the
topic descriptions. The initial seed selection process is the
only manual step that we use in our NIPF ontology creation
process. We are currently exploring automated methods for
creating the initial seed set using a combination of statistical
and semantic clues in the document collection.</p>
      <p>3) Concept and Relation Discovery: For each NIPF topic,
the set of text files extracted from the document collection is
processed through the entire set of NLP tools listed in Section II.
The NLP processed data files are then passed through the
concept discovery module, which identifies noun concepts in
sentences which are related to the NIPF topic target words or
seeds. The concept discovery module analyzes the syntactic
parse tree of each processed sentence and scans them for
noun phrases. Though Jaguar has the capability to extract
verb concepts by analyzing verb phrases, for our current
NIPF ontology creation experiment, we focused only on noun
concepts and their semantic relations. Each noun phrase is then
processed and well-formed noun concepts are extracted based
on a set of syntactic patterns and rules.</p>
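      <p>As an illustrative stand-in for these syntactic patterns and rules (the actual rule set is not specified here), a noun concept can be approximated as a maximal run of optional adjectives followed by nouns over POS-tagged tokens:</p>
      <preformat>
```python
def extract_noun_concepts(tagged_tokens):
    """Extract noun concepts from (word, POS-tag) pairs.

    Hypothetical, simplified logic: a concept is taken to be a maximal
    run of optional adjectives (JJ*) followed by one or more nouns (NN*).
    """
    concepts, current = [], []
    for word, pos in list(tagged_tokens) + [("", "END")]:  # sentinel flushes the last phrase
        has_noun = any(p.startswith("NN") for _, p in current)
        if pos.startswith("JJ") and not has_noun:
            current.append((word, pos))
        elif pos.startswith("NN"):
            current.append((word, pos))
        else:
            if has_noun:
                concepts.append(" ".join(w for w, _ in current))
            current = []
    return concepts

tagged = [("the", "DT"), ("biological", "JJ"), ("weapon", "NN"),
          ("spreads", "VBZ"), ("anthrax", "NN")]
print(extract_noun_concepts(tagged))  # ['biological weapon', 'anthrax']
```
      </preformat>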
      <p>
        Noun concepts (which are part of the seed set), their
semantic relations (extracted from the semantic parser, Polaris [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ],
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]) and the noun concepts involved in semantic relations with
the seed set concepts are added into data structures for
subsequent processing into the ontology’s hierarchy. The resulting
data structures are processed and used to populate one or
many semantic contexts, groups of relations or nested contexts
which hold true around a common central concept. The seed
set is then augmented with concepts that have hierarchical
relations with the target words or seeds. The entire process
of sentence selection, concept extraction, semantic relation
extraction and seed concepts set augmentation is repeated in
an iterative manner, n times (by default, n is set
to 3). While processing the NIPF topic collections through
Jaguar, we used ISA, Part-Whole and Synonymy semantic
relations for automatically augmenting the seeds concept set.
Figure 2 depicts this iterative process of extracting concepts
and semantic relations of interest using seed concepts.
      </p>
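      <p>The iterative sentence selection, extraction, and seed augmentation loop can be sketched as follows; here extract_relations is a hypothetical stand-in for the Polaris semantic parser, and the toy sentences are invented:</p>
      <preformat>
```python
def grow_seed_set(sentences, seeds, extract_relations, n=3):
    """Iteratively augment a seed concept set (illustrative sketch).

    extract_relations(sentence) is assumed to yield (concept1, relation,
    concept2) triples; only ISA, Part-Whole (PW) and Synonymy (SYN)
    relations grow the seed set, mirroring the NIPF runs described above.
    """
    grow_with = {"ISA", "PW", "SYN"}
    seeds, all_relations = set(seeds), set()
    for _ in range(n):
        # Select only sentences that mention a current seed concept.
        selected = [s for s in sentences if any(c in s for c in seeds)]
        new = set()
        for sent in selected:
            for c1, rel, c2 in extract_relations(sent):
                if c1 in seeds or c2 in seeds:
                    all_relations.add((c1, rel, c2))
                    if rel in grow_with:
                        new.update({c1, c2})
        if new.issubset(seeds):
            break  # fixed point reached early
        seeds |= new
    return seeds, all_relations

# Toy demonstration with a stub extractor (hypothetical data):
def stub_extract(sentence):
    table = {
        "anthrax is a biological weapon": [("anthrax", "ISA", "biological weapon")],
        "a biological weapon is a weapon": [("biological weapon", "ISA", "weapon")],
    }
    return table.get(sentence, [])

seeds, relations = grow_seed_set(
    ["anthrax is a biological weapon", "a biological weapon is a weapon"],
    {"anthrax"}, stub_extract)
print(sorted(seeds))  # ['anthrax', 'biological weapon', 'weapon']
```
      </preformat>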
      <p>4) Creating Concept Hierarchies: The extracted NIPF topic
noun concepts and semantic relations are fed to the
classification module to determine the hierarchical structure.
Certain hypernymy relations discovered via classification contain
anomalies (causing cycles) or redundancies. Hence, we run
them through a conflict resolution engine to detect and correct
inconsistencies. The conflict resolution engine creates a NIPF
topic hierarchy link by link (relation by relation) and follows
a conflict avoidance technique, wherein each new link is
tested for causing inconsistencies before being added to the
hierarchy.</p>
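      <p>The link-by-link conflict avoidance technique can be sketched as a reachability test over the partial hierarchy before each link is committed (a simplified illustration, not the actual engine):</p>
      <preformat>
```python
def creates_cycle(hierarchy, child, parent):
    """Return True if adding child -> parent would close a cycle.

    hierarchy maps a concept to the set of its direct parents; we walk
    upward from `parent` and check whether we can reach `child` again.
    """
    stack, seen = [parent], set()
    while stack:
        node = stack.pop()
        if node == child:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(hierarchy.get(node, ()))
    return False

def add_link(hierarchy, child, parent):
    """Conflict avoidance: test every new link before adding it."""
    if creates_cycle(hierarchy, child, parent):
        return False  # reject the inconsistent link
    hierarchy.setdefault(child, set()).add(parent)
    return True

h = {}
add_link(h, "anthrax", "biological weapon")
add_link(h, "biological weapon", "weapon")
print(add_link(h, "weapon", "anthrax"))  # False: the link would close a cycle
```
      </preformat>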
      <p>5) Ontology Merging: Although single runs of Jaguar yield
rich NIPF ontologies, Jaguar’s real power lies in providing an
ontology maintenance option to layer ontologies from many
different runs. Figure 3 depicts the process of merging two
ontologies through conflict resolution algorithms. Jaguar can
merge disparate ontologies or add new knowledge by using the
aforementioned conflict resolution techniques. The merge tool
merges the two ontologies’ concept sets, hierarchies (using
conflict resolution), and their knowledge bases (set of semantic
contexts). Given two ontologies or knowledge bases, ontology
merging is performed by enumerating the relations in the
smaller ontology and adding them to the larger or reference
ontology. A relation may either be represented by a similar
relation in the reference ontology, may create a redundant
path between concepts or may be a new relation that can
be added to the reference ontology. The conflict resolution
techniques are then used for handling the conflict induced in
the ontology to generate a merged ontology. Merging is useful
for distributed or parallel systems where small chunks of the
input text may be processed on some portions of the system
and then subsequently merged. It also provides a foundation
for future work in contextual reasoning and epistemic logic.
The resulting rich NIPF knowledge bases can be viewed at
many different levels of granularity, providing an analyst with
the level of detail desired.</p>
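      <p>A simplified sketch of this merge procedure, treating each ontology as a set of (child, relation, parent) triples and skipping relations that are duplicates or already implied by a transitive ISA path (the actual conflict resolution techniques are richer than this):</p>
      <preformat>
```python
def merge_ontologies(reference, other):
    """Merge `other` into the larger `reference` ontology (illustrative)."""
    def reachable(triples, start, goal):
        # Depth-first walk over ISA links only.
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == goal:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(p for c, r, p in triples if c == node and r == "ISA")
        return False

    merged = set(reference)
    for child, rel, parent in sorted(other):
        if (child, rel, parent) in merged:
            continue  # already represented by a similar relation
        if rel == "ISA" and reachable(merged, child, parent):
            continue  # would create a redundant path between concepts
        merged.add((child, rel, parent))  # genuinely new relation
    return merged

ref = {("anthrax", "ISA", "biological weapon"),
       ("biological weapon", "ISA", "weapon")}
oth = {("anthrax", "ISA", "weapon"),          # redundant: implied transitively
       ("sarin", "ISA", "chemical weapon")}   # genuinely new
merged = merge_ontologies(ref, oth)
print(("sarin", "ISA", "chemical weapon") in merged)  # True
```
      </preformat>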
    </sec>
    <sec id="sec-2-1">
      <title>III. EVALUATION OF JAGUAR’S NIPF ONTOLOGIES</title>
      <p>
Since the mid-1990s, various methodologies have been
proposed to evaluate ontology generation/maintenance/reuse
techniques [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. All the proposed methodologies have focused
on some facet of the ontology generation problem, and depend
on the type of ontology being created/maintained and the
purpose of the ontology [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. It is noted that not much
progress has been achieved in developing a comprehensive and
global technique for evaluating the correctness and relevance
of ontologies [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>Pr(Correctness) = [Nj(correct) + Nj(irrelevant)] / [Nj(correct) + Nj(incorrect) + Nj(irrelevant)]</p>
      <p>Pr(Correctness + Relevance) = Nj(correct) / [Nj(correct) + Nj(incorrect) + Nj(irrelevant)]</p>
      <p>Cvg(Correctness) = [Nj(correct) + Nj(irrelevant)] / [Ng(correct) + Ng(irrelevant) + Ng(added)]</p>
      <p>Cvg(Correctness + Relevance) = Nj(correct) / [Ng(correct) + Ng(added)]
(1)</p>
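      <p>The metrics in (1) can be computed directly from the annotation counts; a small illustrative helper (the count values below are invented, not results from the paper):</p>
      <preformat>
```python
def precision_coverage(nj_correct, nj_incorrect, nj_irrelevant,
                       ng_correct, ng_irrelevant, ng_added):
    """Pr and Cvg metrics of (1); Nj counts come from Jaguar's output,
    Ng counts from the gold annotations."""
    nj_total = nj_correct + nj_incorrect + nj_irrelevant
    pr_correctness = (nj_correct + nj_irrelevant) / nj_total
    pr_correct_rel = nj_correct / nj_total
    cvg_correctness = (nj_correct + nj_irrelevant) / (ng_correct + ng_irrelevant + ng_added)
    cvg_correct_rel = nj_correct / (ng_correct + ng_added)
    return pr_correctness, pr_correct_rel, cvg_correctness, cvg_correct_rel

print(precision_coverage(8, 1, 1, 8, 1, 1))  # (0.9, 0.8, 0.9, 0.888...)
```
      </preformat>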
      <p>
        We evaluated the quality of Jaguar’s NIPF ontologies by
comparing them against manual gold annotations. Following
the ontology evaluation levels defined in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], our evaluations
are focused on the Lexical, Vocabulary, or Data Layer and
the Other Semantic Relations levels. For a NIPF topic, the
ontology and document collection were manually annotated
by several human annotators and used in the evaluation of the
ontology. Viewing an ontology as a set of semantic relations
between two concepts, the annotators:
• Labeled an entry Correct if the concepts and the semantic
relation were correctly detected by the system; otherwise,
marked the entry Incorrect
• Labeled a Correct entry as Irrelevant if any of the
concepts or the semantic relation were irrelevant to the
domain
• Added new entries from the sentences if the concepts and
the semantic relation were omitted by Jaguar
      </p>
      <p>The annotation rules provide feedback on the automated
concept tagging and semantic relation extraction and are
also used for computing precision (Pr) and coverage (Cvg)
metrics for the automatically generated ontologies. Equations
in (1) capture the metrics defined by Lymba to evaluate
Jaguar’s automatic topical NIPF ontology generation from
text. In (1), Nj(.) gives the counts from Jaguar’s output and
Ng(.) gives the counts from the user annotations. Table II
presents our initial evaluation results for 4 NIPF topics using a
subset of 3 semantic relations (ISA, PW and CAU relations)
defined in Table I. Table III presents the semantic relation and
concept extraction statistics for the four NIPF ontologies being
evaluated in this paper.</p>
      <p>We use the metrics defined in (1) to evaluate the
ontologies against the manual annotations from different human
annotators. The results in Table II represent the evaluation
scores which have been averaged over the results for different
annotators. The first column in Table II identifies the number
of annotators for each topic. Jaguar obtained the best
Precision results in both Correctness and Correctness+Relevance
evaluations for the Weapons NIPF topic. Note that, as
shown in Table III, a smaller number of concepts/semantic
relations was extracted for this topic due to its smaller
collection size (50 documents versus the 500-document set
for the other topics). The Terrorism NIPF topic obtained the
best Coverage result for the Correctness evaluation and it
was also very close to the best Coverage result obtained
by the Missiles NIPF topic for the Correctness+Relevance
evaluation. The Weapons NIPF topic obtained the best
F-Measure result (β = 1) for the Correctness evaluation, while
the Missiles NIPF topic obtained the best F-Measure result for
the Correctness+Relevance evaluation.</p>
    </sec>
    <sec id="sec-3">
      <title>IV. CONCLUSIONS AND FUTURE WORK</title>
      <p>In this paper, we presented the semi-automatic development
of an ontology library for the NIPF topics. We used Jaguar-KAT,
a state-of-the-art tool for knowledge acquisition and domain
understanding, with minimized manual intervention to create
NIPF ontologies loaded with rich semantic content. We also
defined evaluation metrics to assess the quality of the NIPF
ontologies created using our methodology. We evaluated a
subset of Jaguar’s NIPF ontologies by comparing them against
manual gold annotations. The results are promising and
show that a considerable amount of knowledge was automatically
and accurately extracted by Jaguar from the input document
collection while keeping the manual intervention in the process
to a minimum. We plan to perform further analysis of the
results and identify methods for improving the precision and
coverage of text processing and ontology generation.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bixler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Moldovan</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Fowler</surname>
          </string-name>
          , “
          <article-title>Using knowledge extraction and maintenance techniques to enhance analytical performance,”</article-title>
          <source>in Proceedings of International Conference on Intelligence Analysis</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cimiano</surname>
          </string-name>
          ,
          <source>Ontology Learning and Population from Text: Algorithms, Evaluation and Applications</source>
          . Springer,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Moldovan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Srikanth</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Badulescu</surname>
          </string-name>
          , “
          <article-title>Synergist: Topic and user knowledge bases from textual sources for collaborative intelligence analysis</article-title>
          ,
          <source>” in CASE PI Conference</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Ratsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schultz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Saric</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C.</given-names>
            <surname>Lavin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Wittig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Reyle</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Rojas</surname>
          </string-name>
          , “
          <article-title>Developing a protein-interactions ontology</article-title>
          ,
          <source>” Comparative and Functional Genomics</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>85</fpage>
          -
          <lpage>89</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Pinto</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Martins</surname>
          </string-name>
          , “
          <article-title>Ontologies: How can they be built?”</article-title>
          <source>Knowledge and Information Systems</source>
          , vol.
          <volume>6</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>441</fpage>
          -
          <lpage>464</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          “FBI: National Security Branch - FAQ,”
          <source>Last accessed on Jul 21</source>
          ,
          <year>2008</year>
          , available at http://www.fbi.gov/hq/nsb/nsb_faq.htm#NIPF.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D. I.</given-names>
            <surname>Moldovan</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Girju</surname>
          </string-name>
          , “
          <article-title>An interactive tool for the rapid development of knowledge bases,”</article-title>
          <source>International Journal on Artificial Intelligence Tools</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>1-2</issue>
          , pp.
          <fpage>65</fpage>
          -
          <lpage>86</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Badulescu</surname>
          </string-name>
          , “
          <article-title>Classification of semantic relations between nouns,”</article-title>
          <source>Ph.D. dissertation</source>
          , The University of Texas at Dallas,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Girju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Giuglea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Olteanu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Fortu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Bolohan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Moldovan</surname>
          </string-name>
          , “
          <article-title>Support vector machines applied to the classification of semantic relations in nominalized noun phrases</article-title>
          ,
          <source>” in Lexical Semantics Workshop in Human Language Technology (HLT)</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Miller</surname>
          </string-name>
          , “
          <article-title>WordNet: a lexical database for English,”</article-title>
          <source>Communications of the ACM</source>
          , vol.
          <volume>38</volume>
          , no.
          <issue>11</issue>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>41</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sure</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Perez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Daelemans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Reinberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Guarino</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N. F.</given-names>
            <surname>Noy</surname>
          </string-name>
          , “
          <article-title>Why evaluate ontology technologies? Because it works!”</article-title>
          <source>IEEE Intelligent Systems</source>
          , vol.
          <volume>19</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>74</fpage>
          -
          <lpage>81</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Brank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Grobelnik</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Mladenic</surname>
          </string-name>
          , “
          <article-title>A survey of ontology evaluation techniques,”</article-title>
          <source>in Data Mining and Data Warehouses (SiKDD)</source>
          , Ljubljana, Slovenia,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gangemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Catenacci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ciaramita</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          , “
          <article-title>Modelling ontology evaluation and validation</article-title>
          ,”
          <source>in European Semantic Web Symposium/Conference (ESWC)</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>140</fpage>
          -
          <lpage>154</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>