<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Universidad Autónoma de Madrid</institution>
          ,
          <addr-line>28049 Madrid</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <fpage>124</fpage>
      <lpage>165</lpage>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>Using Automatically Generated Students’ Clickable Conceptual Models for E-tutoring</title>
      <p>Abstract. Computer methods for evaluating students’ knowledge have
traditionally been based on Multiple Choice Questions (MCQs) or
fill-in-the-blank exercises, which do not provide a reliable basis upon which
to assess students’ underlying misconceptions. To address this shortcoming, we
have devised and implemented a procedure for automatically deriving
clickable students’ conceptual models from their free-text answers. A
student’s conceptual model can be defined as a network of interrelated
concepts, each associated with a confidence value that indicates how well the
student knows that concept. Several knowledge representation formats are
used to show the generated conceptual model to the student.
Furthermore, students can click on the concepts to get more information about
them. 22 English Studies students are taking advantage of this new
resource to review their Pragmatics course. Initial results show that they
have found it very useful and claim that it is a good support for their
review of the subject.</p>
      <sec id="sec-2-1">
        <title>Introduction</title>
        <p>
          According to the theory of constructivism [
          <xref ref-type="bibr" rid="ref1 ref30 ref8">1</xref>
          ], knowledge can be defined as the
product of a learning activity in which an individual assimilates and
accommodates new information into his or her cognitive structure in accordance with
the environment as s/he understands it. Thus, in educational terms, a student
builds his or her specific cognitive structure or conceptual model, understood
here as a network of concepts, depending on his or her particular features and
previous knowledge. Moreover, in conformity with the Meaningful Learning
Theory of Ausubel [
          <xref ref-type="bibr" rid="ref2 ref31 ref9">2</xref>
          ], students can learn new concepts only if they have a base of
previous concepts to which to link the new concepts.
        </p>
        <p>
          Therefore, it is necessary to have some reliable strategy to model the
student’s conceptual knowledge. Currently, there are systems such as ConceptLab
[
          <xref ref-type="bibr" rid="ref10 ref3 ref32">3</xref>
          ] which represents the student model as a concept map that facilitates the
sharing of knowledge among students and the assessment of students’
knowledge by teachers; and STyLE-OLM [
          <xref ref-type="bibr" rid="ref11 ref33 ref4">4</xref>
          ] which interactively builds the student’s
conceptual model through a dialogue between the student and the system. These
systems are at the forefront of computer-supported tutoring and assessment.
        </p>
        <p>
          In previous work [
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ], we devised a procedure for automatically deriving
inspectable students’ conceptual models from free-text answers. The domain model
is partially generated from information provided by the teacher, and the
student’s conceptual model can be defined as a network of interrelated concepts, in
which each concept has an associated confidence value that indicates how well
it has been understood by each student according to a set of metrics. The
conceptual model can also refer to a group of students, in which case, each concept
is also associated with a confidence value that indicates how well on average the
class has understood the concept. Both the student’s conceptual model and the
class conceptual model can be generated from the students’ free-text answers
using a set of Natural Language Processing (NLP) tools. The generated models
are made available to both students and teachers so that they can keep track of
the students’ conceptual evolution during the course, allowing them to focus on
the least understood concepts, which prevent the assimilation of new concepts.
        </p>
        <p>The procedure has been implemented in the Will Tools1, which are a set of
web-based applications consisting of: Willow, an automatic and adaptive
free-text students’ answer scorer; Willov, a conceptual model viewer; Willed, an
authoring tool; and Willoc, a configuration tool. In this paper, we present the
next step of the procedure: to give the student more control over the generated
model, with the consequence that it can be used not only for evaluation but also
for tutoring. In order to achieve this goal, the students are no longer presented
all the domain concepts in the conceptual model. Instead, only concepts with a
confidence value higher than a certain threshold are shown. In this way, students
can see how they construct their knowledge at their own particular rhythm from
a blank conceptual model to a conceptual model with all domain concepts. Each
domain concept will appear as it is correctly used in the answers provided to
Willow but only if its confidence-value is higher than the threshold (e.g. 0.1).</p>
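The thresholding step just described can be sketched as follows (an illustrative Python fragment, not Willow’s actual code; the function name and data layout are our assumptions):

```python
# Illustrative sketch (not the actual Willow code): hide every concept
# whose confidence value (CV) does not exceed the threshold.

def visible_concepts(confidence_values, threshold=0.1):
    """Return only the concepts the student has used correctly often enough."""
    return {concept: cv for concept, cv in confidence_values.items()
            if cv > threshold}

cvs = {"thread": 0.6, "semaphore": 0.0, "deadlock": 0.25}
print(visible_concepts(cvs))  # "semaphore" stays hidden until it is used
```

With the threshold at 0.1, a concept appears as soon as it has been used correctly at least once in the student’s answers.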
        <p>Furthermore, the conceptual model is not only inspectable but clickable.
Students can click on each concept of their conceptual model and learn more
about it. This is useful to orient the study towards the concepts that are least
understood, and guide the student to the questions that involve these concepts.
It is also important to observe that since students can look at the conceptual
model of the whole class, they can click on a concept that does not appear in
his or her particular conceptual model, but that appears in the class conceptual
model, and which may be important to assimilate if the concept is a precondition
for assimilating other concepts.</p>
        <p>A study is being undertaken in the 2007-2008 academic year, with 22 English
Studies students using the Will Tools to review their Pragmatics course. Initial
results show that students have found this new resource useful and they claim
that it is a good support for their review of the course.</p>
        <p>This paper is organized as follows: Section 2 describes the domain and
student’s conceptual models; Section 3 depicts some clickable and evolving
representation formats in which the students’ conceptual models are shown; Section 4
reports the results of the experiment performed with a group of English Studies
students; and, finally Section 5 provides the main conclusions of the paper.
1 The systems are available on-line at http://www.eps.uam.es/~dperez/index1.html</p>
      </sec>
      <sec id="sec-2-2">
        <title>Domain and student’s conceptual model</title>
        <p>The domain model contains the reference information of the course or
area-of-knowledge under assessment. The information is provided by the teachers using
the authoring tool called Willed. There may be one or more teachers using Willed
to describe a course. In fact, it is preferable that more than one teacher
takes part, so that the creation of the domain model is less
dependent on a particular individual.</p>
        <p>Firstly, teachers are asked the name of the course to model. Secondly, they
are asked the name of the lessons of the course, and thirdly, they have to provide
a set of questions per topic. The minimum information that should be given per
question is: its statement in natural language; its maximum numerical score; its
numerical score to pass the question; its difficulty level in the range low (0),
medium (1) or high (2); the topic to which the question is related; and, finally,
a set of correct answers or references in natural language.</p>
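The minimum per-question information listed above could be organized as the following record (a hypothetical sketch in Python; the field names and types are our assumptions, not Willed’s actual schema):

```python
# Hypothetical sketch of the per-question record a teacher fills in with
# an authoring tool such as Willed; names and types are our assumptions.
from dataclasses import dataclass, field

@dataclass
class Question:
    statement: str                # statement in natural language
    max_score: float              # maximum numerical score
    pass_score: float             # numerical score needed to pass
    difficulty: int               # 0 = low, 1 = medium, 2 = high
    topic: str                    # lesson the question belongs to
    references: list = field(default_factory=list)  # correct answers

q = Question("What is a thread?", 10.0, 5.0, 0, "Concurrency",
             ["The smallest unit of scheduled execution within a process."])
```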
        <p>
          In order to organize this information provided by the teacher in the domain
model, we have devised a hierarchical structure of knowledge into three different
types of concepts. The reason for using this structure is to follow the organization
of the course provided by the teachers as much as possible. The three types of
concepts devised are:
– Area-of-knowledge-concepts (AC): It is the name of the course to assess
as indicated by the teachers.
– Topic-concepts (TCs): They are the name of the lessons of the course as
indicated by the teachers.
– Basic-concepts (BCs): They are the key concepts of the area of
knowledge under study. BCs are automatically extracted from the correct answers
provided by the teachers to each question of the course using an automatic
Term Identification module [
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ]. Teachers can also later review this list of
BCs and modify it as they consider appropriate.
        </p>
        <p>
          For instance, for an “Operating Systems” course, the AC would be “Operating
Systems”, one TC could be the “Concurrency” lesson, and one BC could be
“thread”. Moreover, given that the goal is to find out the level of assimilation of
each concept per student, all concepts are associated to a confidence-value (CV)
that reflects how well the system estimates that the student knows them. The
CV of a concept is between 0 and 1. A lower value means that the student does
not know the concept as s/he does not use it, while a higher value means that
the student confidently uses that concept. The CV is automatically updated as
the student answers questions according to a set of metrics [
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ]. The CV of a TC
is calculated as the mean value of the CVs of the BCs that it groups. The CV
of an AC is calculated from the CVs of its related TCs.
        </p>
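The aggregation just described can be sketched as follows (illustrative Python; the paper derives the AC value from the TC values without fixing the formula, so the plain mean used for the AC is our assumption):

```python
# Sketch of CV aggregation: a TC's CV is the mean of its BCs' CVs.
# The AC's CV is derived from the TC CVs; a plain mean is our assumption.

def tc_cv(bc_cvs):
    """Confidence value of a topic-concept from its basic-concepts."""
    return sum(bc_cvs) / len(bc_cvs)

def ac_cv(tc_cvs):
    """Confidence value of the area-of-knowledge-concept (assumed mean)."""
    return sum(tc_cvs) / len(tc_cvs)

concurrency = tc_cv([0.8, 0.4])      # e.g. BCs "thread" and "semaphore"
overall = ac_cv([concurrency, 0.2])  # AC "Operating Systems"
```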
        <p>Regarding the relationships between the concepts, we have devised three
types of links between them according to the type of concepts that they relate
(and following the criterion of adjusting the model as much as possible to the
traditional course provided by the teacher):
– Type 1, between ACs and TCs: Given that a course is usually structured
into lessons, type 1 links relate the concept representing the whole course (the
AC) with each lesson (each TC). A topic-concept may belong to different
area-of-knowledge concepts, but as the model only represents one course,
each TC can only be related to the AC. Type 1 links are automatically
extracted from the information provided by the teachers (i.e. which lessons
correspond to each course).
– Type 2, between TC and BC: Given that each lesson has a set of
questions with correct answers, type 2 links relate the concept representing the
lesson (each TC) with each concept treated in that lesson (each BC). A
basic-concept belongs to one or more topic-concepts. These relationships are
important because they give us information about how the basic-concepts
are grouped into topic-concepts and, how the students are able to use the
BC in the different questions of the topics of the course. TCs are not linked
among themselves, as the relationships between the topics are already
captured by the type 3 links. Type 2 links are automatically extracted from the
relationships between the topics and, the concepts found in the reference
answers of the questions of the topic.
– Type 3, between two BCs: A basic-concept can be related to one or more
basic-concepts. These links are very important as they reflect how BCs are
related in the student’s cognitive structure as extracted from the students’
answers. Therefore, unlike type 1 and type 2 links that are automatically
extracted from the information provided by the teachers, type 3 links are
automatically extracted from the information provided by the students.</p>
        <p>We define a student’s conceptual model as a simplified representation of
the concepts and relationships among them that each student keeps
in his or her mind about an area of knowledge at a given point of
time. Conceptual models are useful both as a data model to guide the system’s
assessment of the student, and also as a form of feedback to both student and
teacher, indicating the current state of progress of the student. As a resource to
the system, the order and content of questions can be selected to focus on the
misconceptions or erroneous links detected. In terms of feedback to the teacher
and student, the presentation of a student’s conceptual model makes evident the
student’s strengths and weaknesses. The teacher can also view the conceptual
model of the class as a whole to see the strengths and weaknesses of the class,
which may suggest that they need to spend more time teaching certain topics.</p>
        <p>
          The student’s conceptual model is not introduced by the teacher or by the
student, but generated from the answers provided by the students to the Willow
system [
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ]. The core idea is to compare the free-text answer provided by the
student to a set of correct free-text answers provided by the teachers, such that
the more similar they are, the higher the score the student achieves. Furthermore,
the system takes the frequency of use of the concepts in the student’s answer
into account in contrast to the frequency of use of the concepts in the teachers’
answers with the idea that students should not use concepts not contemplated
by their teachers in their answers, use them too frequently, or ignore concepts
that are considered important by the teachers [
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ].
        </p>
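A toy version of this frequency comparison might look as follows (our illustration only; the actual metrics of [5] are more elaborate and the tolerance factor is a hypothetical parameter):

```python
# Illustrative sketch only -- the actual metrics of [5] differ.
# Penalise concepts the teachers never use, concepts used too
# frequently, and important concepts the student ignores.
from collections import Counter

def frequency_penalty(student_answer, reference_answers, tolerance=2.0):
    student = Counter(student_answer.lower().split())
    reference = Counter()
    for ans in reference_answers:
        reference.update(ans.lower().split())
    penalty = 0
    for term, n in student.items():
        if term not in reference:
            penalty += 1                 # concept not contemplated by teachers
        elif n > tolerance * reference[term]:
            penalty += 1                 # concept used too frequently
    penalty += sum(1 for term in reference if term not in student)  # ignored
    return penalty
```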
        <p>
          Initially, each student’s conceptual model has only the area-of-knowledge
concept (AC) and the topic-concepts (TCs) as indicated by the teacher and stored
in the domain model. Both the AC and the TCs are initially assigned a
confidence-value of 0, indicating that the student has never used them. Similarly, only type 1
and 2 links are represented, as extracted from the domain model. Next, when the
students start using Willow to answer the questions indicated by the teachers,
they will start providing free-text answers, and from these answers, Willow
automatically identifies the basic-concepts used. Moreover, Willow calculates the
confidence-value associated to each concept according to the frequency metrics
[
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ], and looks for type 3 links between BCs in the student’s answer.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>Some conceptual model representation formats</title>
        <p>
          The conceptual model can be represented in several knowledge representation
formats: a concept map, a conceptual diagram, a table, a bar chart and a
textual summary. The conceptual model is always updated with the information
gathered from the students’ answers. This permits the capture of the
conceptual evolution of the students, since the conceptual models generated at different
times can be stored and reviewed later. In our previous work [
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ], both students
and teachers could enter a conceptual model viewer (COMOV) to look at the
inspectable representation of the models during the course. However, as a
result of the experiments performed with the Willow+COMOV systems during
the 2005-2006 and 2006-2007 academic years, we thought that it would be more
convenient not to show the whole conceptual model to the students, but just
the concepts with a CV higher than a certain threshold so that students could
actually see how they are building their conceptual models as they answer more
questions in Willow.
        </p>
        <p>Therefore, we have changed the way the student and class models are
accessed. In particular, students can now look at their own conceptual model and
the class conceptual model in the Willow system, whereas teachers can look at
the conceptual model of any student or group of students in a new conceptual
model viewer for teachers (Willov). In this way, both students and teachers can
keep track of the evolution of the models by looking at them several times during
the course. The difference now is that students can only see the concepts with a
CV higher than a certain threshold (e.g. 0.1, that is, the concepts that have been
mentioned at least once in their answers), while the teachers’ representation is
the same as in the previous version, showing all the concepts irrespectively of
their CV. Additionally, students and teachers can see the conceptual model for
each topic under review independently of other topics, and also a global view for
all topics.</p>
        <p>Furthermore, in order to help students understand the concepts that they
have used wrongly, they can now click on each concept and be presented with
an automatically generated explanation page. That is, the models are now not
only inspectable but also clickable, and thus more power has been given to the
student to control his or her learning. This does not give more work to the
teacher. In fact, the teacher does not have to write the explanation page. It
is generated from the information provided when the course was created. In
particular, the explanation page shows all questions and the correct answers in
which the concept has been used. The concept is marked with a color background
so that the student can extract the meaning of the concept from the different
contexts in which it appears.</p>
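Such an explanation page can be sketched as follows (our simplification in Python; the marking syntax and function name are assumptions, not Willow’s implementation):

```python
# Sketch of an automatically generated explanation page: collect every
# question and reference answer in which the clicked concept appears,
# and highlight the concept (here with bracket marking for simplicity).

def explanation_page(concept, questions):
    """questions is a list of (statement, reference_answer) pairs."""
    lines = []
    for statement, answer in questions:
        if concept in answer:
            lines.append(statement)
            lines.append(answer.replace(concept, f"[{concept}]"))
    return "\n".join(lines)

page = explanation_page("thread", [
    ("What is a thread?", "A thread is a unit of execution."),
    ("What is paging?", "Paging divides memory into frames."),
])
print(page)
```

The student thus sees the concept highlighted in every context in which the teachers used it, without the teacher writing any extra material.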
        <p>Regarding the possible representation formats of the automatically generated
student’s conceptual models, two will be described in this paper: concept maps
and conceptual diagrams. Concept maps are particularly useful for displaying
networks of concepts. Each node represents a concept and the links between the
nodes represent the relationships between the concepts. A web-like organization
of the map has been chosen, as it is one of the most suitable formats for the
hierarchy of concepts (BC, TC, AC) proposed. The type of node is indicated
by the size and place in the concept map: the AC is bigger and it is always at
the center, the TCs are medium-size and are placed in the second radial line,
while the BCs are smaller and are placed in the outer radial lines; and the links
have been reorganized in an effort to avoid crossings. The conceptual model can
also be presented as a hierarchical diagram, with the most important concept
at the top and less relevant concepts below. In this format, the focus is just on
the concepts, and the relationships among them are not explicitly represented.
Figure 1 shows a concept map and a conceptual diagram representation of the
student’s conceptual model for one topic.</p>
      </sec>
      <sec id="sec-2-4">
        <title>Experiment</title>
        <table-wrap id="tab1">
          <label>Table 1.</label>
          <table>
            <thead>
              <tr><th>Use</th><th>Map</th><th>Diagram</th><th>Table</th><th>Graph</th><th>Text</th><th>Total</th></tr>
            </thead>
            <tbody>
              <tr><td>Individual</td><td>3</td><td>5</td><td>9</td><td>3</td><td>3</td><td>23</td></tr>
              <tr><td>Class</td><td>1</td><td>2</td><td>1</td><td>1</td><td>1</td><td>6</td></tr>
              <tr><td>Individual+Class</td><td>4</td><td>3</td><td>0</td><td>0</td><td>0</td><td>7</td></tr>
              <tr><td>Class+Individual</td><td>6</td><td>2</td><td>0</td><td>0</td><td>0</td><td>8</td></tr>
              <tr><td>Total</td><td>14</td><td>12</td><td>10</td><td>4</td><td>4</td><td>44</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>In the 2007-2008 academic year, Willow was used by 22 students out of 45
studying a “Pragmatics” course within the Department of English. Teachers provided
material for Willow, consisting of 49 questions, each with 3 correct answers and
covering four topics of the “Pragmatics” course. The use of the system was
completely voluntary and did not affect the grade given in the subject. The goal of
the experiment was to find out whether the students find the new utilities in
Willow useful for reviewing their course. It is important to highlight that since
Willow is a Blended Learning tool, we do not aim to replace the teacher, but to
support both the teachers and students by providing an alternative knowledge
acquisition, assessment and representation format.</p>
        <p>The only technical knowledge needed to use Willow is the ability to use a
web browser. However, as it was the first time the students used computers as a
support for their studies, we gave them a short tutorial on the main features of
Willow, and we organized a first day of using Willow in class (in contrast with the
normal intention of using the system after class). As we did not want to interfere
with their manner of interaction with Willow (just the opposite, we wanted the
students to explore the system by themselves), we did not explain some new
features such as how to get more information about concepts by clicking on the
display of the conceptual model, or how to follow their progress by looking at
their conceptual model several times during the semester.</p>
        <p>Rather than basing our evaluation on user questionnaires, which requires
more work from the student, we set Willow to log each action the student
performs within the system. In this way, at the end of the first day of using Willow
in class we had 22 logs (49% of the students volunteered to use the system in
class). These logs revealed that even though they had not been told that they
could check their progress by looking several times at the model after having
answered questions, 14 students looked at the conceptual model 44 times, as
gathered in Table 1.</p>
        <p>Regarding how the conceptual model was viewed, the concept map format
was most popular (32% of views). The conceptual diagram form was second in
popularity (27%), while the bar chart and the textual summary were the least
popular formats (possibly because they were the last options on the menu).
Regarding the use of the individual versus class conceptual model, in 52% of
the cases, students looked at only their own conceptual model, while in 48%
of the cases they looked at both their own and the class conceptual models.
When tabular presentation was used, the students were more concerned with
their own results rather than the global results of the class. It is also interesting
to observe that the number of students who looked first at their individual model
and then at the class conceptual model is similar to the number of students
who looked at the models in the reverse order.</p>
      </sec>
      <sec id="sec-2-5">
        <title>Conclusions</title>
        <p>The use of automatically generated students’ conceptual models from the
free-text answers provided to Willow has been extended not only for evaluation
purposes but also for tutoring. Only concepts with a confidence value higher than
a certain threshold are shown in the representation of the generated conceptual
model, so that concepts that have never been used by the student do not appear
in his or her own model. The student can still see these concepts in the class
conceptual model and click on them to generate an immediate explanation page
to find out what information is lacking in his or her answers and to improve them.
In this way, the next time that s/he answers the questions failed in Willow, if the
student uses the new information provided by the explanation page, s/he will
be able not only to pass the question but to generate a conceptual model with
more concepts marked as correctly known, indicating that s/he has achieved a
better knowledge of the subject.</p>
        <p>A study is being undertaken in the 2007-2008 academic year, with 22 English
Studies students using Willow to review their Pragmatics course. From the logs
of the use of Willow, it can be stated that one of the most popular representation
formats is the individual concept map.</p>
      </sec>
      <sec id="sec-2-6">
        <title>Acknowledgments</title>
        <p>This work has been sponsored by the Spanish Ministry of Science and Technology,
project number TIN2007-64718.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Operational Specification for FCA using Z</title>
      <p>Simon Andrews and Simon Polovina
Faculty of Arts, Computing, Engineering and Sciences</p>
      <p>Sheffield Hallam University, Sheffield, UK</p>
      <p>{s.andrews, s.polovina}@shu.ac.uk</p>
      <p>Abstract. We present an outline of a process by which operational
software requirements specifications can be written for Formal Concept
Analysis (FCA). The Z notation is used to specify the FCA model and
the formal operations on it. We posit a novel approach whereby key
features of Z and FCA can be integrated and put to work in
contemporary software development, thus promoting operational specification as
a useful application of conceptual structures.</p>
      <sec id="sec-3-1">
        <title>Introduction</title>
        <p>
          The Z notation is a method of formally specifying software systems [
          <xref ref-type="bibr" rid="ref1 ref2 ref30 ref31 ref8 ref9">1, 2</xref>
          ]. It is
a mature method with tool support [
          <xref ref-type="bibr" rid="ref10 ref3 ref32">3</xref>
          ] and an ISO standard. Its strength is
in providing a rigorous approach to software development. Formal methods of
software engineering allow system requirements to be unambiguously specified.
The mathematical specifications produced can be formally verified and tools
exist to aid with proof and type checking. Being based on typed set theory and
first order predicate logic, Z is in a position to be exploited as a method of
specification of systems modeled using FCA.
        </p>
        <p>An issue with formal methods has been the amount of effort required to
produce a mathematical specification of the software system being developed.
Having a ‘ready-made’ mathematical model provided by FCA would allow formal
methods to have a new outlet. Whilst FCA can already be used to aid in the
understanding and implementation of software systems (see next Section), Z can
provide the method and structure by which FCA can be properly integrated into
a development life cycle.</p>
        <p>
          Work linking FCA and Z has been undertaken [
          <xref ref-type="bibr" rid="ref11 ref33 ref4">4</xref>
          ] that uses FCA as a means
by which Z specifications can be explored and visualised. However, it does not
appear that the link has been established in the other direction, i.e. that an FCA
model can be taken as a starting point for functional requirements specification
in Z. We are interested in specifying functional system requirements as
operations on the FCA data model, thus allowing the strengths of FCA and Z to be
combined. Work on algorithms based on FCA has been carried out, for example
by Carpineto and Romano [
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ], but here we are suggesting a formal approach to
the abstract specification of system requirements that can assist in transforming
the conceptual model into an implementation.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>FCA in Software Development</title>
        <p>
          FCA has been used in a number of ways for software development; for modeling
the data structure of software applications, such as ICE [
          <xref ref-type="bibr" rid="ref13 ref35 ref6">6</xref>
          ], DVDSleuth [
          <xref ref-type="bibr" rid="ref14 ref36 ref7">7</xref>
          ] and
HierMail [
          <xref ref-type="bibr" rid="ref15 ref37">8</xref>
          ], and as the basis for specialised application building environments
such as ToscanJ [
          <xref ref-type="bibr" rid="ref16 ref38">9</xref>
          ] and Galicia [
          <xref ref-type="bibr" rid="ref17 ref39">10</xref>
          ]. However, there appears to be little work
concerning the use of FCA as part of a general software engineering life cycle.
        </p>
        <p>
          Tilley et al. [
          <xref ref-type="bibr" rid="ref18 ref40">11</xref>
          ] have conducted a survey of FCA support for software
engineering activities which found that the majority of reported work was concerned
with object-oriented re-engineering of existing/legacy systems and class
identification tasks. They found little that related to a wider software engineering
context or to particular life cycle phases.
        </p>
        <p>
          One piece of work that does relate FCA directly to phases of the software life
cycle has been carried out by Hesse and Tilley [
          <xref ref-type="bibr" rid="ref19 ref41">12</xref>
          ]. They discuss how FCA
applies to requirements engineering and analysis. By taking a use-case approach,
relating information objects to functional processes, they show that a
hierarchical program structure can be produced. They suggest that FCA can play a
central role in the software engineering process as a form of concept-based
software development. The approach of this paper embodies their idea, with FCA
providing the information structure and Z providing the process specification
(Figure 1).
        </p>
        <p>Fig. 1. FCA and Z in the Software Life Cycle</p>
      </sec>
      <sec id="sec-3-3">
        <title>From FCA to Z</title>
        <p>In FCA, a formal context consists of a set of objects, G, a set of attributes, M,
and a relation between G and M, I ⊆ G × M. A formal concept is a pair (A, B)
where A ⊆ G and B ⊆ M. Every object in A has every attribute in B. For
every object in G that is not in A, there is an attribute in B that that object
does not have. For every attribute in M that is not in B, there is an object in A
that does not have that attribute. A is called the extent of the concept and B is
called the intent of the concept. If g ∈ A and m ∈ B then (g, m) ∈ I, or gIm.</p>
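These defining conditions are equivalent to the usual derivation operators, which can be sketched directly in Python (an illustrative fragment; the context is modelled as plain sets of pairs):

```python
# Derivation operators for a formal context (G, M, I), with I a set of
# (object, attribute) pairs, and a check that (A, B) is a formal concept.

def intent(A, M, I):
    """Attributes shared by every object in A."""
    return {m for m in M if all((g, m) in I for g in A)}

def extent(B, G, I):
    """Objects having every attribute in B."""
    return {g for g in G if all((g, m) in I for m in B)}

def is_concept(A, B, G, M, I):
    """(A, B) is a concept iff the derivations of A and B yield each other."""
    return intent(A, M, I) == B and extent(B, G, I) == A

G = {"g1", "g2"}
M = {"m1", "m2"}
I = {("g1", "m1"), ("g1", "m2"), ("g2", "m1")}
print(is_concept({"g1"}, {"m1", "m2"}, G, M, I))  # True
```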
        <p>In Z, information structures are declared based upon a typed set theory. To
apply this in FCA, G becomes such a type, namely the universal set of objects of
interest. Similarly, M becomes the universal set of attributes that the objects of
interest may have. The notation g : G declares an object g of type G and m : M
declares an attribute m of type M. Sets can be declared using the powerset
notation, P, and relations declared by placing an appropriate arrow between
related types.</p>
        <sec id="sec-3-3-1">
          <title>Formal Context as a System State</title>
          <p>Using the Z notation, the formal context and concepts can be specified as state
variables in a state schema (Figure 2), declaring the relation I, along with a
concept function, S, which maps extents to intents. S is declared as an injection:
an intent has one and only one extent, and an extent has one and only one intent. The
lower section of the schema (the schema predicate) logically describes how I and
S are related. A : P G declares that A is a set of objects; B is the intent of A.
ContextAndConcepts
I : G ↔ M
S : P G ⤔ P M
∀ A : P G; B : P M | (A, B) ∈ S •
∀ g : G; m : M • g ∈ A ∧ m ∈ B ⇔ gIm ∧
∀ g : G | g ∉ A • ∃ m : M | m ∈ B ∧ ¬ gIm ∧
∀ m : M | m ∉ B • ∃ g : G | g ∈ A ∧ ¬ gIm</p>
          <p>Fig. 2. State Schema specifying a Formal Context and its Concepts.
In the predicate, | can be read as ‘such that’ and • can be read as ‘then’.</p>
          <p>
            Although a proof is not attempted here, the predicate appears, by inspection,
to satisfy Wille’s conditions for deriving concepts, so that A = Bᴵ and B = Aᴵ
[
            <xref ref-type="bibr" rid="ref20 ref42">13</xref>
            ].
          </p>
        </sec>
        <sec id="sec-3-3-2">
          <title>Query Operations</title>
          <p>In Z, a query postfix, ?, is used to indicate an input to an operation and an
exclamation postfix, !, is used to indicate an output from an operation. The
symbol Ξ indicates that the operation does not change the value of the state
variables.</p>
          <p>In Z, if R is a binary relation between X and Y , then the domain of R
(dom R) is the set of all members of X which are related to at least one member
of Y by R. The range of R (ran R) is the set of all members of Y to which at
least one member of X is related by R.</p>
          <p>By making use of the concept function, S , and the fact that it is injective,
operations to output the intent of an extent and to output the extent of an
intent, are easily specified. Figure 3 specifies the latter in an operation schema
called FindExtent.</p>
          <p>A strength of the Z notation is its notion of preconditions and
postconditions. Preconditions are statements that must be true for the operation to be
successful and postconditions specify the result of the operation. In FindExtent,
the precondition B ? ∈ ran S states that the input set of attributes must be in
the range of S . The postcondition A! = S ∼(B ?) obtains the extent by inverting
S and supplying it with the intent.</p>
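<p>The precondition/postcondition reading of FindExtent can be sketched as follows (an illustrative sketch, ours; the injective S is modelled as a dictionary and S∼ as its inversion):

```python
# Sketch of FindExtent: because S is injective it can be inverted; the
# precondition B? in ran S becomes a membership test before the inversion.
# S maps extents (frozensets of objects) to intents (frozensets of attributes).
S = {
    frozenset({"g1", "g2"}): frozenset({"m1"}),
    frozenset({"g2"}): frozenset({"m1", "m2"}),
}
S_inv = {B: A for A, B in S.items()}    # S~, well-defined since S is injective

def find_extent(B):
    if B not in S_inv:                  # precondition: B? must be in ran S
        raise ValueError("not an intent of any concept")
    return S_inv[B]                     # postcondition: A! = S~(B?)

print(sorted(find_extent(frozenset({"m1"}))))  # ['g1', 'g2']
```
</p>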
          <p>FindIntent is not specified here as it is, essentially, a mirror of FindExtent,
with the input being a set of objects and the output being the corresponding set
of attributes, B ! = S (A?).</p>
          <p>FindExtent
ΞContextAndConcepts
B? : P M
A! : P G
B? ∈ ran S
A! = S∼(B?)</p>
          <p>Fig. 3. An operation to find the extent of an intent</p>
          <p>A query operation that outputs an object’s attributes, called FindAttributes,
is shown in Figure 4. The set of attributes is obtained by taking the relational
image of I through a set containing the object of interest. Again, the operation
FindObjects (for an attribute of interest) is similar and is not specified here.</p>
          <p>FindAttributes
ΞContextAndConcepts
g? : G
B! : P M
g? ∈ dom I
B! = I (| {g?} |)</p>
          <p>Fig. 4. An operation to find an object’s attributes</p>
          <p>Operation schemas to find object concepts and attribute concepts can be
specified according to Wille’s definitions, γg := ({g}II , {g}I ) and γm := ({m}II , {m}I ),
by piping together the corresponding object/attribute, extent/intent queries
using a chevron notation, &gt;&gt;. The output from the schema preceding the chevrons
becomes the input for the schema that follows them:</p>
          <p>FindObjectConcept ≙ FindAttributes &gt;&gt; FindExtent ,</p>
          <p>FindAttributeConcept ≙ FindObjects &gt;&gt; FindIntent .</p>
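<p>A sketch (ours, not the paper's) of the piping: the attribute set produced by the first query is fed to the second, yielding the object concept γg:

```python
# Sketch of FindObjectConcept: pipe FindAttributes into FindExtent so that
# the object concept is ({g}^II, {g}^I).
G = {"g1", "g2"}
M = {"m1", "m2"}
I = {("g1", "m1"), ("g2", "m1"), ("g2", "m2")}

def find_attributes(g):                       # {g}^I, relational image of I
    return frozenset(m for (x, m) in I if x == g)

def find_extent(B):                           # B^I
    return frozenset(g for g in G if all((g, m) in I for m in B))

def find_object_concept(g):                   # the two queries piped together
    B = find_attributes(g)                    # output of the first schema...
    return find_extent(B), B                  # ...becomes input of the second

extent_, intent_ = find_object_concept("g1")
print(sorted(extent_), sorted(intent_))  # ['g1', 'g2'] ['m1']
```
</p>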
          <p>In each case, we are interested in the outputs of both of the piped schemas, so
that γg = (A!, B!?) and γm = (A!?, B!). The postfix !? indicates that something
is first an output and then an input.</p>
          <p>A strength of the Z notation is its notion of before and after states, i.e. a clear
distinction is made between the values of state variables before an operation is
carried out and their values after the operation is carried out. A state variable
decorated with a prime indicates that it is in the after state. The symbol Δ
indicates that an operation changes the state.</p>
          <p>An operation to add a new object to the context can be specified by declaring
the object and the object’s attributes as inputs. The operation schema AddObject
is shown in Figure 5. It is a precondition that the attributes currently exist in
the context.</p>
          <p>In Z, ⩥ subtracts elements from a range and ▷ restricts a range. These are
used in the postcondition involving S to take into account the possibility that
the attributes of the new object form an existing intent. The relevant concept is
updated by adding the new object to the corresponding extent.</p>
          <p>AddObject
ΔContextAndConcepts
g? : G
B? : P M
g? ∉ dom I
B? ⊆ ran I
I′ = I ∪ { m : M | m ∈ B? • g? ↦ m }
S′ = (S ⩥ {B?}) ∪ {⋃(dom(S ▷ {B?})) ∪ {g?} ↦ B?}</p>
          <p>Fig. 5. An operation to add a new object</p>
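<p>A sketch (ours, not the paper's) of AddObject's effect on I and S, with the concept function modelled as a dictionary:

```python
# Illustrative sketch (ours): AddObject adds the new (g?, m) pairs to I and
# moves g? into the extent of the concept whose intent equals B?, if one exists.
I = {("g1", "m1"), ("g2", "m1"), ("g2", "m2")}
S = {frozenset({"g1", "g2"}): frozenset({"m1"})}  # concept function: extent to intent

def add_object(g, B):
    assert g not in {x for (x, _) in I}           # precondition: g? not in dom I
    assert B.issubset({m for (_, m) in I})        # precondition: B? within ran I
    I.update((g, m) for m in B)                   # I' = I with the new pairs
    old = next((A for A, Bi in S.items() if Bi == B), frozenset())
    S.pop(old, None)                              # drop the old (extent, B?) pair
    S[old | {g}] = frozenset(B)                   # re-add with g? in the extent

add_object("g3", {"m1"})
print(sorted(next(A for A, Bi in S.items() if Bi == frozenset({"m1"}))))
# ['g1', 'g2', 'g3']
```
</p>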
          <p>A similar operation to add a new attribute can be specified, but is not given
here. Other useful operations that can be specified include those to remove an
object from the context, remove an attribute from the context, remove an
attribute from an object, remove an object from an attribute and to add an existing
attribute to an existing object. It also is possible that other notions in FCA, such
as the superconcept/subconcept relationship and attribute/object implications,
will lend themselves to operational specification in Z.</p>
        </sec>
      </sec>
      <sec id="sec-3-4">
        <title>A User Profile Example</title>
        <p>Consider a user profile system where users belong to groups and groups are
associated with services. The contexts for this system are
usergroupContext : USER ↔ GROUP
groupserviceContext : GROUP ↔ SERVICE</p>
        <p>The complete state schema UserProfileSystem is not given for the sake of
brevity. The concept functions are also omitted (in practice, where concepts are
explicitly required, it may be more pragmatic to specify an axiom to obtain them
from the context, rather than include them explicitly in the system state).</p>
        <p>An operation is required to form a new group from all users who have access
to a particular set of services. The preconditions are that the group must not
already exist and that there must be at least one user who has access to the set
of services (this also ensures that the services exist). The requirement is specified
in Figure 6.</p>
          <p>FormGroup
ΔUserProfileSystem
newgroup? : GROUP
services? : P SERVICE
newgroup? ∉ dom groupserviceContext
usergroupContext ⨾ groupserviceContext ▷ services? ≠ ∅
∃ user : USER | services? ⊆
ran({user} ◁ usergroupContext ⨾ groupserviceContext)
usergroupContext′ = usergroupContext ∪ {user : USER | services? ⊆
ran({user} ◁ usergroupContext ⨾ groupserviceContext) • user ↦ newgroup?}
groupserviceContext′ = groupserviceContext ∪
{service : SERVICE | service ∈ services? • newgroup? ↦ service}</p>
        <p>Fig. 6. An operation to form a new group in the user profile system</p>
          <p>Relational composition is carried out using ⨾, here to form the relation
between users and services. ◁ is domain restriction. Set comprehension is used in
the postconditions in the form {... • x ↦ y}. The mapping x ↦ y defines the
form of the elements of the comprehended set.</p>
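<p>The FormGroup requirement can be sketched operationally as follows (an illustrative Python sketch, ours; the contexts are modelled as sets of pairs and all data values are made up):

```python
# Sketch of FormGroup: compose the two contexts to relate users to services;
# every user whose service set covers services? joins the new group.
usergroup = {("alice", "staff"), ("bob", "staff"), ("bob", "admin")}
groupservice = {("staff", "mail"), ("admin", "shell")}

def compose(R, Q):                     # relational composition R ; Q
    return {(x, z) for (x, y1) in R for (y2, z) in Q if y1 == y2}

def form_group(newgroup, services):
    assert newgroup not in {g for (g, _) in groupservice}  # group must be new
    userservice = compose(usergroup, groupservice)
    users = {u for (u, _) in usergroup}
    members = {u for u in users
               if services.issubset({s for (x, s) in userservice if x == u})}
    assert members                     # at least one user has all the services
    new_ug = usergroup | {(u, newgroup) for u in members}
    new_gs = groupservice | {(newgroup, s) for s in services}
    return new_ug, new_gs

ug, gs = form_group("mailusers", {"mail"})
print(sorted(u for (u, g) in ug if g == "mailusers"))  # ['alice', 'bob']
```
</p>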
        <p>The above example shows how formal contexts, arising from FCA, can be
used in the formal specification of system requirements. The operation schema
FormGroup is an unambiguous specification that can be translated into a
program design.</p>
        <p>Equivalence classes
1. {{x1},{y1},{x2},{y2}}
2. {{x1, y1}, {x2}, {y2}}
3. {{x1,x2}, {y1}, {y2}}
4. {{x1}, {y1, x2}, {y2}}
5. {{x1, y1, x2}, {y2}}
6. {{x1, y2}, {y1}, {x2}}
CHAR-types: lin_labels, identity, new_lin_labels;
Arrays of lists: list_subgraphs; list_gen_graphs;
Arrays: words_markers(CHAR, &lt;CHAR,CHAR,CHAR&gt;) and
sorted_words_markers(CHAR, {&lt;CHAR,CHAR,CHAR&gt;, ..., &lt;CHAR,CHAR,CHAR&gt;});
function &lt;identity(G), lin_labels(G)&gt; ← GRAPH_LINEARISATION(G, Σ), where G
is an SCG presented in logical/graphical format over an ordered alphabet Σ. Given G,
this function returns the pair (i) identity(G), a sorted marker for the identity of concept
instances, and (ii) lin_labels(G), which contains the linear sequence of sorted G labels,
where each binary predicate in G is presented as a triple concept1-relation-concept2.
The function integrates interfaces between our encoding and the other CG formats; it
simplifies and normalises the input graph G and translates it to the desired linearised
form. The sorted identity-marker is a string enumerating the equivalent c-nodes; it
contains digits, '/' and '|' as shown in the samples (1), (2) and (3) above.</p>
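<p>As an illustrative sketch (ours; the paper's encoding also carries identity markers, omitted here), the linearisation of binary predicates into a sorted word of labels can be written as:

```python
# Sketch (ours): linearise an SCG whose binary predicates are
# concept1-relation-concept2 triples by sorting them lexicographically,
# so that a graph always yields the same word of labels.
def graph_linearisation(triples):
    """triples: list of (concept1, relation, concept2) label triples."""
    ordered = sorted(triples)          # lexicographic order over the labels
    return " ".join(" ".join(t) for t in ordered)

g = [("LOVE", "OBJ", "PERSON:Mary"), ("LOVE", "EXPR", "PERSON:John")]
print(graph_linearisation(g))
# LOVE EXPR PERSON:John LOVE OBJ PERSON:Mary
```
</p>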
          <p>function list_gen_graphs ← COMPUTE_INJ_GEN(G, Σ1, Σ2, φ). This function
returns the list of all injective generalisations, written in alphabet Σ2, for a given graph
G written in alphabet Σ1. The generalisations are calculated using the mapping φ,
which defines how the symbols of Σ1 are to be generalised by symbols of Σ2.
function new_lin_labels(Gsub) ← ENSURE_PROJ_MAPPING(lin_labels(Gsub),
identity(Gsub), Σ1, lin_labels(Ggen), identity(Ggen), Σ2, φ).
Given a linearised subgraph Gsub, written in the ordered alphabet Σ1, and its injective
generalisation Ggen, written in the ordered alphabet Σ2, this function checks whether
the order of c-nodes in the sorted string lin_labels(Ggen) corresponds to the order of
the respective specialised c-nodes in the sorted string lin_labels(Gsub). The check is
done following the mapping φ, which defines how the symbols of Σ1 are to be
generalised by symbols of Σ2. (Remember that Gsub and Ggen contain an equal number of
binary predicates, where the ones of Ggen generalise some respective predicates of
Gsub.) If the c-nodes order in lin_labels(Ggen) corresponds to the order of the
respective specialised c-nodes in lin_labels(Gsub), new_lin_labels(Gsub) = lin_labels(Gsub).
Otherwise, lin_labels(Gsub) is rearranged in such a way that the order of its nodes is
aligned to the order of generalising nodes in Ggen. Let lin_labels(Ggen) be as follows:
cgen11 relgen1 cgen12 cgen21 relgen2 cgen22 … cgenk1 relgenk cgenk2
where cgenij, 1 ≤ i ≤ k, j = 1,2 are labels of c-nodes and relgeni, 1 ≤ i ≤ k, are labels of r-nodes.
Then lin_labels(Gsub) is turned to the sequence of labels</p>
          <p>csub11 relsub1 csub12 csub21 relsub2 csub22 … csubk1 relsubk csubk2
where cgenij generalises csubij for 1 ≤ i ≤ k, j = 1,2 and relgeni generalises relsubi for 1 ≤ i ≤ k. The re-arranged labels
of the Gsub nodes are returned in new_lin_labels(Gsub). The string new_lin_labels(Gsub) is
no longer lexicographically sorted but its nodes' order is aligned to the order of the
generalising nodes in Ggen. The c-nodes' topological links in new_lin_labels(Gsub) are
given by identity(Ggen). Thus an injective projection π: Ggen → Gsub is encoded.
Algorithm 1. Construction of a minimal acyclic FSA with markers at the final states
AKB = ‹Σ, Q, q0, F, δ, E, μ›, which encodes all subgraphs' injective generalisations for
a KB of SCGs with binary conceptual relations {G1, G2,…, Gn} over support S.</p>
          <p>Step 1, defining the finite alphabet Σ: Let S = (TC, TR, I, τ) be the KB support
according to definition 1. Define Σ = {x | x ∈ TC or x ∈ TR} ∪ {x:i | x ∈ TC, i ∈ I and τ(i) = x}.
Order the m symbols of Σ using a certain lexicographic order ≺ = &lt;a1,a2,…,am&gt;.</p>
          <p>Step 2, indexing all c-nodes: Juxtapose distinct integer indices to all KB c-nodes,
to ensure their default treatment as distinct instances of the generic concept types.
Then ΣKB = {aij | ai ∈ Σ, 1 ≤ i ≤ m and j is an index assigned to the KB c-node ai, 1 ≤ j ≤ pi,
or j = 'none' when no indices are assigned to ai}.</p>
          <p>Order the symbols of ΣKB according to the lexicographic order
≺KB = &lt;a1s1,…, a1su, a2p1,…, a2pv, ….., amq1,…, amqx&gt; where s1,s2,…,su are the indices
assigned to a1; p1,…,pv are the indices assigned to a2; q1,…,qx are the indices
assigned to am and s1&lt;s2&lt;…&lt;su, p1&lt;p2&lt;…&lt;pv, ….. and q1&lt;q2&lt;…&lt;qx.</p>
          <p>Define a mapping φ: ΣKB → Σ where φ(aij) = ai for each aij ∈ ΣKB, 1 ≤ i ≤ m and j is an index
assigned in ΣKB to the symbol ai ∈ Σ.</p>
          <p>/* Step 3, computation of all KB (conceptual) subgraphs: */
for i ← 1 to n do begin
list_subgraphs(i) ← { Gsub-ji | Gsub-ji is a subgraph of Gi according to definition 10}; end;
/* Step 4, computation and encoding of all injective generalisations: */
var gen_index ← 1;
for each i and Gsub-ji in list_subgraphs(i) do begin
&lt;identity(Gsub-ji), lin_labels(Gsub-ji)&gt; ← GRAPH_LINEARISATION(Gsub-ji, ΣKB);
list_gen_graphs(i,j) ← COMPUTE_INJ_GEN(Gsub-ji, ΣKB, Σ, φ);
for each Ggen in list_gen_graphs(i,j) do begin
&lt;identity(Ggen), lin_labels(Ggen)&gt; ← GRAPH_LINEARISATION(Ggen, Σ);
new_lin_labels(Gsub-ji) ← ENSURE_PROJ_MAPPING(lin_labels(Gsub-ji),
identity(Gsub-ji), ΣKB, lin_labels(Ggen), identity(Ggen), Σ, φ);
words_markers(gen_index, 1) ← lin_labels(Ggen);
words_markers(gen_index, 2) ← &lt;identity(Ggen), new_lin_labels(Gsub-ji), Gi&gt;;
gen_index ← gen_index+1; end; end;
sorted_words_markers ← SORT-BY-FIRST-COLUMN(words_markers);
while sorted_words_markers(*,1) contains k&gt;1 repeating words in column 1,
starting at row p do begin
sorted_words_markers(p, 2) ← {sorted_words_markers(p,2),</p>
          <p>sorted_words_markers(p+1,2),…, sorted_words_markers(p+k-1,2)};
for 1 ≤ s ≤ k-1 do begin DELETE-ROW(sorted_words_markers(p+s,*)) end; end;
L = {w1, w2,…,wz | wi ∈ sorted_words_markers(*,1), 1 ≤ i ≤ z and wi ≺ wj,
for i &lt; j, 1 ≤ i ≤ z and 1 ≤ j ≤ z}.</p>
        <p>
          /* Step 5, FSA construction: */
Consider L as a finite language over Σ, given as a list of words sorted according to ≺.
Apply results of [
          <xref ref-type="bibr" rid="ref14 ref36 ref7">7</xref>
          ] and build directly the minimal acyclic FSA with markers at the
final states AKB = ‹Σ, Q, q0, F, δ, E, μ›, which recognises L = {w1,…,wz}. Then
F = {qwi | qwi is the end of the path beginning at q0 with label wi, for wi ∈ L, 1 ≤ i ≤ z},
E = {Mi | Mi = sorted_words_markers(i,2), 1 ≤ i ≤ z} and μ: qwi ↦ Mi where qwi ∈ F,
sorted_words_markers(i,1) = wi and sorted_words_markers(i,2) = Mi.
        </p>
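<p>A sketch (ours, not the paper's implementation) of the idea behind the marker automaton: an unminimised trie over the sorted words with markers at the final states, so that lookup cost depends only on the query length:

```python
# Sketch: a trie (the acyclic automaton before minimisation) over label words,
# with markers attached to final states; lookup cost depends only on the
# query length, not on the size of the knowledge base.
def build_trie(words_markers):
    root = {}
    for word, marker in words_markers:
        node = root
        for sym in word:                    # word is a sequence of labels
            node = node.setdefault(sym, {})
        node["__markers__"] = marker        # marker stored at the final state
    return root

def lookup(trie, word):                     # O(len(word)) transitions
    node = trie
    for sym in word:
        if sym not in node:
            return None
        node = node[sym]
    return node.get("__markers__")

fsa = build_trie([(("LOVE", "EXPR", "PERSON"), ["M3"])])
print(lookup(fsa, ("LOVE", "EXPR", "PERSON")))  # ['M3']
```
</p>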
        <p>Example 2. We list below 7 (out of 37) subgraphs of the KB at Fig. 2. They are
given as markers &lt;identity-type, linear-subgraph-labels, index-of-main-KB-graph&gt;:
M1: &lt;none, LOVE EXPR PERSON:John, G1&gt;
M2: &lt;1=3, LOVE EXPR PERSON:John LOVE OBJ PERSON:Mary, G1&gt;
M3: &lt;none, LOVE EXPR PERSON, G2&gt;
M4: &lt;1=3, LOVE EXPR PERSON LOVE OBJ PERSON, G2&gt;
M5: &lt;1=3|2=4, LOVE EXPR PERSON LOVE OBJ PERSON, G2&gt;
M6: &lt;2=4, LOVE EXPR PERSON LOVE OBJ PERSON, G2&gt;</p>
        <p>M7: &lt;1=3|2=5, LOVE EXPR PERSON LOVE OBJ PERSON PERSON ATTR NAIVE, G2&gt;
Fig. 4 shows the minimal FSA with markers at the final states, which encodes the 33
injective generalisations of the subgraphs in M1-M7. New markers M8-M11 were
created at step 4 of algorithm 1, to properly encode all data.
4 Injective Projection in Run-Time</p>
        <p>The injective projection is calculated by a look-up in the minimal acyclic FSA,
which encodes all the KB generalisations, with a word built by the query graph labels.
There are two main on-line tasks, given a query G: (i) Presenting G as a sorted
sequence of support symbols, and calculation of its identity-type, in linear time O(n);
(ii) Look-up in the FSA AKB by a word wG. Its complexity is clearly O(n), where n is
the number of G symbols. No matter how large the KB is, all injective projections of
G to the KB are found at once with complexity depending on the input length only.</p>
          <p>Now we see the benefits of the suggested explicit off-line enumerations. In effect,
we enumerate all possible injective mappings from all injective projection queries to
the KB subgraphs. It becomes trivial to check whether an SCG with binary conceptual
relations is equivalent to a certain SCG in the KB. Thus the lexicographic ordering of
conceptual labels provides a convenient formal framework for SCG comparison.
5 Initial Experiments</p>
          <p>We have randomly generated type hierarchies of 600 concept types and 40 relation
types. The experimental KB consists of 291 SCGs with binary conceptual relations in
normal form, each with a length of 3-10 conjuncts. These SCGs have 6753 (conceptual)
subgraphs with 10436190 different injective generalisations. After the lexicographic
sorting of all words (the injective generalisations' labels) is done, they belong to 13885
identity-types, i.e. they are topologically structured in a relatively uniform way. The
minimal acyclic FSA with markers at the final states, which recognises all injective
generalisations, has 2751977 states and 3972096 transition arcs. The input text file of
sorted words, prepared for the FSA construction, is 891.4 MB. The minimal FSA is
52.44 MB, but the markers-subgraphs are encoded externally, i.e. the markers contain
only pointers. The input text file is compressed about 18 times when building the
minimal FSA, which is only 2.4 times bigger than the zipped version of the input file.</p>
          <p>The suggested approach performs off-line as much computation as possible and
provides exceptional run-time efficiency. The implementation requires considerable
off-line preprocessing and a large space, since the off-line tasks operate on raw data. The
star graphs impose strong constraints on the structural patterns while computing
injective generalisations; this is intuitively clear, but now we have experimental
evidence of this uniformity. Currently we plan an experiment with realistic data.
A Framework for Ontology Evaluation</p>
        <p>Muhammad Fahad, Muhammad Abdul Qadir</p>
        <p>
          Center for Distributed and Semantic Computing,
Mohammad Ali Jinnah University, Islamabad, Pakistan
Abstract. Mapping and merging multiple ontologies to produce a consistent,
coherent and correct merged global ontology is an essential process for enabling
heterogeneous multi-vendor semantic-based systems to communicate with
each other. To generate such a global ontology automatically, the individual
ontologies must be free of (all types of) errors. We have observed that the
present error classification does not include all the errors. This paper extends
the existing error classification (Inconsistency, Incompleteness and
Redundancy) and provides a discussion of the consequences of these errors.
We highlight the problems that we faced while developing our DKP-OM
ontology merging system and explain how these errors became obstacles to an
efficient ontology merging process. The paper integrates the ontological errors and
design anomalies for content evaluation of ontologies under one framework.
This framework helps ontologists build semantically correct, error-free ontologies,
enabling effective and automatic ontology mapping and merging
with less user intervention.
1 Introduction
To furnish the semantics for the emerging semantic web, ontologies should represent
formal specifications of the domain concepts and the relationships among them [
          <xref ref-type="bibr" rid="ref1 ref30 ref8">1</xref>
          ].
They have played a fundamental role for describing semantics of data not only in the
emerging semantic web but also in traditional knowledge engineering, and act as a
backbone in knowledge base systems and semantic web applications [
          <xref ref-type="bibr" rid="ref17 ref39">10</xref>
          ]. Like any
other dependable component of a system, an ontology has to go through a repetitive
process of refinement and evaluation during its development lifecycle before its
integration into semantic applications. Ontology content evaluation is one of the
critical phases of Ontology Engineering: if the ontology itself is contaminated
with errors, then the applications dependent on it may face critical and
catastrophic problems, and the ontology may not serve its purpose [
          <xref ref-type="bibr" rid="ref14 ref36 ref7">7</xref>
          ].
        </p>
        <p>
          Several approaches for the evaluation of taxonomic knowledge in ontologies have
been contributed in the research literature. Ontologies can be evaluated by considering
design principles [
          <xref ref-type="bibr" rid="ref16 ref17 ref18 ref38 ref39 ref40">9,10,11</xref>
          ], requirements and logical correctness of axioms, relations,
instances, etc. Other approaches evaluate ontologies in terms of their use
in an application [
          <xref ref-type="bibr" rid="ref25">18</xref>
          ] and predictions from their results, comparison with a golden
standard or source of data [
          <xref ref-type="bibr" rid="ref20 ref42">13</xref>
          ]. Considering design principles, Gomez-Perez formed an error
taxonomy to assist in ontology evaluation. Ontology engineers use that error
taxonomy to build well-formed classifications of concepts that enable better reasoning
support in fulfilment of the sound semantic web vision, and to evaluate their ontologies
with respect to these errors. Besides taxonomic errors, there are some design
anomalies which raise issues for the maintainability of ontologies [
          <xref ref-type="bibr" rid="ref2 ref31 ref9">2</xref>
          ].
        </p>
        <p>
          This paper presents the ontological errors, based on design principles, for the
evaluation of ontologies. It provides an overview of the ontological errors and design
anomalies that reduce reasoning power and create ambiguity when inferring from
concepts. It presents our contribution to taxonomic errors, which we experienced while
developing the ontology merging system DKP-OM [
          <xref ref-type="bibr" rid="ref13 ref35 ref6">6</xref>
          ]. Finally, it integrates the
design anomalies and taxonomic errors under one framework that helps practitioners,
developers and ontologists build well-formed, error-free ontologies that serve
their purposes, and develop tools for ontology evaluation in fulfilment of the sound
semantic web vision.
        </p>
        <p>
          The rest of the paper is organized as follows: section 2 presents the classification of
ontological errors and design anomalies; section 3 contributes our identified
ontological errors and extends the classes of errors formed by Gomez-Perez. Section 4
presents the related work of our domain. Section 5 concludes the paper.
2 Taxonomic Errors and Design Anomalies
Gomez-Perez [
          <xref ref-type="bibr" rid="ref17 ref18 ref39 ref40">10,11</xref>
          ] identified three main classes of taxonomic errors that might
occur when modeling a conceptualization into taxonomies. The following subsections
elaborate each class of error identified by Gomez-Perez.
2.1 Inconsistency Errors
There are mainly three types of errors that cause inconsistency and ambiguity in an
ontology: Circulatory errors, Partition errors and Semantic inconsistency
errors.
        </p>
        <p>Circulatory errors: These occur when a class is defined as a subclass or superclass of
itself at any level of the hierarchy in the ontology. They can occur with distance 0, 1 or n,
depending upon the number of relations involved when traversing the concepts down
the hierarchy until we reach the same concept from which we started the traversal. For
example, a circulatory error of distance 0 occurs when the ontologist models the OddNumber
concept as a subclass of NaturalNumber and NaturalNumber as a subclass of
OddNumber. As OWL ontologies provide constructs to form property hierarchies, we
have observed that circulatory errors can also occur in property hierarchies.
Partition errors: There are several ways of classification, depending upon the
type of decomposition of a superclass into subclasses. When all the features of the
subclasses are independently described and the subclasses do not overlap with each other,
this leads to a disjoint decomposition. When ontologists follow the completeness
constraint between the subclasses and the superclass, this leads to a complete or
exhaustive decomposition. A partition depends on both disjoint and exhaustive
decomposition. Three types of errors are:
Common instances and classes in disjoint decompositions and partitions: These
errors occur when ontologists create instances that belong to various disjoint
subclasses, or a common class as a subclass of disjoint classes. An example of the former
error is when the ontologist decomposes the Course concept into disjoint subclasses
GradCourse and UndergradCourse, and furthermore classifies the CS6304 course as
an instance of both disjoint classes. An example of the latter error is when the ontologist
decomposes the NaturalNumber concept into disjoint subclasses Odd and Even, and
furthermore classifies the Prime number class as a subclass of both the Odd and Even
subclasses.</p>
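<p>An illustrative check (ours, not from the paper) for circulatory errors, walking the SubclassOf relation and reporting a concept reachable from itself:

```python
# Illustrative sketch: detect a circulatory error by checking whether a
# concept can reach itself along SubclassOf edges at any distance.
def has_circulatory_error(subclass_of, concept):
    """subclass_of: dict mapping a class to the set of its direct superclasses."""
    seen, stack = set(), [concept]
    while stack:
        current = stack.pop()
        for parent in subclass_of.get(current, ()):
            if parent == concept:
                return True            # the concept is its own ancestor
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return False

hierarchy = {"OddNumber": {"NaturalNumber"}, "NaturalNumber": {"OddNumber"}}
print(has_circulatory_error(hierarchy, "OddNumber"))  # True
```
</p>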
        <p>External instances in exhaustive decompositions and partitions: These errors occur
when ontologists make an exhaustive decomposition or partition of a class into many
subclasses, but not all the instances of the base class belong to the subclasses, i.e., one
or more instances of the base class do not belong to any of the subclasses. For example, an
ontologist decomposes Accommodation into the Hotel, House and Shelter subclasses.
This error occurs if he defines TrainStation as an instance of the class
Accommodation.</p>
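<p>A sketch (ours) of detecting external instances in an exhaustive decomposition, following the TrainStation example above:

```python
# Sketch: flag instances of the base class that fall outside all the
# subclasses of a decomposition that was declared exhaustive.
def external_instances(instance_of, base, subclasses):
    """instance_of: dict mapping an instance to the set of its classes."""
    return {i for i, classes in instance_of.items()
            if base in classes and not classes.intersection(subclasses)}

instances = {"Hilton": {"Accommodation", "Hotel"},
             "TrainStation": {"Accommodation"}}
print(external_instances(instances, "Accommodation",
                         {"Hotel", "House", "Shelter"}))   # {'TrainStation'}
```
</p>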
        <p>Semantic Inconsistency Errors: These errors occur when ontologists make an
incorrect class hierarchy by classifying a concept as a subclass of a concept to which
it does not really belong. For example, the ontologist classifies the concept SeaPlane as
a subclass of the concept AirPlane. The same mistake can be made when classifying instances.
We find three main reasons that result in incorrect semantic classification and classify
the semantic inconsistency errors into three subclasses, explained in the section on
extensions to taxonomic errors below.
2.2 Incompleteness Errors
Sometimes ontologists make a classification of concepts but overlook some of the
important information about them. Such incompleteness often creates ambiguity and
weakens reasoning mechanisms. The following subsections give an overview of the
incompleteness errors.</p>
        <p>Incomplete Concept Classification: This error occurs when ontologists overlook
some of the concepts present in the domain while classifying a particular concept.
For example, ontologists classify the concept Location into CulturalLocation and
MountainLocation, and overlook other location types such as BeachLocation,
HistoricLocation, etc.</p>
        <p>
          Partition Errors: Gomez-Perez identified that sometimes ontologists omit important
axioms or information about the classification of a concept, reducing the reasoning power
and inferring mechanisms. He identified two types of errors that cause incomplete
partitions, namely:
Disjoint Knowledge Omission: This error occurs when ontologists classify a
concept into many subclasses and partitions, but omit the disjoint knowledge axiom
between them. For example, the ontologist models BeachLocation, HistoricLocation
and MountainLocation as subclasses of the Location concept, but omits to model the
disjoint knowledge axiom between the subclasses. We developed the ontology of
Access_Policy, where disjoint knowledge omission between User and Administrator
causes catastrophic results [
          <xref ref-type="bibr" rid="ref26">19</xref>
          ], and provided the algorithm for identification of
disjoint knowledge omission [
          <xref ref-type="bibr" rid="ref23">16</xref>
          ].
        </p>
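<p>The disjoint knowledge omission check can be illustrated with a small sketch (ours, not the cited algorithm): report sibling subclasses with no declared pairwise disjointness axiom:

```python
# Illustrative sketch (ours): list sibling subclasses for which no pairwise
# disjointness axiom has been declared.
from itertools import combinations

def disjoint_omissions(subclasses, disjoint_pairs):
    """subclasses: dict superclass to list of subclasses;
    disjoint_pairs: set of frozensets of classes declared disjoint."""
    missing = []
    for parent, kids in subclasses.items():
        for a, b in combinations(kids, 2):
            if frozenset({a, b}) not in disjoint_pairs:
                missing.append((parent, a, b))
    return missing

taxonomy = {"Location": ["BeachLocation", "HistoricLocation", "MountainLocation"]}
declared = {frozenset({"BeachLocation", "HistoricLocation"})}
print(len(disjoint_omissions(taxonomy, declared)))  # 2 undeclared sibling pairs
```
</p>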
        <p>Due to the significant importance of the disjoint axiom between classes, OWL 1.1 allows
disjoint axioms to be specified between properties as well. So we also emphasize that
ontologists should check and specify disjoint knowledge between properties, and
avoid creating common instances between them.</p>
        <p>Exhaustive Knowledge Omission: This error occurs when ontologists do not follow
the completeness constraint while decomposing a concept into subclasses and
partitions. For example, the ontologist models BeachLocation, HistoricLocation and
MountainLocation as disjoint subclasses of the Location concept, but does not specify
whether or not this classification forms an exhaustive decomposition.
2.3 Redundancy Errors
Redundancy occurs when particular information is inferred more than once from the
relations, classes and instances found in the ontology. The following are the types of
redundancies that might be made when developing taxonomies.</p>
        <p>
          Redundancies of SubclassOf, SubpropertyOf and InstanceOf relations:
Redundancies of SubclassOf occur when ontologists specify classes that have
more than one SubclassOf relation, directly or indirectly. Directly means that a duplicate
SubclassOf relation exists between the same source and target classes. Indirectly
means that a SubclassOf relation exists between a class and its indirect superclass of
any level. For example, ontologists specify BeachLocation as a subclass of Location
and Place, and furthermore Location is defined as a SubclassOf Place. Here an indirect
SubclassOf relation exists between BeachLocation and Place, creating redundancy.
Likewise, redundancy of SubpropertyOf can exist while building property hierarchies.
Redundancies of the InstanceOf relation occur when, for example, ontologists specify the instance Swat as
an InstanceOf the Location and Place classes, when it is already defined that Location is a
subclass of Place. The explicit InstanceOf relation between Swat and Place creates
redundancy, as Swat is an indirect instance of Place because Place is a superclass of Location.
Identical formal definitions of classes, properties and instances: Identical formal
definitions of classes, properties or instances may occur when the ontologist defines
different (or the same) names for two classes, properties or instances respectively, but
provides the same formal definition.
2.4 Design Anomalies in Ontologies
Besides taxonomic errors, Baumeister and Seipel [
          <xref ref-type="bibr" rid="ref2 ref31 ref9">2</xref>
          ] identified some design
anomalies that prohibit simplicity and maintainability of taxonomic structures within an
ontology. These do not cause inaccurate reasoning about concepts, but point to
problematic and badly designed areas in the ontology. Identification and removal of these
anomalies is necessary for improving the usability, and providing better
maintainability, of the ontology.
        </p>
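<p>The redundant SubclassOf relations of Section 2.3 can also be detected mechanically; an illustrative sketch (ours): a direct SubclassOf edge is redundant when the same superclass is reachable through another parent:

```python
# Sketch: an edge (c, p) in the SubclassOf graph is redundant when p is also
# a transitive superclass of one of c's other direct parents.
def redundant_subclass_edges(subclass_of):
    """subclass_of: dict class to set of direct superclasses."""
    def ancestors(c):                       # transitive superclasses of c
        seen, stack = set(), list(subclass_of.get(c, ()))
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(subclass_of.get(x, ()))
        return seen
    return {(c, p) for c, parents in subclass_of.items()
            for p in parents
            if any(p in ancestors(o) for o in parents - {p})}

tax = {"BeachLocation": {"Location", "Place"}, "Location": {"Place"}}
print(redundant_subclass_edges(tax))  # {('BeachLocation', 'Place')}
```
</p>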
        <p>Property Clumps: Datatype properties and object properties that are associated with
classes provide powerful mechanisms for reasoning and inferring about concepts.
Sometimes ontologists design an ontology badly by repeatedly using a group of properties
in different class definitions. Such a repeated group of properties is called a property
clump, and should be replaced by an abstract concept composing those properties in
all the class definitions where the clump is used.</p>
        <p>Chain of Inheritance: An ontology defines a taxonomy of concepts and allows
classifying concepts as subClassOf other concepts up to any level. When such a
hierarchy of inheritance is long enough and the classes in the hierarchy have no
appropriate descriptions except what they inherit, the ontology suffers from a
chain of inheritance. For maintainability and simplicity, this chain of inheritance
should be broken up into subhierarchies.</p>
        <p>Lazy Concepts: A lazy concept is a leaf concept (or a property) in the taxonomy that
never appears in the application and does not have any instances. Such concepts
should be replaced with specialized or generalized concepts that hold such
instances and would be used in the application domain.</p>
        <p>
          Lonely Disjoints: Sometimes ontologists need to modify the taxonomy of concepts
and move concepts within the class hierarchy. Consider a scenario where many
disjoint siblings were created and later a single sibling is moved to another place
somewhere in the hierarchy, and the ontologist forgets to delete the disjoint axiom
between them. Such disjoint axioms should be removed from lonely disjoint concepts
to enable better maintainability and reasoning support.
3 Extensions in Taxonomic Errors
We have identified several ontological errors [
          <xref ref-type="bibr" rid="ref14 ref22 ref23 ref26 ref27 ref36 ref7">7,15,16,19,20</xref>
          ] while evaluating
taxonomic knowledge in ontologies and knowledge-based systems, and have extended the
three main classes of taxonomy evaluation, i.e., Inconsistency, Incompleteness and
Redundancy. Some of these were experienced while developing DKP-OM: Disjoint
Knowledge Preserver based Ontology Merger [
          <xref ref-type="bibr" rid="ref13 ref35 ref6">6</xref>
          ], a solution we provide for effective
ontology merging. The subsections present our identified ontological errors.
3.1 Semantic Inconsistency Errors
There are mainly three reasons from which incorrect semantic classification
originates [
          <xref ref-type="bibr" rid="ref14 ref36 ref7">7</xref>
          ]. Accordingly, we categorize semantic inconsistency
errors into three subclasses. These subclasses can be used as a checklist for class
hierarchy evaluation and help in building a well-formed class hierarchy that provides a
better interpretation of concepts.
        </p>
        <p>Weaker domain specified by subclass error: This error may occur when classes that
represent a larger domain are made subclasses of a concept that possesses a smaller
domain. For example, an ontologist classifies the UniversityMember, AcademicStaff,
AdminStaff and LabStaff concepts as subclasses of the Staff superclass. The
semantic inconsistency arises because the more generalized concept
UniversityMember is classified as a subclass of the concept Staff. A subclass should always
specialize (be subsumed by) the superclass concept's properties by specifying a stronger
domain, making the super concept's domain narrower.</p>
        <p>Domain breach specified by subclass error: Subconcepts should possess all the
features of their parent concept and should not violate any feature of their parent
concept in their own domain. A superclass domain breach occurs when concepts treated
as subclasses add features that are not present in the superclass and these additional
features violate some features of the superclass. For example, consider a
Pizza class hierarchy where an ontologist classifies the concept VegetarianPizza as a
subclass of Pizza, and further classifies the ChinesePizza and ItalianPizza
concepts as subclasses of VegetarianPizza. A semantic inconsistency
arises because the definition of ChinesePizza allows toppings made from boiled
vegetables and any kind of meat, which violates the vegetarian restriction.</p>
        <p>Disjoint domain specified by subclass error: This error occurs when ontologists
specify concepts from a disjoint domain as subclasses of a concept that occupies a
different domain. For example, an ontologist classifies the concepts Drink and Burger as
subclasses of the EatableThing concept. None of the features of Drink match the
superclass concept EatableThing, i.e., they belong to disjoint domains.</p>
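        <p>The three subclass checks above can be illustrated with a toy model in which each concept is a set of features (the feature names and the exact classification rules are our assumptions, kept deliberately simple):

```python
# Toy illustration (ours) of the three subclass errors: each concept is
# modeled as a set of features, and a declared subclass is compared
# against the feature set of its declared superclass.

def check_subclass(features, sub, sup):
    f_sub, f_sup = features[sub], features[sup]
    if f_sup <= f_sub:
        return "ok"               # sub carries every feature of sup
    if f_sub < f_sup:
        return "weaker domain"    # sub is MORE general than sup
    if not (f_sub & f_sup):
        return "disjoint domain"  # nothing in common with sup
    return "domain breach"        # overlaps sup but violates features

features = {
    "Staff":            {"employed"},
    "UniversityMember": set(),                    # more general than Staff
    "AcademicStaff":    {"employed", "teaches"},  # proper specialization
    "Drink":            {"liquid"},
    "VegetarianPizza":  {"pizza", "noMeat"},
    "ChinesePizza":     {"pizza", "meat"},        # breaches noMeat
}
print(check_subclass(features, "AcademicStaff", "Staff"))           # ok
print(check_subclass(features, "UniversityMember", "Staff"))        # weaker domain
print(check_subclass(features, "Drink", "Staff"))                   # disjoint domain
print(check_subclass(features, "ChinesePizza", "VegetarianPizza"))  # domain breach
```

A real check would of course work on OWL axioms rather than bare feature sets; the sketch only mirrors the classification logic of the three error types.</p>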
        <p>
          These semantic inconsistency checks can equally be applied to the instances of
superclasses and subclasses to verify their conformance with each other.
3.2 Extension in Incompleteness Errors
For powerful reasoning and enhanced inference, OWL provides some tags
that can be associated with the properties of classes [
          <xref ref-type="bibr" rid="ref24">17</xref>
          ]. The OWL functional and
inverse-functional tags associated with properties indicate how many times a domain concept
can be associated with a range concept via a property. Sometimes ontologists do not
give significance to these property tags and do not declare datatype or object
properties as functional or inverse-functional. As a result, the machine cannot reason
about a property effectively, leading to serious complications [
          <xref ref-type="bibr" rid="ref27">20</xref>
          ].
        </p>
        <p>
          Functional Property Omission (FPO) for a single-valued property: According to the
Ontology Definition Metamodel [
          <xref ref-type="bibr" rid="ref24">17</xref>
          ], when there is only one value for a given subject,
the property needs to be declared as functional. The Functional tag can be
associated with both object properties and datatype properties. For example,
hasBlood_Group, an object property between Person and Blood_Group, is a
functional object property: every Person belongs to only one
Blood_Group, so hasBlood_Group should be tagged as functional so that a
person is associated with exactly one blood group. Likewise, functional datatype
properties allow only one range value R for each domain instance D. Ignoring the Functional
tag allows a property to have more than one value, leading to inconsistency. One of the
main reasons for such inconsistency is that the ontologist has overlooked that OWL
by default supports multiple values for datatype and object properties.
Inverse-Functional Property Omission (IFPO) for a unique-valued property:
According to the Ontology Definition Metamodel [
          <xref ref-type="bibr" rid="ref24">17</xref>
          ], an inverse-functional property of an
object determines the subject uniquely, i.e., it acts like a unique key in databases. This
means that if we state P as an owl:InverseFunctionalProperty, then
for a single value x there can be only one subject, i.e., there cannot exist two different
instances y and z such that both pairs (y, x) and (z, x) are valid instances of P. In
OWL Full, a datatype property can be tagged as inverse-functional because
datatype properties are a subclass of object properties, but in OWL DL a datatype property
cannot be tagged as inverse-functional because object properties and
datatype properties are disjoint. An example of an inverse-functional object property is
National_SecurityNo, which belongs to a Person and uniquely identifies that Person.
Omitting the inverse-functional tag on the property National_SecurityNo creates
inconsistency within the ontology due to the incomplete specification of the concept. We
consider such a lack of information an error, because it prevents the machine
from inferring and reasoning about concepts uniquely.
        </p>
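        <p>A sketch of how FPO/IFPO violations could be surfaced on instance data is shown below (our illustration; we assume the instance data is available as plain (subject, property, object) tuples rather than a real OWL store):

```python
# Sketch (ours): report violations that arise when a property that
# should be functional (at most one value per subject) or
# inverse-functional (at most one subject per value) is multi-valued.

from collections import defaultdict

def violations(triples, functional=(), inverse_functional=()):
    by_subject = defaultdict(set)
    by_object = defaultdict(set)
    for s, p, o in triples:
        by_subject[(s, p)].add(o)
        by_object[(p, o)].add(s)
    errors = []
    for (s, p), objs in by_subject.items():
        if p in functional and len(objs) > 1:
            errors.append(("FPO", s, p, sorted(objs)))
    for (p, o), subs in by_object.items():
        if p in inverse_functional and len(subs) > 1:
            errors.append(("IFPO", p, o, sorted(subs)))
    return errors

triples = [
    ("alice", "hasBlood_Group", "A+"),
    ("alice", "hasBlood_Group", "B-"),       # two blood groups: invalid
    ("alice", "National_SecurityNo", "123"),
    ("bob",   "National_SecurityNo", "123"), # shared unique key: invalid
]
print(violations(triples,
                 functional={"hasBlood_Group"},
                 inverse_functional={"National_SecurityNo"}))
# reports one FPO and one IFPO violation
```
</p>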
        <p>
          Sufficient Knowledge Omission Error (SKO): An ontology comprises concepts and
properties that can be arranged in hierarchies. The concepts in these hierarchies should
possess enough features that an inference engine can distinguish them appropriately.
According to the principles of Description Logic, there should be a Necessary description
and a Sufficient description associated with each concept [
          <xref ref-type="bibr" rid="ref21 ref43">14</xref>
          ]. Necessary description
rules define the basic criteria by which a new concept is formed via the subclass-of relation,
and a Sufficient description defines the concept in terms of other concepts, as a
self-description, using intersection, union, complement or restriction axioms in OWL
[
          <xref ref-type="bibr" rid="ref22">15</xref>
          ]. Sometimes during ontology design, ontologists define concepts but do not
provide their Sufficient descriptions. As a result, the machine cannot reason about them
properly and cannot use them effectively to achieve the goals of the semantic web.
        </p>
        <p>
          Finding incompleteness in ontologies automatically is a difficult task. One
possible way to detect such incompleteness errors is to evaluate the ontology on test data
[
          <xref ref-type="bibr" rid="ref11 ref33 ref4">4</xref>
          ] (both valid and invalid) generated according to the tester's domain
knowledge [
          <xref ref-type="bibr" rid="ref29">22</xref>
          ], experience with similar concepts, and information about the soft spots of the
ontology.
3.3 Extension in Redundancy Errors
While detecting disjoint knowledge omission in ontologies and generating warnings on
its omission [
          <xref ref-type="bibr" rid="ref22">15</xref>
          ], we detected redundancy of disjoint relations in ontologies. The
following subsection provides detail on it.
        </p>
        <p>
          Redundancy of Disjoint Relation (RDR) Error: Redundancy of a disjoint relation
occurs when a concept is explicitly defined as disjoint with other concepts more
than once (Noshairwan, 2007a). By Description Logic rules [
          <xref ref-type="bibr" rid="ref21 ref43">14</xref>
          ], if a concept is
disjoint with another concept then it is also disjoint with that concept's subconcepts. One
possible way RDR occurs is when a concept is explicitly defined as disjoint
with a parent concept and also with its child concepts. For example, the concept Male is
defined as disjoint with Female and also with the subconcepts of Female. This type of
redundancy can arise through direct disjointness (directly declared) and indirect
disjointness (a concept is disjoint with another because its parent is disjoint with it).
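The RDR check can be sketched as follows (our formulation, assuming a single-parent hierarchy and disjoint axioms stored as unordered pairs):

```python
# Sketch (ours): a disjoint axiom is redundant when disjointness already
# follows from an ancestor, e.g. Male declared disjoint with both Female
# and Female's subclass Girl. "parent" gives the superclass of a concept.

def ancestors(c, parent):
    while parent.get(c):
        c = parent[c]
        yield c

def redundant_disjoints(disjoints, parent):
    redundant = set()
    for pair in disjoints:
        a, b = tuple(pair)
        for x, y in ((a, b), (b, a)):
            if any(frozenset({x, anc}) in disjoints
                   for anc in ancestors(y, parent)):
                redundant.add(pair)
    return redundant

parent = {"Girl": "Female"}
disjoints = {frozenset({"Male", "Female"}), frozenset({"Male", "Girl"})}
print(redundant_disjoints(disjoints, parent))
# flags the Male/Girl axiom: it is implied by the Male/Female one
```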
There are many other approaches to ontology evaluation, but there is still a big gap
that needs to be filled for sound semantic web ontologies. The standard ontology
evaluation approach by Maedche and Staab [
          <xref ref-type="bibr" rid="ref20 ref42">13</xref>
          ] is to compare the ontology with a gold
standard ontology to evaluate its lexical and vocabulary level. Besides
comparison with a gold standard, Brewster et al. [
          <xref ref-type="bibr" rid="ref11 ref33 ref4">4</xref>
          ] proposed a corpus- or data-driven
ontology evaluation approach: comparing the ontology with a corpus of
domain knowledge provides a measure of the fit between them and highlights the
terms that are present or absent in the ontology and the corpus. The context-level evaluation
approach takes a larger collection of ontologies as a reference for the
evaluation of a particular ontology [
          <xref ref-type="bibr" rid="ref29">22</xref>
          ]. The library of ontologies, or the context for
evaluation provided by the knowledge engineer, acts as the reference to follow. Another
approach to ontology evaluation is to observe the results of the application or
task in which the ontology is being used. Prozel and Malanka [
          <xref ref-type="bibr" rid="ref25">18</xref>
          ] proposed such a
task-based approach for ontology evaluation, but it cannot be very effective, as the
ontology acts only as a backbone and several other issues of the task itself can produce bad
results. Burton-Jones [
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ] defined semiotic metrics based on different criteria for
ontology assessment, covering syntactic and lexical/vocabulary evaluation. Likewise, Fox et
al. [
          <xref ref-type="bibr" rid="ref15 ref37">8</xref>
          ] devised a set of parameters, but these are more useful for manual assessment of
the quality of an ontology. These ontology evaluation approaches are useful in different
applications, scenarios and environments [
          <xref ref-type="bibr" rid="ref10 ref3 ref32">3</xref>
          ], and a suitable methodology
should be adopted according to the ontology's usage.
5 Conclusion
Ontology-driven architecture has revolutionized inference systems by allowing
interoperability between heterogeneous multi-vendor systems. We have observed
that accurate, error-free ontologies enable more interoperability, improve the
accuracy of ontology mapping and merging, and lessen human intervention during this
process. We have discussed existing ontological errors and identified new types of
errors present in ontologies. We conclude that without the identification and
removal of these errors, the most desirable goal of ontology mapping and merging
cannot be achieved. We have integrated the overall work on ontology evaluation
based on design principles and anomalies under one framework. This framework acts
as a control mechanism that helps ontologists build accurate ontologies that serve the
desired applications best, provide better reasoning support, lessen user intervention
in efficient ontology merging, and make possible the combined use of independently
developed online ontologies.
        </p>
        <p>Claire Laudy1,2 and Jean-Gabriel Ganascia2
1 THALES Research &amp; Technology, Palaiseau, France
2 ACASA, Laboratoire d'Informatique de Paris 6, Paris, France
Abstract. On the one hand, Conceptual Graphs are widely used in
natural language processing systems. On the other hand, the information fusion
community lacks tools and methods for knowledge representation.
Using natural language processing techniques for information fusion is a
new field of interest in the fusion community. Our aim is to take
advantage of both communities and propose a framework for high-level
information fusion. The Conceptual Graphs model contains aggregation
operators such as join and maximal join. This paper is dedicated to the
extension of the maximal join operator in order to manage
heterogeneous information fusion. Domain knowledge has to be injected into the
maximal join operation in order to satisfy the constraints of fusion. The
extension relies on relaxing the equality constraint on observations and
on using fusion strategies. A case study illustrates our proposition and
we describe the experiments that we conducted in order to validate
our approach.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Introduction</title>
        <p>
          The first step of the decision-making process is to gather information from
which to elaborate a decision. Such a process is difficult, as information is
distributed across various sources and different media. Many studies concern
the fusion of either low-level data or data expressed through the same medium.
Our aim is to concentrate on high-level and heterogeneous information fusion.
Even if some papers report on how to use ontologies to store domain
knowledge ([
          <xref ref-type="bibr" rid="ref1 ref30 ref8">1</xref>
          ]), the Information Fusion community lacks techniques able to model
knowledge. The objective of our work is thus to propose an approach and a
framework dedicated to high-level and heterogeneous information fusion. By
high-level information, we mean that our aim is to manipulate semantic objects.
        </p>
        <p>
          Conceptual graphs [
          <xref ref-type="bibr" rid="ref2 ref31 ref9">2</xref>
          ] are a widely used formalism for knowledge
representation. The advantages of using graph structures, and particularly the conceptual
graphs model, to represent information have been stated in [
          <xref ref-type="bibr" rid="ref10 ref3 ref32">3</xref>
          ]. The authors
explain how criminal intelligence information and models can effectively be stored
as conceptual graphs. We propose to take advantage of this representation and
go further by using the same model for information fusion. Using the same model
for both information representation and information fusion has a major
advantage: it removes the bias due to translation from one formalism
to another when distinct models are used.
        </p>
        <p>Among all the operators defined on conceptual graph
structures, we are particularly interested in the maximal join. The maximal join allows
the fusion of two graphs that are not strictly identical. We propose to use it in
order to fuse different descriptions of a single object of the real world. The maximal
join must nevertheless be extended. Domain knowledge is widely used in the
information fusion community in order to solve conflicts during fusion.
Therefore, we propose to introduce some domain knowledge into the maximal join
operation.</p>
        <p>
          Section 2 presents related work as well as the case study that we use to
illustrate our proposition. The use of the conceptual graphs formalism for fusion
is described in section 3; in particular, we detail the suitability
of the maximal join operator for high-level information fusion. Section 4 details our
proposed extension of the maximal join, which relies on the use of
external fusion strategies detailed in the same section. We describe in section 5
the experiments that we conducted on the case study in order to validate
our approach. We then conclude and present future work.
2
Our aim is to use the output of intelligent sensors as input observations for
our system. For textual information, these intelligent sensors are systems able to
analyze the meaning of texts and store it as machine-readable information. As
conceptual graphs were initially developed in order to analyze natural language,
many studies exist ([
          <xref ref-type="bibr" rid="ref11 ref33 ref4">4</xref>
          ], [
          <xref ref-type="bibr" rid="ref12 ref34 ref5">5</xref>
          ], [
          <xref ref-type="bibr" rid="ref13 ref35 ref6">6</xref>
          ]), aiming at transforming textual information
items into conceptual graphs. Considering other media, studies such as [
          <xref ref-type="bibr" rid="ref14 ref36 ref7">7</xref>
          ] and
[
          <xref ref-type="bibr" rid="ref15 ref37">8</xref>
          ] have been carried out. They aim at automatically analyzing images and videos
and storing the resulting descriptions as conceptual graphs. Finally, as stated in
[
          <xref ref-type="bibr" rid="ref16 ref38">9</xref>
          ] and [
          <xref ref-type="bibr" rid="ref17 ref39">10</xref>
          ] conceptual graphs are widely used to formalize several domains of
knowledge as different as biomedical risks or corporate modeling. Therefore, we
use conceptual graphs for knowledge representation. Furthermore, we propose to
go beyond the usual use of conceptual graphs and take advantage of conceptual
graphs operators for information fusion.
        </p>
        <p>
          The information fusion community is more involved in studies aiming at
fusing low-level data. The use of techniques and methods from natural
language processing is a new field of interest in the fusion community (see [
          <xref ref-type="bibr" rid="ref18 ref40">11</xref>
          ]
and [
          <xref ref-type="bibr" rid="ref19 ref41">12</xref>
          ] for instance). People are looking at how to use ontologies to model a domain.
We claim that conceptual graphs are a good candidate for information fusion,
since the formalism contains the maximal join operator and its structures are
easily understandable.
The approach that we propose can be applied to any domain for which a model
can be drawn a priori and stored as an ontology. In order to validate it on real
data, we used a real-world case study that concerns TV program descriptions.
The purpose is to fuse descriptions given by different sources. Our aim is to
obtain more complete and precise descriptions of the TV programs and thus a
better scheduling of the programs.
        </p>
        <p>Our first source of information (called the DVB stream) is the live stream of
metadata associated with the video stream on the TNT (Télévision Numérique
Terrestre). The DVB stream gives descriptions of TV programs containing
schedule and title information. It is very precise about the begin and end times of
programs and delivers information about the technical characteristics of the audio
and video streams.</p>
        <p>The second source of information is an online TV magazine. The descriptions
contain information about the scheduling of the programs, their titles and the
channels on which they are scheduled. They also contain more details about the
contents (summary of the program, category, list of actors and presenters, etc.).</p>
      </sec>
      <sec id="sec-3-6">
        <title>Using Conceptual Graphs for Information Fusion</title>
        <p>
          Conceptual Graphs [
          <xref ref-type="bibr" rid="ref2 ref31 ref9">2</xref>
          ] is a formalism particularly well suited to representing
knowledge in a media- and source-independent way. We briefly introduce the way we
use it for information fusion.
        </p>
        <p>Fig. 1. Type hierarchy for TV programs</p>
        <p>Defining the domain model is the first step of the fusion process. First, the
ontology of the domain is defined. Figure 1 depicts a subset of the type
hierarchy that was defined for the TV program case study. Then, the set of situations
that are expected to happen are formulated through the canonical basis.
Potential interactions between the entities (defined as concepts and relations in the
ontology) are represented using conceptual graph structures. Figure 2 shows an
example of an abstract canonical graph. It describes the model of a TV program.</p>
        <p>After defining the domain model, we automatically acquire the observations
in the conceptual graph formalism. Figure 3 shows examples of observations that
were made on the DVB stream and the telepoche.fr website and stored as conceptual
graphs.</p>
        <p>Fig. 2. TV Program Model</p>
        <p>Fig. 3. Observations on DVB stream and telepoche.fr</p>
        <p>The maximal join is a major function in the process of fusing conceptual graph
structures. Two compatible sets of concepts from two different conceptual graphs
are merged into a single one. There may be several possibilities of fusion between
two observations, according to which combinations of observed items are fused or
not. This phenomenon is well managed by the maximal join operator, as joining
two graphs maximally results in a set of graphs, each of them being a fusion
hypothesis.</p>
      </sec>
      <sec id="sec-3-7">
        <title>Towards a Framework for Information Fusion</title>
        <sec id="sec-3-7-1">
          <title>Extending Maximal Join operator</title>
          <p>Maximal join is a fusion operator which has to be modified in order to
manage observations coming from different sensors. These observations may depict
different points of view or different levels of detail and abstraction. The values
of the concepts may differ even while representing several observations of the
same object.</p>
          <p>Figure 4 gives an example of such a case. The maximal join of the two graphs
G1 and G2 results in G3. The two concepts [Date: "2006.11.27.06.45.00"] and
[Date: "2006.11.27.06.47.54"] cannot be joined using the standard maximal join
operator, as their values are different. However, because we know the domain that
is modeled here, we have clues to say that the two concepts still represent the
same entity in the real world: a TV program has only one begin time, and there
are often slight differences between the times given by different sources.</p>
          <p>Fig. 4. Limitation of maximal join</p>
          <p>Fusion heuristics must therefore be added to the maximal join operation. The notion of
compatibility between concepts is extended from compatible conceptual types to
compatible referents and individual values. The domain knowledge necessary for
this extension is stored as compatibility rules that are called fusion strategies.
As explained before, the notion of compatibility between concepts in the maximal
join operation has to be extended in order to support information fusion. Real
data is noisy, and knowledge about the domain is often needed in order to fuse two
different but compatible values into a single one. Therefore, we introduce the
notion of fusion strategies. They are rules encoding domain knowledge and fusion
heuristics. We use them to compute the fused value of two different observations
of the same object. On the one hand, the fusion strategies extend the notion
of compatibility that is used in the maximal join operation: according to a
fusion strategy, two entities with two different values may be compatible and
thus fusable. On the other hand, the strategies encompass functions that give
the result of the fusion of two compatible values.</p>
          <p>Fusion strategies, integrating domain knowledge and operator preferences,
are the intelligent part of our fusion system. These strategies are implemented
as IF &lt;conditions&gt; THEN &lt;fused-value&gt; rules. They take conceptual
graphs and conditions on their concepts as premises. The conclusion is a
conceptual graph that integrates functions defining the values and referents of its
concepts.</p>
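          <p>As an illustration, a date-fusion rule of this kind, with a five-minute compatibility window and the earliest value kept for a begin date, might be sketched as below (the function names are ours, not the platform's):

```python
# Sketch (ours) of a fusion-strategy rule in the IF <conditions>
# THEN <fused-value> spirit: two dates are compatible when they are
# close enough, and the fused value depends on the kind of date.

from datetime import datetime, timedelta

def dates_compatible(d1, d2, window=timedelta(minutes=5)):
    return abs(d1 - d2) <= window

def fuse_dates(d1, d2, kind):
    """Fused value: earliest for a begin date, latest for an end date."""
    if not dates_compatible(d1, d2):
        return None              # incompatible: no fusion at this concept
    return min(d1, d2) if kind == "begin" else max(d1, d2)

dvb = datetime(2006, 11, 27, 6, 47, 54)   # begin time from the DVB stream
mag = datetime(2006, 11, 27, 6, 45, 0)    # begin time from the TV magazine
print(fuse_dates(dvb, mag, "begin"))      # 2006-11-27 06:45:00
```
</p>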
        </sec>
      </sec>
      <sec id="sec-3-8">
        <title>Validation</title>
        <p>
          We implemented a fusion platform based on the proposed approach. The
platform was developed in Java and uses the AMINE platform ([
          <xref ref-type="bibr" rid="ref20 ref42">13</xref>
          ]) as a service
provider for conceptual graph definitions and basic manipulations. The fusion
strategies are rules that were implemented as independent Java classes.
        </p>
        <sec id="sec-3-8-1">
          <title>Experimentation</title>
          <p>As detailed before, the domain that we chose in order to validate our proposition
concerns TV program descriptions. The aim is to obtain as many TV program
descriptions as possible for the TV programs scheduled on a TV channel
during one day. Furthermore, these descriptions should be as precise as possible
with regard to the programs that were effectively played on the channel.</p>
          <p>
            In order to compare the result of the fusion to the programs that were really
broadcast, we collected TV program descriptions from the INAthèque. The INA,
Institut National de l'Audiovisuel ([
            <xref ref-type="bibr" rid="ref21 ref43">14</xref>
            ]), collects the descriptions of all the
programs that have been broadcast on French TV and radio. The exact begin
and end times of the different programs are recorded. First, we check whether a
fused program corresponds to the program that was really played. Second, we
compare the times that were computed by fusion to the real diffusion times.
          </p>
          <p>During one day, we requested the two sources of information every 5 minutes
for the next scheduled program on one channel. The two provided TV
program descriptions are then fused using one of the fusion strategies. Once the
fusion is done, we make sure that the description follows the general model for
TV program descriptions. For instance, if the program has two different titles,
it means that the fusion failed and the resulting description is rejected.</p>
          <p>The well-formed descriptions are then compared to the reference data. If they
are compatible, the fused program description is considered to be correctly found
with regard to reality. If the description is either badly formed or any part of
the description does not correspond to the reference data, we consider that the
program was not correctly found. For correctly found program descriptions, we
then compare the computed begin and end times to the real ones.</p>
          <p>We measured the quality of the fusion obtained using different
strategies. To this end, we ran our experiments with the fusion platform, first
with no strategy and then with three different ones. The first
experiment (no fusion strategy) is equivalent to using the standard maximal join operator for
information fusion. The three fusion strategies are the following:
Strategy 1 extends date compatibility. Two dates are compatible if the
difference between them is less than five minutes. If two dates are compatible
but different, the fused date is the earliest one if it is a "begin date"
and the latest one otherwise.</p>
          <p>Strategy 2 extends date and title compatibility. The date compatibility is
the same as in strategy 1. Two titles are compatible if one of them
is contained in the other one, after removing typography clues (upper cases,
punctuation marks, ...).</p>
          <p>Strategy 3 extends date and title compatibility. The date compatibility is
the same as in strategy 1. Two titles are compatible if the total
length of the common substrings between the two exceeds a given length, after
removing typography clues.</p>
        </sec>
        <sec id="sec-3-8-2">
          <title>Results</title>
          <p>We present here the results that we obtained during our experiments. We
first looked at the percentage of programs that were correctly found according
to the different strategies that we used. Figure 5 shows the results we obtained
on a representative selection of TV channels.</p>
          <p>Fig. 5. Percentage of programs correctly fused and identified with different strategies</p>
          <p>As expected, we can see that the fusion of observations using the maximal
join operation only is not sufficient. Only the descriptions with strictly identical
values are fused. There is too much noise in real data for a fusion process that
doesn’t take into account some knowledge about the domain. Therefore, the
three previously cited fusion strategies were applied. The more the compatibility
constraints between two values are relaxed, the better the results are. This is
expected, as it is equivalent to injecting more and more knowledge about the domain
and about the general behavior of objects in the external world.</p>
          <p>A second interpretation of our results consisted in observing the time
lag between the fused descriptions and the reference ones. Figures 6 and 7 give
examples of the results obtained on two different channels. Each point represents
a program and is located in the grid according to the difference between the fused
begin and end times and the real broadcasted times. On Figure 6 only three
points are visible. Actually, only two programs were badly guessed and all the
others are represented by the point with coordinates (0,0). On Figure 7 we can
see that almost all the programs are starting after the fused begin time. This
seems to be due to the fact that advertisement is scheduled at the beginning of
the time slots dedicated to each TV program.</p>
          <p>Fig. 6. Time lag between fused and broadcasted time on France 4 channel</p>
          <p>Fig. 7. Time lag between fused and broadcasted time on TF1 channel</p>
          <p>The different experiments that we carried out showed that the quality of
the fusion process is very heterogeneous, depending on several parameters. First of
all, it depends on the channel on which the observations are made. Some channels
broadcast their programs almost always at the scheduled time, so the observations
from both sources are identical and coherent with reality. However, most
channels do not follow this rule. The time of day when the observation
is made is important as well, as is the specificity of the channel. For less popular
channels and at times of low audience, we observed many errors in the programs
given by the TV magazine.</p>
        </sec>
      </sec>
      <sec id="sec-3-9">
        <title>Conclusion</title>
        <p>This paper proposes to use the conceptual graphs model for information
representation and fusion. Using the same model for both purposes avoids the bias due
to the translation from one formalism to another. We detailed the extension
that we propose for the maximal join operator. This extension allows the fusion of
observations that are not strictly identical. It is based on the use of domain knowledge
to relax the constraints when aggregating concepts. The standard maximal join
is based only on structure and type compatibility. The extended version
introduces the notion of fusion strategy. Fusion strategies are rules that allow a
domain-dependent notion to be added to the fusion process. A case study was developed
in order to illustrate and validate our approach on real data.</p>
        <p>The first results of our study are promising, as we showed that the use of the
maximal join operation is relevant for information fusion. The operator must
nevertheless be enriched with domain knowledge in order to be useful on real
data, which are noisy.</p>
        <p>Current and future work will first deal with the study and improvement of
the fusion strategies. In particular, we will focus on the use of the reliability of
the information sources. Then, we will develop strategies that take the context
of observation into account.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Sowa</surname>
          </string-name>
          , J.
          <source>Conceptual Structures - Inform. Processing in Mind and Machine</source>
          . Reading,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Chein, M. and Mugnier, M.-L.: Conceptual Graphs: Fundamental Notions. Revue d'Intelligence Artificielle, Vol. 6, no. 4, 1992, pp. 365-406.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Mugnier, M.-L. and Chein, M.: Polynomial Algorithms for Projection and Matching. In: 7th Annual Workshop on Conceptual Graphs (AWCG'92), 1992, pp. 49-58.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Mugnier, M.-L.: On Generalization/Specialization for Conceptual Graphs. Journal of Experimental and Theoretical Artificial Intelligence, Vol. 7, 1995, pp. 325-344.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. Baget, J.-F. and Mugnier, M.-L.: Extensions of Simple Conceptual Graphs: The Complexity of Rules and Constraints. JAIR, Vol. 16, 2002, pp. 425-465.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Hopcroft, J. and Ullman, J.: Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading, MA, 1979.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Daciuk, J., Mihov, S., Watson, B., and Watson, R.: Incremental Construction of Minimal Acyclic Finite State Automata. Computational Linguistics, Vol. 26, Issue 1, 2000, pp. 3-16.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          1.
          <string-name>
            <surname>Antoniou</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Harmelen</surname>
            ,
            <given-names>F.V.</given-names>
          </string-name>
          <year>2004</year>
          .
          <article-title>A Semantic Web Primer</article-title>
          . MIT Press Cambridge, ISBN 0-262-01210-3
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>2. Baumeister, J., and Seipel, D.S. 2005. Owls - Design Anomalies in Ontologies. 18th Intl. Florida Artificial Intelligence Research Society Conference (FLAIRS), pp. 251-220.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>3. Brank, J., et al. 2005. A Survey of Ontology Evaluation Techniques. In: multiconference IS 2005 (SIKDD), Ljubljana, Slovenia.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          4.
          <string-name>
            <surname>Brewster</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          et al.
          <year>2004</year>
          .
          <article-title>Data driven ontology evaluation</article-title>
          .
          <source>Proceedings of Intl. Conf. on Language Resources and Evaluation</source>
          , Lisbon.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>5. Burton-Jones, A., et al. 2004. A Semiotic Metrics Suite for Assessing the Quality of Ontologies. Data and Knowledge Engineering.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>6. Fahad, M., Qadir, M.A., Noshairwan, W., Iftikhar, N. 2007a. DKP-OM: A Semantic Based Ontology Merger. In: Proc. 3rd International Conference on Semantic Technologies (I-Semantics), 5-7 September 2007, Journal of Universal Computer Science (J.UCS).</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>7. Fahad, M., Qadir, M.A., Noshairwan, W. 2007b. Semantic Inconsistency Errors in Ontologies. Proc. of GRC 07, Silicon Valley, USA. IEEE CS, pp. 283-286.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>8. Fox, M.S., et al. 1998. An Organization Ontology for Enterprise Modelling. In: M. Prietula et al. (eds.), Simulating Organizations, MIT Press.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          9.
          <string-name>
            <surname>Gomez-Perez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>1994</year>
          .
          <article-title>Some ideas and examples to evaluate ontologies</article-title>
          .
          <source>KSL</source>
          , Stanford University.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>10. Gomez-Perez, A., Lopez, M.F., and Garcia, O.C. 2001. Ontological Engineering: With Examples from the Areas of Knowledge Management, E-Commerce and the Semantic Web. Springer, ISBN 1-85233-551-3.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          11.
          <string-name>
            <surname>Gomez-Perez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , et al.
          <year>1999</year>
          .
          <article-title>Evaluation of Taxonomic Knowledge on Ontologies and Knowledge-Based Systems</article-title>
          . Intl. Workshop on Knowledge Acquisition, Modeling and Management.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>12. Jelmini, C., and Marchand-Maillet, S. 2004. OWL-based Reasoning with Retractable Inference. In: RIAO Conference Proceedings 2004.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>13. Maedche, A., Staab, S. 2002. Measuring Similarity between Ontologies. Proc. CIKM 2002, LNAI Vol. 2473.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>14. Nardi, D., et al. 2000. The Description Logic Handbook: Theory, Implementation, and Applications.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>15. Noshairwan, W., Qadir, M.A., Fahad, M. 2007a. Sufficient Knowledge Omission Error and Redundant Disjoint Relation in Ontology. In: Proc. 5th Atlantic Web Intelligence Conference, June 25-27, 2007, Fontainebleau, France.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          16.
          <string-name>
            <surname>Noshairwan</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Qadir</surname>
            <given-names>M.A.</given-names>
          </string-name>
          <year>2007b</year>
          .
          <article-title>Algorithms to Warn Against Incompleteness Errors in Ontology Evaluation</article-title>
          .
          <source>1st AISPC Jan</source>
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          17. Ontology Definition Metamodel
          <year>2005</year>
          . Second Revised Submission to OMG/RDF
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          18.
          <string-name>
            <surname>Porzel</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malaka</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <year>2004</year>
          .
          <article-title>A task-based approach for ontology evaluation</article-title>
          .
          <source>ECAI 2004 Workshop Ont. Learning and Population.</source>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>19. Qadir, M.A., Noshairwan, W. 2007a. Warnings for Disjoint Knowledge Omission in Ontologies. Second International Conference on Internet and Web Applications and Services (ICIW'07). IEEE, p. 45.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>20. Qadir, M.A., Fahad, M., Shah, S.A.H. 2007b. Incompleteness Errors in Ontologies. Proc. of Intl. GRC 07, USA. IEEE Computer Society, pp. 279-282.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          21.
          <string-name>
            <surname>Qadir</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fahad</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Noshairwan</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          <year>2007c</year>
          .
          <article-title>On Conceptualization Mismatches in Ontologies</article-title>
          .
          <source>Proc. of GRC 07</source>
          , USA. IEEE CS. pp
          <fpage>275</fpage>
          -
          <lpage>279</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          22.
          <string-name>
            <surname>Supekar</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>A peer-review approach for ontology evaluation</article-title>
          .
          <source>Proc. 8th Intl. Protégé Conference</source>
          , Madrid, Spain,
          <source>July 18-21</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          1.
          <string-name>
            <given-names>C.</given-names>
            <surname>Matheus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kokar</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Baclawski</surname>
          </string-name>
          .
          <article-title>A Core Ontology for Situation Awareness</article-title>
          .
          <source>6th International Conference on Information Fusion</source>
          , Cairns, Queensland, Australia,
          <year>2003</year>
          , pp.
          <fpage>545</fpage>
          -
          <lpage>552</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>2. J.F. Sowa, Conceptual Structures: Information Processing in Mind and Machine. Addison-Wesley, Reading, MA, 1984.</mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          3.
          <string-name>
            <given-names>R. N.</given-names>
            <surname>Reed</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Kocura</surname>
          </string-name>
          ,
          <source>Conceptual Graphs based Criminal Intelligence Analysis</source>
          , in Contributions to 13th
          <source>International Conference on Conceptual Structures</source>
          ,
          <year>2005</year>
          , pp.
          <fpage>146</fpage>
          -
          <lpage>149</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>4. P. Zweigenbaum and J. Bouaud, Construction d'une représentation sémantique en Graphes Conceptuels à partir d'une analyse LFG. 4ème Conférence sur le Traitement Automatique des Langues Naturelles, Grenoble, France, 1997, pp. 30-39.</mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>5. J. Villaneau, J.-Y. Antoine, and O. Ridoux, LOGUS : un système formel de compréhension du français parlé spontané - présentation et évaluation. 9ème Conférence sur le Traitement Automatique des Langues Naturelles, Nancy, France, 2002, pp. 165-174.</mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>6. M. Montes-y-Gomez, A. Gelbukh, A. Lopez-Lopez, Text Mining at Detail Level Using Conceptual Graphs. 10th International Conference on Conceptual Structures, Borovets, Bulgaria, 2002, pp. 122-136.</mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>7. P. Mulhem, W.K. Leow, and Y.K. Lee, Fuzzy Conceptual Graphs for Matching Images of Natural Scenes. 17th International Joint Conference on Artificial Intelligence, Seattle, Washington, USA, 2001, pp. 1397-1404.</mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>8. M. Charhad, Modèles de Documents Vidéo basés sur le Formalisme des Graphes Conceptuels pour l'Indexation et la Recherche par le Contenu Sémantique. Thèse de l'Université J. Fourier, Grenoble, 2005.</mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>9. F. Volot, M. Joubert, and M. Fieschi, Knowledge and Data Representation with Conceptual Graphs for Biomedical Information Processing: A Review. Methods Inf Med, No. 37, pp. 86-96, 1998.</mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          10.
          <string-name>
            <given-names>O.</given-names>
            <surname>Gerbe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Guay</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Perron</surname>
          </string-name>
          ,
          <article-title>Using Conceptual Graphs for Methods Metamodeling</article-title>
          , 4th International Conference on Conceptual Structures, Bondi Beach, Sydney, Australia,
          <year>1996</year>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>175</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>11. F. Deloule, D. Beauchêne, P. Lambert, B. Ionescu, Data Fusion for the Management of Multimedia Documents. 10th International Conference on Information Fusion, Quebec, Canada, 2007.</mixed-citation>
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>12. M. Gagnon, Ontology-based Integration of Data Sources. 10th International Conference on Information Fusion, Quebec, Canada, 2007.</mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>13. AMINE Platform: http://amine-platform.sourceforge.net/</mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>14. INAthèque: http://www.ina.fr/archives-tele-radio/universitaires/index.html</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>