<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <fpage>48</fpage>
      <lpage>89</lpage>
      <abstract>
        <p>This volume contains the papers presented at the 5th International Workshop on Uncertainty Reasoning for the Semantic Web (URSW 2009), held as a part of the 8th International Semantic Web Conference (ISWC 2009) at the Westfields Conference Center near Washington, DC, USA, October 26, 2009. It contains 6 technical papers and 3 position papers, which were selected in a rigorous reviewing process, where each paper was reviewed by at least four program committee members. The International Semantic Web Conference is a major international forum for presenting visionary research on all aspects of the Semantic Web. The International Workshop on Uncertainty Reasoning for the Semantic Web is an exciting opportunity for collaboration and cross-fertilization between the uncertainty reasoning community and the Semantic Web community. Effective methods for reasoning under uncertainty are vital for realizing many aspects of the Semantic Web vision, but the ability of current-generation Web technology to handle uncertainty is extremely limited. Recently, there has been a groundswell of demand for uncertainty reasoning technology among Semantic Web researchers and developers. This surge of interest creates a unique opening to bring together two communities with a clear commonality of interest but little history of interaction. By capitalizing on this opportunity, URSW could spark dramatic progress toward realizing the Semantic Web vision. 
Audience: The intended audience for this workshop includes the following: (1) researchers in uncertainty reasoning technologies with interest in the Semantic Web and Web-related technologies; (2) Semantic Web developers and researchers; (3) people in the knowledge representation community with interest in the Semantic Web; (4) ontology researchers and ontological engineers; (5) Web services researchers and developers with interest in the Semantic Web; and (6) developers of tools designed to support Semantic Web implementation, e.g., Jena, Protégé, and Protégé-OWL developers.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Topics</title>
      <p>We intended to have an open discussion on any topic relevant to the general
subject of uncertainty in the Semantic Web (including fuzzy theory, probability
theory, and other approaches). Therefore, the following list should be just an initial guide:
(1) syntax and semantics for extensions to Semantic Web languages to enable
representation of uncertainty; (2) logical formalisms to support uncertainty in Semantic Web
languages; (3) probability theory as a means of assessing the likelihood that terms in
different ontologies refer to the same or similar concepts; (4) architectures for applying
plausible reasoning to the problem of ontology mapping; (5) using fuzzy approaches to
deal with imprecise concepts within ontologies; (6) the concept of a probabilistic
ontology and its relevance to the Semantic Web; (7) best practices for representing uncertain,
incomplete, ambiguous, or controversial information in the Semantic Web; (8) the role
of uncertainty as it relates to Web services; (9) interface protocols with support for
uncertainty as a means to improve interoperability among Web services; (10)
uncertainty reasoning techniques applied to trust issues in the Semantic Web; (11) existing
implementations of uncertainty reasoning tools in the context of the Semantic Web;
(12) issues and techniques for integrating tools for representing and reasoning with
uncertainty; and (13) the future of uncertainty reasoning for the Semantic Web.</p>
    </sec>
    <sec id="sec-2">
      <title>Acknowledgments</title>
      <p>We wish to thank all authors who submitted papers and all workshop participants for the fruitful discussions. We also thank the program committee members and external referees for their timely and careful reviews of the submissions.</p>
    </sec>
    <sec id="sec-3">
      <title>October 2009</title>
    </sec>
    <sec id="sec-4">
      <title>Fernando Bobillo</title>
    </sec>
    <sec id="sec-5">
      <title>Paulo C. G. da Costa</title>
    </sec>
    <sec id="sec-6">
      <title>Claudia d’Amato</title>
    </sec>
    <sec id="sec-7">
      <title>Nicola Fanizzi</title>
    </sec>
    <sec id="sec-8">
      <title>Kathryn B. Laskey</title>
    </sec>
    <sec id="sec-9">
      <title>Kenneth J. Laskey</title>
    </sec>
    <sec id="sec-10">
      <title>Thomas Lukasiewicz</title>
    </sec>
    <sec id="sec-11">
      <title>Trevor Martin</title>
    </sec>
    <sec id="sec-12">
      <title>Matthias Nickles</title>
    </sec>
    <sec id="sec-13">
      <title>Michael Pool</title>
    </sec>
    <sec id="sec-14">
      <title>Pavel Smrž</title>
      <p>Workshop Organization</p>
      <sec id="sec-14-1">
        <title>Program Chairs</title>
      </sec>
    </sec>
    <sec id="sec-15">
      <title>Fernando Bobillo (University of Zaragoza, Spain)</title>
    </sec>
    <sec id="sec-16">
      <title>Paulo C. G. da Costa (George Mason University, USA)</title>
    </sec>
    <sec id="sec-17">
      <title>Claudia d’Amato (University of Bari, Italy)</title>
    </sec>
    <sec id="sec-18">
      <title>Nicola Fanizzi (University of Bari, Italy)</title>
    </sec>
    <sec id="sec-19">
      <title>Kathryn B. Laskey (George Mason University, USA)</title>
    </sec>
    <sec id="sec-20">
      <title>Kenneth J. Laskey (MITRE Corporation, USA)</title>
    </sec>
    <sec id="sec-21">
      <title>Thomas Lukasiewicz (University of Oxford, UK)</title>
    </sec>
    <sec id="sec-22">
      <title>Trevor Martin (University of Bristol, UK)</title>
    </sec>
    <sec id="sec-23">
      <title>Matthias Nickles (University of Bath, UK)</title>
    </sec>
    <sec id="sec-24">
      <title>Michael Pool (Convera, Inc., USA)</title>
    </sec>
    <sec id="sec-25">
      <title>Pavel Smrž (Brno University of Technology, Czech Republic)</title>
      <sec id="sec-25-1">
        <title>Program Committee</title>
      </sec>
    </sec>
    <sec id="sec-26">
      <title>Andreas Tolk (Old Dominion University, USA)</title>
    </sec>
    <sec id="sec-27">
      <title>Johanna Völker (University of Karlsruhe, Germany)</title>
    </sec>
    <sec id="sec-28">
      <title>Peter Vojtáš (Charles University, Czech Republic)</title>
      <sec id="sec-28-1">
        <title>External Reviewers</title>
      </sec>
    </sec>
    <sec id="sec-29">
      <title>Zhiqiang Gao</title>
    </sec>
    <sec id="sec-30">
      <title>Combining Semantic Web Search with the Power of Inductive Reasoning . . . . . .</title>
      <p>Claudia d’Amato, Nicola Fanizzi, Bettina Fazzinga, Georg Gottlob, and
Thomas Lukasiewicz</p>
    </sec>
    <sec id="sec-31">
      <title>Evidential Nearest-Neighbors Classification for Inductive ABox Reasoning . . . .</title>
      <p>Nicola Fanizzi, Claudia d’Amato, and Floriana Esposito</p>
    </sec>
    <sec id="sec-32">
      <title>Ontology Granulation Through Inductive Decision Trees . . . . . . . . . . . . . . . . . . .</title>
      <p>Bart Gajderowicz and Alireza Sadeghian</p>
    </sec>
    <sec id="sec-32a">
      <title>Axiomatic First-Order Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</title>
      <p>Kathryn B. Laskey</p>
    </sec>
    <sec id="sec-33">
      <title>An Algorithm for Learning with Probabilistic Description Logics . . . . . . . . . . . .</title>
      <p>José Eduardo Ochoa Luna and Fabio Gagliardi Cozman</p>
      <sec id="sec-33-1">
        <title>Position Papers</title>
      </sec>
    </sec>
    <sec id="sec-34">
      <title>BeliefOWL: An Evidential Representation in OWL Ontology . . . . . . . . . . . . . . .</title>
      <p>Amira Essaid and Boutheina Ben Yaghlane</p>
      <p>Fuzzy Taxonomies for Creative Knowledge Discovery . . . . . . . . . . . . . . . . . . . . .</p>
      <p>Trevor Martin, Zheng Siyao, and Andrei Majidian</p>
      <p>Uncertainty Reasoning for Linked Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</p>
      <p>Dave Reynolds</p>
      <sec id="sec-34-1">
        <title>Probabilistic Ontology and Knowledge Fusion for</title>
      </sec>
      <sec id="sec-34-2">
        <title>Procurement Fraud Detection in Brazil</title>
        <p>Rommel N. Carvalho1, Kathryn B. Laskey1, Paulo C. G. Costa1, Marcelo Ladeira2,
Laécio L. Santos2, and Shou Matsumoto2,
1 George Mason University</p>
        <p>4400 University Drive</p>
        <p>Fairfax, VA 22030-4400 USA
rommel.carvalho@gmail.com, {klaskey, pcosta}@gmu.edu</p>
        <p>2 University of Brasilia
Campus Universitário Darcy Ribeiro</p>
        <p>Brasilia – DF 70910-900 Brazil</p>
        <p>Abstract. To cope with society’s demand for transparency and corruption
prevention, the Brazilian Office of the Comptroller General (CGU) has carried
out a number of actions, including: awareness campaigns aimed at the private
sector; campaigns to educate the public; research initiatives; and regular
inspections and audits of municipalities and states. Although CGU has collected
information from hundreds of different sources (the Revenue Agency, the Federal
Police, and others), the process of fusing all these data has not been efficient
enough to meet the needs of CGU’s decision makers. Therefore, it is natural to
change the focus from data fusion to knowledge fusion. As a consequence,
traditional syntactic methods must be augmented with techniques that represent
and reason with the semantics of databases. However, commonly used
approaches fail to deal with uncertainty, a dominant characteristic in corruption
prevention. This paper presents the use of Probabilistic OWL (PR-OWL) to
design and test a model that performs information fusion to detect possible
fraud in procurements involving Federal funds. To design this model, a
recently developed tool for creating PR-OWL ontologies was used with support
from PR-OWL specialists and careful guidance from a fraud detection specialist
from CGU.</p>
        <p>1 Introduction</p>
        <p>A primary responsibility of the Brazilian Office of the Comptroller General (CGU) is
to prevent and detect government corruption. To carry out this mission, CGU must
gather information from a variety of sources and combine it to evaluate whether
further action, such as an investigation, is required. One of the most difficult
challenges is the information explosion. Auditors must fuse vast quantities of
information from a variety of sources in a way that highlights its relevance to decision
makers and helps them focus their efforts on the most critical cases. This is no trivial
task. The Growth Acceleration Program (PAC) alone has a budget greater than 250
billion dollars, with more than one thousand projects in the state of São Paulo alone
(http://www.brasil.gov.br/pac/). All of these have to be audited and inspected by CGU,
which has only about three thousand employees. Therefore, CGU must optimize its
processes in order to carry out its mission.</p>
        <p>
          The Semantic Web (SW), like the document web that preceded it, is based on
radical notions of information sharing. These ideas [
          <xref ref-type="bibr" rid="ref1 ref27 ref34 ref43">1</xref>
          ] include: (i) the Anyone can say
Anything about Any topic (AAA) slogan; (ii) the open world assumption, under which
there is always more information that could be known; and (iii) non-unique
naming, which acknowledges that different speakers on the Web may use
different names to refer to the same entity. In a fundamental departure from
assumptions of traditional information systems architectures, the Semantic Web is
intended to provide an environment in which information sharing can thrive and a
network effect of knowledge synergy is possible. But this style of information
gathering can generate a chaotic landscape rife with confusion, disagreement and
conflict.
        </p>
        <p>
          We call an environment characterized by the above assumptions a Radical
Information Sharing (RIS) environment. The challenge facing SW architects is
therefore to avoid the natural chaos to which RIS environments are prone, and move
to a state characterized by information sharing, cooperation and collaboration.
According to [
          <xref ref-type="bibr" rid="ref1 ref27 ref34 ref43">1</xref>
          ], one solution to this challenge lies in modeling, and this is where
ontology languages like the Web Ontology Language (OWL) come in.
        </p>
        <p>As will be shown in Section 3, the domain of procurement fraud detection is a
RIS environment. Moreover, uncertainty is ubiquitous in knowledge fusion.
Uncertainty is especially important to applications such as fraud detection, in which
perpetrators seek to conceal illicit intentions and activities, making crisp assertions
extremely hard and rare. In such environments, partial (not complete) or approximate
(not exact) information is more the rule than the exception.</p>
        <p>
          Bayesian networks (BNs) have been widely applied to inference for
information and knowledge fusion in the presence of uncertainty. However, according
to [
          <xref ref-type="bibr" rid="ref2 ref28 ref35 ref44">2</xref>
          ], BNs are not expressive enough for many real-world applications. More
specifically, BNs assume a simple attribute-value representation – that is, each
problem instance involves reasoning about the same fixed number of attributes, with
only the evidence values changing from problem instance to problem instance.
Complex problems on the scale of the Semantic Web often involve intricate
relationships among many variables, and the limited representational power of BNs is
insufficient for building useful, detailed models.
        </p>
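        <p>To make the attribute-value limitation concrete, consider a minimal two-node BN, Fraud → Indicator, sketched below in Python. The probabilities are purely illustrative (not CGU data), and the node set is fixed in advance, which is exactly the limitation at issue:</p>
        <preformat>
```python
# Minimal fixed-structure BN sketch (illustrative probabilities, not CGU data).
# The node set is fixed; only evidence values change between problem instances.
P_fraud = {True: 0.05, False: 0.95}                 # prior P(Fraud)
P_ind = {True: {True: 0.8, False: 0.2},             # P(Indicator | Fraud)
         False: {True: 0.1, False: 0.9}}

def posterior_fraud(indicator_seen):
    """P(Fraud | Indicator) by enumeration over the fixed joint."""
    joint = {f: P_fraud[f] * P_ind[f][indicator_seen] for f in (True, False)}
    return joint[True] / (joint[True] + joint[False])

print(round(posterior_fraud(True), 3))  # 0.296
```
        </preformat>
        <p>A problem in which one procurement has five indicators and another has two cannot be expressed by this fixed template; that is where MEBN’s repeated structure comes in.</p>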
        <p>
          Multi-Entity Bayesian Network (MEBN) logic can represent and reason with
uncertainty about any propositions that can be expressed in first-order logic [
          <xref ref-type="bibr" rid="ref29 ref3 ref36 ref45">3</xref>
          ].
Probabilistic OWL (PR-OWL) uses MEBN’s strengths to provide a framework for
building probabilistic ontologies (PO), a major step towards semantically aware,
probabilistic knowledge fusion systems [
          <xref ref-type="bibr" rid="ref30 ref37 ref4 ref46">4</xref>
          ]. This paper uses PR-OWL to design and
test a model for fusing information to detect possible fraud in procurements
involving Federal funds.
        </p>
        <p>The paper is organized as follows. Section 2 introduces Multi-Entity Bayesian
Networks (MEBN), an expressive Bayesian logic, and PR-OWL, an extension of the
OWL language that can represent probabilistic ontologies having MEBN as its
underlying logic. Section 3 presents a case study from CGU to demonstrate the power
of PR-OWL ontologies for knowledge representation and fusion. Finally, Section 4
presents some concluding remarks.</p>
        <p>2 MEBN and PR-OWL</p>
        <p>Multi-Entity Bayesian Networks (MEBN) [5, 6] extend BNs to achieve
first-order expressive power. MEBN represents knowledge as a collection of MEBN
Fragments (MFrags), which are organized into MEBN Theories (MTheories).</p>
        <p>An MFrag contains random variables (RVs) and a fragment graph representing
dependencies among these RVs. An MFrag is a template for a fragment of a Bayesian
network. It is instantiated by binding its arguments to domain entity identifiers to
create instances of its RVs. There are three kinds of RV: context, resident and input.
Context RVs represent conditions that must be satisfied for the distributions
represented in the MFrag to apply. Input nodes represent RVs that may influence the
distributions defined in the MFrag, but whose distributions are defined in other
MFrags. Distributions for resident RV instances are defined in the MFrag.
Distributions for resident RVs are defined by specifying local distributions
conditioned on the values of the instances of their parents in the fragment graph.</p>
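        <p>The MFrag structure just described can be sketched as a simple data structure; the class and method names below are our own simplification, not the UnBBayes-MEBN API:</p>
        <preformat>
```python
# Structural sketch of an MFrag (simplified; not the UnBBayes-MEBN API).
from dataclasses import dataclass

@dataclass
class MFrag:
    name: str
    context_rvs: list   # conditions that must hold for the local distributions to apply
    input_rvs: list     # RVs whose distributions are defined in other MFrags
    resident_rvs: list  # RVs whose local distributions are defined here

    def instantiate(self, entity_id):
        """Bind the MFrag's argument to a domain entity identifier,
        creating concrete RV instances such as IndexType(ind1)."""
        return [rv + "(" + entity_id + ")" for rv in self.resident_rvs]

criteria = MFrag(
    name="ProcurementCriteria",
    context_rvs=["isIndexOf(index, proc)"],
    input_rvs=[],
    resident_rvs=["IndexType", "IndexMinValue"],
)
print(criteria.instantiate("ind1"))  # ['IndexType(ind1)', 'IndexMinValue(ind1)']
```
        </preformat>
        <p>Binding the same template to ind2, ind3, and so on yields as many RV instances as the scenario requires, which is the repeated-structure capability plain BNs lack.</p>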
        <p>A set of MFrags represents a joint distribution over instances of its random
variables. MEBN provides a compact way to represent repeated structure in a BN. An
important advantage of MEBN is that there is no fixed limit on the number of RV
instances, and the random variable instances are dynamically instantiated as needed.</p>
        <p>An MTheory is a set of MFrags that satisfies conditions of consistency ensuring
the existence of a unique joint probability distribution over its random variable
instances.</p>
        <p>To apply an MTheory to reason about particular scenarios, one needs to provide
the system with specific information about the individual entity instances involved in
the scenario. On receipt of this information, Bayesian inference can be used both to
answer specific questions of interest (e.g., how likely is it that a particular
procurement is being directed to a specific enterprise?) and to refine the MTheory
(e.g., each new tactical situation includes additional statistical data about the
likelihood of a given attack for that set of circumstances). Bayesian inference is used
to perform both problem specific inference and learning in a sound, logically coherent
manner (for more details see [6 and 7]).</p>
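        <p>The problem-specific inference step can be illustrated with a simple odds-form Bayesian update; the prior and likelihood ratios below are invented for illustration and are not taken from the CGU model:</p>
        <preformat>
```python
# Odds-form Bayesian update sketch (illustrative likelihood ratios, not CGU figures).
def update_probability(prior_prob, likelihood_ratios):
    """Fold each finding's likelihood ratio into the prior odds of the query."""
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds = odds * lr
    return odds / (1.0 + odds)

# Prior 5%; two findings: uncommon index (LR 8) and one-contract experience (LR 3).
print(round(update_probability(0.05, [8.0, 3.0]), 3))  # 0.558
```
        </preformat>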
        <p>
          State-of-the-art systems are increasingly adopting ontologies as a means to ensure
formal semantic support for knowledge sharing [8, 9, 10, 11, 12, and 13].
Representing and reasoning with uncertainty is becoming recognized as an essential
capability in many domains. A common error is to provide support for uncertainty
representation by just annotating ontologies with numerical probabilities. This
approach leads to brittleness, as too much information is lost due to the lack of a
representational scheme that can capture structural nuances of the probabilistic
information. More expressive representation formalisms are needed [
          <xref ref-type="bibr" rid="ref30 ref37 ref4 ref46">4</xref>
          ].
        </p>
        <p>PR-OWL [14, 15] was proposed as a more
expressive formalism for representing knowledge in domains characterized by
uncertainty. Figure 1 presents the main concepts needed to define an MTheory in
PR-OWL. In the diagram, the ellipses represent the general classes, while the arcs
represent the main relationships among the classes.</p>
        <p>
          The procurement fraud detection probabilistic ontology was built in
UnBBayes-MEBN, a tool for building and reasoning with PR-OWL probabilistic ontologies.
UnBBayes-MEBN was the first software to implement PR-OWL/MEBN (see [
          <xref ref-type="bibr" rid="ref16 ref17 ref18 ref19">16, 17,
18, 19</xref>
          ] for more details). UnBBayes-MEBN supports Multi-Entity Bayesian Network
(MEBN) and enables creation and editing of Probabilistic Ontologies in PR-OWL
[
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. The MEBN/PR-OWL Graphical User Interface (GUI) [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] allows users to
define MFrags and make probabilistic queries. UnBBayes-MEBN also implements an
algorithm for generating a Situation Specific Bayesian Network (SSBN) [
          <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
          ],
which is an ordinary BN created by instantiating instances of the MFrags to respond
to a probabilistic query. Once the SSBN is generated, the inference engine
(Reasoning) is called to process findings and update beliefs. UnBBayes-MEBN uses
the Protégé-OWL library to load and save PR-OWL files (IO) in a format compatible
with OWL. It supports first order logic context node evaluation (FOL), through the
use of the PowerLoom library. It also defines and implements a built-in mechanism
for typing and recursion. Finally, it permits the definition of dynamic conditional
probabilistic tables.
        </p>
        <p>UnBBayes has proven to be a simple, yet powerful, tool for designing probabilistic
ontologies and for uncertain reasoning in complex situations such as procurement
fraud detection. It is straightforward to use and provides powerful features (e.g.
dynamic table) not available in systems (e.g., Quiddity) previously employed to
reason with PR-OWL/MEBN knowledge bases.</p>
        <p>3 Procurement Fraud Detection</p>
        <p>A major source of corruption is the procurement process. Although laws attempt to
ensure a competitive and fair process, perpetrators find ways to turn the process to
their advantage while appearing legitimate. This is why a specialist has
systematically catalogued the different kinds of procurement fraud CGU has dealt with
in recent years.</p>
        <p>These different fraud types are characterized by criteria, such as business owners
who work as a front for the company, use of accounting indices that are not common
practice, etc. Indicators have been established to help identify cases of each of these
fraud types. For instance, one principle that must be followed in public procurement is
that of competition. Every public procurement should establish minimum requisites
necessary to guarantee the execution of the contract in order to maximize the number
of participating bidders. Nevertheless, it is common to have a fake competition when
different bidders are, in fact, owned by the same person. This is usually done by
having someone act as a front for the enterprise, often a person with little or no
education.</p>
        <p>The ultimate goal of this case study is to structure the specialist knowledge in a
way that an automated system can reason with the evidence in a manner similar to the
specialist. Such an automated system is intended to support specialists and to help
train new specialists, but not to replace them. Initially, a few simple criteria were
selected as a proof of concept. Nevertheless, it is shown that the model can be
incrementally updated to incorporate new criteria. In this process, it becomes clear
that a number of different sources must be consulted to come up with the necessary
indicators to create new and useful knowledge for decision makers about the
procurements.</p>
        <p>Figure 2 presents an overview of the procurement fraud detection process. The data
for our case study represent several requests for proposal and auctions issued by
Federal, State, and Municipal Offices (Public Notices – Data). Since the focus of this
work is on representing the specialist’s knowledge and reasoning with probabilistic
ontologies, rather than on collecting information, the idea is that the CGU analysts
who already carry out audits and inspections collect the information through
questionnaires created specifically to gather indicators for the selected criteria
(Information Gathering). These questionnaires can be created using a system that is
already in production at CGU. Once they are answered, the necessary information
becomes available (DB – Information). UnBBayes, using the probabilistic ontology
designed by experts (Design – UnBBayes), can then take these millions of items of
information and transform them into dozens or hundreds of items of knowledge
through logical and probabilistic inference: procurement announcements, contracts,
reports, etc. (a huge amount of data) are analyzed to extract relevant relations and
properties (a large amount of information), which in turn are used to draw
conclusions about possible irregularities (a smaller number of items of knowledge)
(Inference – Knowledge). This knowledge can be filtered so that only the
procurements whose fraud probability exceeds a threshold, e.g., 20%, are
automatically forwarded to the responsible department, along with the inferences
about potential fraud and the supporting evidence (Report for Decision Makers).</p>
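        <p>The final filtering step amounts to thresholding the inferred probabilities; a sketch, with an invented set of results and the 20% threshold mentioned in the text:</p>
        <preformat>
```python
# Sketch of the Report for Decision Makers filtering step (invented data).
THRESHOLD = 0.20

def for_review(fraud_probs):
    """Return the procurements above the threshold, most suspicious first."""
    flagged = [proc for proc, p in fraud_probs.items() if p > THRESHOLD]
    return sorted(flagged, key=fraud_probs.get, reverse=True)

print(for_review({"proc1": 0.72, "proc2": 0.001, "proc3": 0.25}))  # ['proc1', 'proc3']
```
        </preformat>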
        <p>The criteria selected by the specialist were the use of accounting indices and the
demand for experience in just one contract. There are four common types of indices
that are usually used as requirements in procurements (ILC, ILG, ISG, and IE). Any
other type could indicate a made-up index specifically designed to direct the
procurement to some specific company. The greater the number of uncommon
accounting indices used by a procurement, the more suspicious it is, i.e., the higher
the chance of fraud. In addition, a procurement specifies a minimum value for
these accounting indices. The minimum value usually required is 1.0. The
higher this minimum value, the more the competition is narrowed, and therefore the
higher the chance the procurement is being directed to some company.</p>
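        <p>This criterion can be summarized by two simple counts, sketched below; the set of common index types follows the text, while the scoring itself is a deliberate simplification of the model’s probability tables:</p>
        <preformat>
```python
# Heuristic sketch of the accounting-index criterion (a simplification of the model).
COMMON_TYPES = {"ILC", "ILG", "ISG", "IE"}
USUAL_MIN_VALUE = 1.0

def index_suspicion(indices):
    """indices: list of (index_type, min_value) pairs required by a procurement.
    Returns (number of uncommon index types, number of indices stricter than 1.0)."""
    uncommon = sum(1 for t, _ in indices if t not in COMMON_TYPES)
    strict = sum(1 for _, v in indices if v > USUAL_MIN_VALUE)
    return uncommon, strict

print(index_suspicion([("ILC", 2.0), ("ILG", 1.5), ("Other", 3.0)]))  # (1, 3)
```
        </preformat>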
        <p>The other criterion, demanding proof of experience from a single contract, is suspect
because experience is rarely gained in one particular contract alone; it is built up by
doing the work over and over again in different contracts. It does not matter whether
you have built 1,000 ft2 of wall in just one contract or 100 ft2 in 10 different
contracts: the experience gained will be basically the same.</p>
        <p>The procurement fraud detection model was developed as a probabilistic ontology
(using PR-OWL) to define its semantics and uncertain characteristics. The MTheory
created for the model, using UnBBayes-MEBN, was divided into three different
MFrags.</p>
        <p>The first, Figure 3, presents the criteria required from a company to participate in
the procurement, containing information about the type of accounting index (ILC,
ILG, ISG, IE, and Other) and the minimum value for it (between 0 and 1, between 1
and 2, between 2 and 3, and greater than 3). This MFrag also contains information
about where a specific index is used (which procurement), and if the procurement
demands experience in only one contract.</p>
        <p>Fig. 4. DirectingProcurementByIndexes MFrag.</p>
        <p>The second, Figure 4, represents whether a procurement is being directed to a
specific company by the use of unusual accounting indices. As explained before, this
analysis is based on the type of the index and the minimum value it requires. This
evaluation takes into consideration every index used in a specific procurement, hence
it is dynamic.</p>
        <p>The last MFrag, Figure 5, represents the overall possibility that a procurement is
being directed to a specific company based on the result of its being directed by the
use of unusual indices and by the requirement of experience in only one contract, as
explained before.</p>
        <p>Fig. 5. DirectingProcurement MFrag.</p>
        <p>To test the model, two scenarios, representing the groups of suspect and
non-suspect procurements, were chosen from a set of real cases, as shown:</p>
        <p>• Suspect procurement (proc1):
o ind1 = ILC &gt;= 2.0;
o ind2 = ILG &gt;= 1.5;
o ind3 = Other &gt;= 3.0;
o it demands experience in only one contract.</p>
        <p>• Non-suspect procurement (proc2):
o ind4 = IE &gt;= 1.0;
o ind5 = ILG &gt;= 1.0;
o ind6 = ILC &gt;= 1.0;
o it does not demand experience in only one contract.</p>
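        <p>For concreteness, the two scenarios can be encoded as entities and findings in plain data form; the field names are our own shorthand, not PR-OWL syntax:</p>
        <preformat>
```python
# The two test scenarios encoded as findings (shorthand, not PR-OWL syntax).
procurements = {
    "proc1": {"indices": [("ILC", 2.0), ("ILG", 1.5), ("Other", 3.0)],
              "single_contract_experience": True},
    "proc2": {"indices": [("IE", 1.0), ("ILG", 1.0), ("ILC", 1.0)],
              "single_contract_experience": False},
}

COMMON_TYPES = {"ILC", "ILG", "ISG", "IE"}
for name, findings in procurements.items():
    uncommon = sum(1 for t, _ in findings["indices"] if t not in COMMON_TYPES)
    print(name, "uncommon indices:", uncommon,
          "single-contract experience:", findings["single_contract_experience"])
```
        </preformat>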
        <p>The information above was introduced into our model as known entities and
findings. We then queried the system for the node
IsProcurementDirected(proc) for both proc1 and proc2. UnBBayes-MEBN then
executed the SSBN algorithm and generated the same node structure, shown in
Figure 6, because both procurements have three accounting indices and information
about demanding experience in only one contract. However, as expected, the
parameters and findings are different, giving different results for the query, as shown
below:</p>
        <p>• Non-suspect procurement:
o 0.01% that the procurement was directed to a specific company by
using accounting indices;
o 0.10% that the procurement was directed to a specific company.</p>
        <p>• Suspect procurement:
o 55.00% that the procurement was directed to a specific company by
using accounting indices;
o 29.77%, when the information about demanding experience in only
one contract was omitted, and 72.00%, when it was given, that the
procurement was directed to a specific company.</p>
        <p>The specialist analyzed and agreed with the knowledge generated by the
probabilistic ontology reasoner developed using PR-OWL/MEBN in UnBBayes. He
stated that the probabilities represent, semantically (i.e. high, medium, and low
chance), what he would think when analyzing the same entities and findings.</p>
        <p>Although the SSBNs generated for this proof of concept present the same structure,
it is common to have a different one as the context varies from procurement to
procurement. For instance, we have come across several procurements that have all
four common indices and some other different ones. In this case, if there are two
additional indices (ind5 and ind6), then the resulting SSBN would have two more
copies of the nodes IndexType(index) and IndexMinValue(index). This would make an
ordinary BN inapplicable. The ability to make multiple copies of nodes based on the
context is only available in a more expressive formalism, such as MEBN.</p>
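        <p>The growth of the SSBN with the context can be sketched as follows; the node names follow the figures, while the instantiation logic is our simplification of the SSBN algorithm:</p>
        <preformat>
```python
# Sketch of SSBN growth: one IndexType/IndexMinValue node pair per index entity.
def ssbn_nodes(index_entities):
    nodes = ["IsProcurementDirected(proc)", "DirectingProcurementByIndexes(proc)"]
    for ind in index_entities:
        nodes.append("IndexType(" + ind + ")")
        nodes.append("IndexMinValue(" + ind + ")")
    return nodes

print(len(ssbn_nodes(["ind1", "ind2", "ind3"])))                  # 8
print(len(ssbn_nodes(["ind1", "ind2", "ind3", "ind5", "ind6"])))  # 12
```
        </preformat>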
        <p>An additional capability not available with BNs is the ability to specify constraints on
the applicability of knowledge. Such constraints can only be implemented in a more
expressive language, so it is natural to consider a formalism that extends BNs.
MEBN, as a Bayesian first-order logic, makes it possible to define these constraints
using FOL.</p>
        <p>Figure 7 presents the constraints (context nodes) necessary to model the fraud
detection scenarios considered here. In this MFrag, the criterion is to identify if there
is a suspicious business relationship between enterprises entA and entB. The more
cases in which enterprise B wins a procurement whose basic project was developed by
enterprise A, the higher the chance that they have some kind of personal business
relationship, which means it is more likely that enterprise A is developing the
basic projects in a way that favors enterprise B, inhibiting the desired
competition.</p>
        <p>Since the designed model is restricted to just two criteria, the team started to think
about other criteria that could be incorporated and tested further. Figure 8 presents the
suggested MFrag for detecting owners who act as a front for the real owner of the
company (the person who really has the power to make decisions and who gets all the
money), by looking at their socio-economic attributes and checking the size of the
company. In other words, if a company is highly profitable, yet has an owner with
little education, low income, no car, no house, etc., then the company is probably a
front.</p>
        <p>From the criteria presented and modeled in this Section, we can clearly see the
need for a principled way of dealing with uncertainty. But what is the role of the
Semantic Web in this domain? It is easy to see that our domain of fraud
detection is a RIS environment. The data CGU has available do not come only from
its audits and inspections. In fact, much complementary information can be retrieved
from other Federal Agencies, including the Federal Revenue Agency, the Federal
Police, and others. Imagine we have information about the enterprise that won the
procurement, and we want to know about its owners, such as their personal data and
annual income. This type of information is not available at CGU’s Data Base (DB),
but must be retrieved from the Federal Revenue Agency’s DB. Once the information
about the owners is available, it might be useful to check their criminal history. For
that (see Figure 9), information from the Federal Police must be used. In this example,
we have different sources saying different things about the same person: thus, the
AAA slogan applies. Moreover, there might be other Agencies with crucial
information related to our person of interest; in other words, we are operating in an
open world. Finally, to make this sharing and integration process possible, we have to
make sure we are talking about the same person, who may (especially in case of
fraud) be known by different names in different contexts.
</p>
        <p>5 Conclusion
The problem that CGU and many other Agencies have faced of processing all the
available data into useful knowledge is starting to be solved with the use of
probabilistic ontologies, as the procurement fraud detection model showed. Besides
fusing the information available, the designed model was able to represent the
specialist knowledge for the two real cases we evaluated. UnBBayes' reasoning,
given the evidence and using the designed model, was accurate in both suspicious
and non-suspicious scenarios. These results are encouraging, suggesting that a fuller
development of our proof of concept system is promising.</p>
        <p>In addition, it is fairly easy to introduce new criteria and indicators in the model in
an incremental way. Thus, new rules for identifying fraud can be added without
rework. After a new rule is incorporated into the model, new tests can be
added to the existing ones to validate the newly proposed model without
redoing everything from scratch.</p>
        <p>Furthermore, the use of this formalism through UnBBayes allows advantages such
as impartiality in the judgment of irregularities in procurements (given the same
conditions the system will always deliver the same result), scalability (capacity to
analyze thousands of procurements in a short time when compared to human
capacity) and a joint analysis of large volumes of indicators (the higher the number of
indicators to examine jointly the more difficult it is for the specialist analysis to be
objective and consistent).</p>
        <p>As a next step, CGU is choosing new criteria to be incorporated into the designed
probabilistic ontology. This next set of criteria will require information from different
Brazilian Agencies’ databases. Therefore, the semantic power of ontologies with the
uncertainty handling capability of PR-OWL will be extremely useful for fusing
information from multiple databases.</p>
        <p>Acknowledgments. Rommel Carvalho gratefully acknowledges full support from the
Brazilian Office of the Comptroller General (CGU) for the research reported in this
paper, and its employees involved in this research, especially Mário Vinícius
Claussen Spinelli, the domain expert.</p>
        <p>
          Probabilistic Description Logic Learning
In this section, we focus on learning DL axioms and probabilities tailored to
crALC. To learn the terminology component we are inspired by Probabilistic
ILP methods and thus we follow generic syntax and semantics given in [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. The
generic supervised concept learning task is devoted to finding axioms that best
represent positive (covered) and negative assertions; in a probabilistic setting
this cover relation is given by:
Definition 1. (Probabilistic Covers Relation) A probabilistic covers
relation takes as arguments an example e, a hypothesis H and possibly the
background theory B, and returns the probability value P (e| H, B) between 0 and 1 of
the example e given H and B, i.e., covers(e, H, B) = P (e| H, B).
        </p>
        <p>
          Given Definition 1 we can define the Probabilistic DL learning problem as
follows [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]:
Definition 2. (The Probabilistic DL Learning Problem) Given a set E =
Ep ∪ Ei of observed and unobserved examples Ep and Ei (with Ep ∩ Ei = ∅) over
the language LE , a probabilistic covers relation covers(e, H, B) = P (e| H, B), a
logical language LH for hypotheses, and a background theory B, find a hypothesis
H∗ such that H∗ = arg maxH score(E, H, B) and the following constraints hold:
∀ep ∈ Ep : covers(ep, H∗, B) &gt; 0 and ∀ei ∈ Ei : covers(ei, H∗, B) = 0. The
score is some objective function, usually involving the probabilistic covers relation
of the observed examples, such as the observed likelihood ∏ep∈Ep covers(ep, H∗, B)
or some penalized variant thereof.
        </p>
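        <p>As a minimal sketch, the observed-likelihood score in Definition 2 is a product over the observed examples; the covers values below are invented stand-ins for P (e| H, B).</p>
```python
# A toy sketch of the score in Definition 2: the observed likelihood
# ∏_{ep ∈ Ep} covers(ep, H, B). Here covers() is a lookup table of invented
# probabilities; in the paper it is P(e | H, B) computed by inference.
from math import prod

def observed_likelihood(positives, covers):
    """Product of covers(ep, H, B) over the observed (positive) examples."""
    return prod(covers[e] for e in positives)

covers = {"e1": 0.9, "e2": 0.8, "e3": 0.5}   # invented P(e | H, B) values
print(observed_likelihood(["e1", "e2", "e3"], covers))
```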
        <p>Negative examples conflict with the usual view of learning examples in
statistical learning. Therefore, when we speak of positive and negative examples we
are referring to observed and unobserved ones.</p>
        <p>As we focus on crALC, B = K = (T , A), and given a target concept C,
E = IndC+(A) ∪ IndC−(A) ⊆ Ind(A) contains the positive and negative examples, or
individuals. For instance, candidate hypotheses can be given by C ⊒ H1, . . . , Hk,
where H1 = B ⊓ ∃D.⊤, H2 = A ⊔ E, . . ..</p>
        <p>We assume each candidate hypothesis, together with the example e for the
target concept, to be a probabilistic variable or feature in a probabilistic model2;
according to the available examples, each candidate hypothesis turns out to be true,
false or unknown, depending on whether the result of instance checking C(a) on K, Ind(A) is
respectively true, false or unknown. The learning task is restricted to finding a
probabilistic classifier for the target concept.</p>
        <p>
          A suitable framework for this probabilistic setting is the Noisy-OR classifier,
a probabilistic model within the family of Bayesian network classifiers commonly referred
to as models of independence of causal influence (ICI) [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. In a Noisy-OR
classifier we aim at learning a class C given a large number of attributes.
        </p>
        <p>
As a rule, in an ICI classifier, for each attribute variable Aj , j = 1, . . . , k
(A denotes the multidimensional variable (A1, . . . , Ak) and a = (a1, . . . , ak)
its states), we have one child A′j that is assigned a conditional probability
distribution P (A′j| Aj ). The variables A′j , j = 1, . . . , k are parents of
the class variable C. PM (C| A′) represents a deterministic function f that assigns
to each combination of values (a′1, . . . , a′k) a class c. A generic ICI classifier is
illustrated in Figure 1.
2 A similar assumption is adopted in the nFOIL algorithm [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ].
        </p>
        <p>
          Fig. 1. ICI models [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>
          The probability distribution of this model is given by [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]:
        </p>
        <p>PM (c, a′, a) = PM (c| a′) · ∏_{j=1}^{k} PM (a′j| aj ) · PM (aj ),
where the conditional probability PM (c| a′) is one if c = f (a′) and zero otherwise.
The Noisy-OR model is an ICI model where f is the OR function:</p>
        <p>PM (C = 0| A′ = 0) = 1 and PM (C = 0| A′ ≠ 0) = 0.</p>
        <p>The joint probability distribution of the Noisy-OR model is</p>
        <p>PM (· ) = PM (C| A′1, . . . , A′k) · ∏_{j=1}^{k} ( PM (A′j| Aj ) · PM (Aj ) ).
It follows that</p>
        <p>PM (C = 0| A = a) = ∏j PM (A′j = 0| Aj = aj ),   (1)</p>
        <p>PM (C = 1| A = a) = 1 − ∏j PM (A′j = 0| Aj = aj ).   (2)</p>
        <p>Using a threshold 0 ≤ t ≤ 1, all data vectors a = (a1, . . . , ak) such that
PM (C = 0| A = a) &lt; t are classified to class C = 1.</p>
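        <p>Equations (1) and (2) and the threshold rule can be sketched directly; the inhibitory probabilities below are invented.</p>
```python
# A sketch of Noisy-OR classification following Equations (1)-(2):
# PM(C = 0 | A = a) is the product of the inhibitory probabilities, and a
# data vector is assigned to class C = 1 when that product falls below the
# threshold t.
from math import prod

def p_class0(inhibitors):
    """inhibitors[j] = PM(A'_j = 0 | A_j = a_j) for the observed vector a."""
    return prod(inhibitors)

def classify(inhibitors, t=0.5):
    """C = 1 when PM(C = 0 | A = a) falls below t, else C = 0."""
    return 1 if t > p_class0(inhibitors) else 0

inhibitors = [0.9, 0.6, 0.7]        # invented inhibitory probabilities
print(classify(inhibitors))         # 0.378 falls below 0.5, so class 1
```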
        <p>
The Noisy-OR classifier has the following semantics. If an attribute Aj is in a
state aj then the instance (a1, . . . , aj , . . . , ak) is classified as C = 1 unless there
is an inhibitory effect, with probability PM (A′j = 0| Aj = aj ). All inhibitory
effects are assumed to be independent. Therefore the probability that an
instance does not belong to class C (C = 0) is the product of all inhibitory effects
∏j PM (A′j = 0| Aj = aj ). For learning this classifier the EM-algorithm has been
proposed [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. The algorithm is directly applicable to any ICI model; in fact, an
efficient implementation resorts to a transformation of an ICI model using a
hidden variable (further details in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]). We now briefly review the EM-algorithm
tailored to Noisy-OR combination functions.
        </p>
        <p>Every iteration of the EM-algorithm consists of two steps: the expectation
step (E-step) and the maximization step (M-step). In a transformed
decomposable model the E-step corresponds to computing the expected marginal count
n(A′l, Al) given data D = { e1, . . . , en} (ei = { ci, ai} = { ci, ai1, . . . , aik} ) and
model M:
n(A′l, Al) = ∑_{i=1}^{n} PM (A′l, Al| ei) for all l = 1, . . . , k,
where for each (a′l, al)</p>
        <p>PM (A′l = a′l, Al = al| ei) = PM (A′l = a′l| ei) if al = ail, and 0 otherwise.</p>
        <p>
          Assume a Noisy-OR classifier PM and an evidence C = c, A = a. The updated
probabilities (the E-step) of A′l for l = 1, . . . , k can be computed as follows [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]:
PM (A′l = a′l| C = c, a) =
1 if c = 0 and a′l = 0,
0 if c = 0 and a′l = 1,
z · ( PM (A′l = 0| Al = al) − ∏j PM (A′j = 0| Aj = aj ) ) if c = 1 and a′l = 0,
z · PM (A′l = 1| Al = al) if c = 1 and a′l = 1,
where z is a normalization constant.
        </p>
        <p>The maximization step (M-step) corresponds to setting</p>
        <p>P∗M (A′l| Al) = n(A′l, Al) / n(Al), for all l = 1, . . . , k.</p>
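        <p>The E-step update above can be sketched for a single data vector, assuming binary attributes and the four listed cases; the probabilities are invented.</p>
```python
# A sketch of the E-step update for one data vector with evidence C = c,
# where z = 1 / (1 - ∏_j PM(A'_j = 0 | A_j = a_j)) normalises the C = 1 case.
from math import prod

def e_step_posteriors(inhibitors, c):
    """Return P(A'_l = 1 | C = c, a) for each l, given
    inhibitors[l] = PM(A'_l = 0 | A_l = a_l)."""
    if c == 0:
        return [0.0 for _ in inhibitors]      # C = 0 forces every A'_l = 0
    q = prod(inhibitors)                      # ∏_j PM(A'_j = 0 | A_j = a_j)
    z = 1.0 / (1.0 - q)                       # normalisation constant for C = 1
    return [z * (1.0 - p0) for p0 in inhibitors]

print(e_step_posteriors([0.9, 0.6], c=1))
```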
        <p>
Given the Noisy-OR classifier, the complete learning algorithm is described
in Figure 2, where λ denotes the maximum likelihood parameters. We have used
the refinement operators introduced in [
          <xref ref-type="bibr" rid="ref29 ref3 ref36 ref45">3</xref>
          ] and the Pellet reasoner3 for instance
checking. It may happen that during learning a given example for a candidate
hypothesis Hi cannot be proved to belong to the target concept. This is not
necessarily a counterexample for that concept. In this case, we can make use of
the EM algorithm of the Noisy-OR classifier to estimate the class ascribed to
the instance.
3 http://clarkparsia.com/pellet/.
        </p>
        <p>Input: a target concept C, background knowledge K = (T , A), a training set E =
IndC+(A) ∪ IndC−(A) ⊆ Ind(A) containing assertions on concept C.</p>
        <p>Output: induced concept definition C.</p>
        <p>Repeat
  Initialize C′ = ⊥
  Compute hypotheses C′ ⊒ H1, . . . , Hn based on refinement operators for ALC logic
  Let h1, . . . , hn be features of the probabilistic Noisy-OR classifier; apply the EM algorithm
  For all hi
    Compute score ∏ep∈Ep covers(ep, hi, B)
  Let h′ be the hypothesis with the best score
  According to h′, add H′ to C
Until score({ h1, . . . , hi} , λi, E) &gt; score({ h1, . . . , hi+1} , λi+1, E)</p>
        <p>Fig. 2. Complete learning algorithm.</p>
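        <p>The greedy loop of Fig. 2 can be sketched with stubbed refinement and scoring; hypothesis names and score values below are invented placeholders for the refinement operator and the EM-based score.</p>
```python
# A stubbed sketch of the greedy loop in Fig. 2: refinement and the EM-based
# score are replaced by placeholders, so only the control flow is shown.
def learn_concept(refine, score_fn):
    """Greedily add the best-scoring hypothesis until the score stops improving."""
    chosen, best = [], float("-inf")
    while True:
        candidates = refine(chosen)            # hypotheses H1, ..., Hn
        if not candidates:
            return chosen
        h = max(candidates, key=score_fn)      # best-scoring feature h'
        if best >= score_fn(h):                # Until: score no longer improves
            return chosen
        chosen.append(h)
        best = score_fn(h)

# Toy run with invented hypothesis names and fixed scores.
scores = {"H1": 0.4, "H2": 0.7, "H3": 0.6}
print(learn_concept(lambda cur: [h for h in scores if h not in cur], scores.get))
```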
        <p>
          In order to learn the probabilities associated with the terminologies obtained by
the former algorithm, we resort to the EM algorithm. In this sense, we
are influenced in several respects by the approaches given in [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
4
To demonstrate the feasibility of our proposal, we have run preliminary tests on
relational data extracted from the Lattes curriculum platform, the Brazilian
government scientific repository4. The Lattes platform is a public source of
relational data about scientific research, containing data on several thousand
researchers and students. Because the available format is encoded in HTML, we
have implemented a semi-automated procedure to extract content. A restricted
database has been constructed from randomly selected documents. We have
performed learning of axioms based on elicited asserted concepts and roles;
further probabilistic inclusions have been added according to the crALC syntax.
Figure 3 illustrates the network generated for a domain of size 2.
        </p>
        <p>For instance, to properly identify a professor, the following concept
description has been learned:
Professor ≡ Person</p>
        <p>⊓(∃hasPublication.Publication ⊔ ∃advises.Person ⊔ ∃worksAt.Organization)
When Person(0) 5 is given by evidence, the probability value P (Professor(0)) =
0.68 (we have considered a large number of professors in our experiments), as
4 http://lattes.cnpq.br.
5 Indexes 0, 1 . . . n represent individuals from a given domain.
further evidence is given, the probability value changes to:</p>
        <p>P (Professor(0) | ∃hasPublication(1)) = 0.72,
and</p>
        <p>P (Professor(0) | ∃hasPublication(1) ⊔ ∃advises(1)) = 0.75.</p>
        <p>The former concept definition can conflict with standard ILP approaches,
where a more suitable definition might be based mostly on conjunctions. In
contrast, in this particular setting, the probabilistic logic approach has a nice and
flexible behavior. However, it is worth noting that terminological constructs
basically rely on the refinement operator used during learning.</p>
        <p>Another query, linked to relational classification, allows us to prevent
duplicate publications. One can be interested in retrieving the number of publications
for a given research group. Whereas this task might seem trivial, difficulties arise
mainly due to multi-authored documents. In principle, each co-author would
have a different entry for the same publication in the Lattes platform, and it
must be emphasized that each entry is prone to contain errors. In this sense,
a probabilistic concept for duplicate publications was learned:
DuplicatePublication ≡ Publication
⊓(∃hasSimilarTitle.Publication ⊔ ∃hasSameYear.Publication
⊔ ∃hasSameType.Publication)</p>
        <p>It clearly states that a duplicate publication is related to publications that
share a similar title6, same year and type (journal article, book chapter and so on).
At first, the prior probability is low: P (DuplicatePublication(0)) = 0.05. Evidence
on title similarity increases considerably the probability value:</p>
        <p>P (DuplicatePublication(0) | ∃hasSimilarTitle(0, 1)) = 0.77.
6 Similarity was computed by applying a “LIKE” database operator on titles.
Further evidence on type almost guarantees a duplicate concept:</p>
        <p>P (DuplicatePublication(0) | ∃hasSimilarName(1) ⊓ ∃hasSameType(1)) = 0.99.
It must be noted that title similarity does not guarantee a duplicate document.
Two documents can share the same title (same author), but nothing prevents
them from being published through different means (for instance, a congress paper
and an extended journal article). Probabilistic reasoning is valuable for dealing with
such issues.</p>
        <p>5
In this paper we have presented algorithms that perform learning of both
probabilities and logical constructs from relational data for the recently proposed
Probabilistic DL crALC. Learning of parameters is tackled by the EM
algorithm, whereas structure learning is conducted by a combined approach relying
on statistical and ILP methods. We approach learning of concepts as a
classification task; a Noisy-OR classifier has accordingly been adapted to do so.</p>
        <p>Preliminary results have focused on learning a probabilistic terminology from
a real-world domain — the Brazilian scientific repository. Probabilistic logic
queries have been posed on the induced model; experiments suggest that our
methods are suitable for learning ontologies in the Semantic Web.</p>
        <p>Our planned future work is to investigate the scalability of our learning
methods.</p>
        <p>Acknowledgements
The first author is supported by CAPES. The second author is partially
supported by CNPq. The work reported here has received substantial support
through FAPESP grant 2008/03995-5.
</p>
        <p>BeliefOWL: An Evidential Representation in OWL Ontology</p>
        <p>Amira Essaid1 and Boutheina Ben Yaghlane2
1 LARODEC Laboratory, Institut Supérieur de Gestion de Tunis
        <p>
          essaid amira@yahoo.fr
2 LARODEC Laboratory, Institut des Hautes Etudes Commerciales de Carthage
boutheina.yaghlane@ihec.rnu.tn
Abstract. OWL is a language for representing ontologies, but it
is unable to capture uncertainty about the concepts of a domain.
To address the problem of representing uncertainty, we propose in this
paper the theoretical aspects of our tool BeliefOWL, which is based on an
evidential approach. It focuses on translating an ontology into a directed
evidential network by applying a set of structural translation rules. Once
the network is constructed, belief masses are assigned to the different
nodes in order to propagate uncertainties later.
1
Many ontology definition languages have been developed to define ontologies in a
formal way. Among them is OWL3, which is based on crisp logic. This language
suffers from its inability to represent real domains containing incomplete knowledge
or uncertain information. To overcome this, an extension of OWL seems
to be a convenient solution. Many researchers find this extension important and
try to propose approaches for handling uncertainty in the ontology field. For that
purpose, two main mathematical theories have been applied: probability
theory ([
          <xref ref-type="bibr" rid="ref2 ref28 ref35 ref44">2</xref>
          ],[
          <xref ref-type="bibr" rid="ref33 ref40 ref49 ref7">7</xref>
          ]) and the fuzzy sets theory ([
          <xref ref-type="bibr" rid="ref30 ref37 ref4 ref46">4</xref>
          ],[
          <xref ref-type="bibr" rid="ref32 ref39 ref48 ref6">6</xref>
          ]).
        </p>
        <p>
However, not all the problems of uncertainty lend themselves to one of these
theories. We can find ourselves faced with situations where we must
represent total or partial ignorance about information concerning classes.
This can be resolved by applying the Dempster-Shafer theory [
          <xref ref-type="bibr" rid="ref31 ref38 ref47 ref5">5</xref>
          ]. At this stage,
we are interested in using this theory, and in particular we are encouraged to work
with directed evidential networks [
          <xref ref-type="bibr" rid="ref1 ref27 ref34 ref43">1</xref>
          ], which are viewed as an effective and
appropriate graphical representation for uncertain knowledge. In addition, the use
of conditional belief functions provides a good representation of the uncertainty
in the relationships among the variables of a graph.
        </p>
        <p>In this position paper we present our tool BeliefOWL, an approach for
extending an OWL ontology with belief functions, as well as the translation of
this ontology into an evidential network.
3 http://www.w3.org/2001/sw/webOnt</p>
        <p>Uncertainty in OWL
OWL is an expressive language for representing classes and the relations
between them for a domain of discourse. However, the available sources may
fail to give sufficient information about a concept. Sometimes we
find ourselves unable to express the exact relation existing between classes
because of incomplete knowledge about the domain of discourse or missing
values. Uncertainty extensions to OWL have received considerable attention
in recent years.</p>
        <p>
          To cope with uncertain information in OWL extension, we propose the use
of the Dempster-Shafer theory [
          <xref ref-type="bibr" rid="ref31 ref38 ref47 ref5">5</xref>
          ]. In fact this theory allows assigning beliefs
not only to a single element but to a set of elements. Furthermore, it gives
experts the possibility to represent total or partial ignorance
about information concerning the classes of an ontology and the relations that
may exist between them. Besides, this theory provides a method for combining
several pieces of evidence from different sources to establish a new belief by using
Dempster's rule of combination.
        </p>
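        <p>As a minimal sketch, Dempster's rule of combination can be implemented over a small frame of discernment; the frame and mass values below are invented.</p>
```python
# A minimal sketch of Dempster's rule of combination. Mass functions map
# frozensets (focal elements over the frame of discernment) to masses.
def dempster_combine(m1, m2):
    """Combine two mass functions, normalising away the conflicting mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a.intersection(b)
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb            # mass falling on the empty set
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Frame {x, y}: one source favours x, the other is partially ignorant.
m1 = {frozenset({"x"}): 0.6, frozenset({"x", "y"}): 0.4}
m2 = {frozenset({"y"}): 0.3, frozenset({"x", "y"}): 0.7}
print(dempster_combine(m1, m2))
```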
        <p>
          One of our goals is to translate an OWL taxonomy into a directed evidential
network (DEVN). The DEVN is a model introduced in [
          <xref ref-type="bibr" rid="ref1 ref27 ref34 ref43">1</xref>
          ] to represent knowledge
under uncertainty by using the belief functions. It is defined as a directed acyclic
graph (DAG) where the nodes represent variables and the directed arcs linking
nodes describe conditional dependence relations between these variables. These
relations are expressed by conditional belief functions for each variable given
its parents. Two kinds of belief functions are used to represent uncertainty
in the DEVN: the prior belief function and the conditional belief function. The
former concerns the root node and the latter expresses the belief function of a
node given the value taken by its parents.
        </p>
        <p>3 Presentation of BeliefOWL
Figure 1 summarizes the steps leading to our tool. BeliefOWL takes
as input an OWL ontology and produces as output a directed evidential
network (DEVN).</p>
        <p>Step 1: A Belief Extension to OWL: An OWL ontology can define
classes, properties and individuals. In this paper we will focus on attributing
belief masses to the different classes of an OWL taxonomy. For this purpose,
we define some new classes able to represent and to introduce this uncertain
information.</p>
        <p>– Prior evidence : We define two classes to express the prior evidence
&lt;beliefDistribution&gt; and &lt;priorBelief&gt;. The former is used to enumerate the
different masses related to the different classes of an OWL taxonomy. It has an
object property &lt;hasPriorBelief&gt; that specifies the relation between classes
</p>
        <p>Fig. 1. BeliefOWL framework: an OWL ontology is extended with belief masses (Step 1: belief extension to OWL, yielding an evidential ontology), structurally translated into a DEVN DAG (Step 2: structural translation), and assigned conditional belief masses (Step 3), yielding a DEVN with belief masses.</p>
        <p>
&lt;beliefDistribution&gt; and &lt;priorBelief&gt;. The latter expresses the prior
evidence and has a datatype property &lt;massValue&gt; which enables assigning a
mass value between 0 and 1.
– Conditional evidence : It is defined through two main classes
&lt;beliefDistribution&gt; and &lt;condBelief&gt;. The former is the same as in the case of prior
evidence but has an object property &lt;hasCondBelief&gt;. The latter identifies
the conditional evidence and has a datatype property &lt;massValue&gt;.
Step 2: Constructing an Evidential Network: Given an OWL ontology, we
translate it into a DAG by specifying the different nodes to be created as well as the
relations existing between these nodes. The construction of the DAG concerns
only those OWL statements related to classes.</p>
        <p>– &lt;owl:class&gt;: It is represented as a variable node in the translated DEVN.
– &lt;rdfs:subClassOf&gt;: When a class is a subclass of another one, a directed arc
is drawn from the superclass node to the child subclass node.
– &lt;owl:disjointWith&gt;, &lt;owl:equivalentClass&gt;: When two classes are related to
each other by one of these statements, a new node is created in the translated
DEVN and a directed arc is drawn between the two classes and the added
node.
– &lt;owl:intersectionOf&gt;: A class C may be defined as the intersection of some
classes Ci (i = 1, . . . , n). This can be represented in the translated DEVN by an
arc from each Ci to C and another one from C and each Ci to a new node
created for representing the intersection.
– &lt;owl:unionOf&gt;: A class C may be defined as the union of some classes
Ci (i = 1, . . . , n). This can be represented in the translated DEVN by an arc from
C to each Ci and another one from C and each Ci to a new node created
for representing the union.</p>
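        <p>As an illustrative sketch (not BeliefOWL's implementation), the subclass and union rules above can be applied to the small taxonomy shown in Fig. 1; the helper and node-naming scheme are invented.</p>
```python
# A sketch of the structural translation rules on a tiny taxonomy
# (class names follow Fig. 1). Only the subclass and union rules are shown.
def translate(subclass_of, unions):
    """Return directed DEVN arcs as (from, to) pairs."""
    arcs = []
    for child, parent in subclass_of:
        arcs.append((parent, child))           # rdfs:subClassOf rule
    for c, parts in unions.items():
        node = "NodeUnion_" + c                # new node for the union
        arcs.append((c, node))                 # arc from C to the new node
        for ci in parts:
            arcs.append((c, ci))               # owl:unionOf: arc from C to each Ci
            arcs.append((ci, node))            # and from each Ci to the new node
    return arcs

subclass_of = [("Male", "Animal"), ("Female", "Animal")]
unions = {"Human": ["Man", "Woman"]}
print(translate(subclass_of, unions))
```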
        <p>Step 3: Evidence Attribution: Once the DAG of our network is constructed,
the remaining issue is to assign masses to each node of the network. Considering
the DAG that we have obtained, we can distinguish two kinds of nodes:
– ClassesNodes: the nodes representing the different classes of our
taxonomy, defined by &lt;owl:class&gt;. To this kind of node we attribute the prior
belief functions and the conditional ones given in the evidential ontology.
– ConstNodes: those related to the constructors of our taxonomy,
without considering &lt;rdfs:subClassOf&gt; because this kind of constructor is not
represented by a specific node. Concerning the constNodes, masses are
attributed according to the constructor in question. If we
have a node created to depict an intersection between two classes, the mass
is attributed by applying Dempster's rule of combination.
For a node representing a union, the disjunctive rule of combination is
applied instead.</p>
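        <p>The disjunctive rule of combination used for union nodes can be sketched in the same style; the focal sets and masses below are invented.</p>
```python
# A sketch of the disjunctive rule of combination: each pair's product of
# masses goes to the union of the two focal sets (Dempster's rule, used for
# intersection nodes, uses intersections and renormalises instead).
def disjunctive_combine(m1, m2):
    """Combine two mass functions with the disjunctive rule."""
    combined = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            u = a.union(b)
            combined[u] = combined.get(u, 0.0) + ma * mb
    return combined

m1 = {frozenset({"x"}): 0.6, frozenset({"y"}): 0.4}
m2 = {frozenset({"x"}): 0.5, frozenset({"x", "y"}): 0.5}
print(disjunctive_combine(m1, m2))
```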
        <p>Once our evidential network is constructed and the masses are assigned to
each node, a propagation process can be performed.
4
In this paper, we have presented BeliefOWL, a new approach for
representing uncertainty in an OWL ontology. We considered only the case of
including uncertainty in classes. This uncertainty is modeled via the
Dempster-Shafer theory of evidence. We have presented the theoretical aspects of our tool,
which consist in translating an OWL ontology into a network. For this purpose,
we extend the OWL ontology classes with belief masses, then we apply structural
translation rules in order to get the DAG of a directed evidential network. The
masses added to the ontology are extracted and attributed to the
network's class nodes.</p>
        <p>Further work can address the properties and the individuals. The prior
beliefs assigned to the different nodes of the network are given by an expert; in
the future the assignment could be done automatically through a learning process.</p>
        <p>Trevor Martin1,2, Zheng Siyao1,3 and Andrei Majidian2</p>
        <p>1 AI Group, University of Bristol, BS8 1TR UK
2 Intelligent Systems Lab, BT, Adastral Park, Ipswich IP5 3RE, UK
3 School of Computer Science and Engineering, BeiHang University, Beijing, China
Abstract. A systematic form of creative knowledge discovery is outlined,
requiring taxonomies to generalise knowledge structures and mappings between
taxonomies to find parallels between knowledge structures from different
domains. These share many of the features needed to handle uncertainty in the
semantic web, and results will be relevant to the URSW community.</p>
        <p>Keywords: fuzzy taxonomy, creative knowledge discovery, fuzzy association
rules, uncertainty in semantic web
1
Almost by definition, creative knowledge discovery is difficult to automate and
harder to assess objectively. By creative knowledge discovery, we mean finding
previously unknown links between concepts or small “chunks” of knowledge in such
a way that useful additional knowledge is generated. It can be distinguished from
“standard” knowledge discovery by defining the latter as the search for explanatory
and/or predictive patterns and rules in large volume data within a specific domain. For
example, a knowledge discovery process might examine an ISP's (internet service
provider) customer database and determine that people who have a high monthly
spend and who send more than three emails to the support centre in a single month are
very likely to change to a different provider in the following month. Such knowledge
is implicit within the data but is useful in predicting and understanding behaviour.</p>
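        <p>The ISP churn rule just described can be written as a simple predicate; the field names and spend threshold below are invented, not taken from any real dataset.</p>
```python
# The churn rule above as a predicate over invented customer fields.
def likely_to_churn(monthly_spend, support_emails_this_month,
                    high_spend_threshold=80.0):
    """High monthly spend plus more than three support emails in a month."""
    return (monthly_spend > high_spend_threshold
            and support_emails_this_month > 3)

print(likely_to_churn(120.0, 5))   # True
print(likely_to_churn(120.0, 2))   # False
```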
        <p>By contrast, creative knowledge discovery is more concerned with “thinking the
unthought-of” and looking for new links, new perspectives, etc. Such links are often
found by drawing parallels between different domains and looking to see how well
those parallels hold - for example, compare the ISP example mentioned above to a
hotel chain finding that regular guests who report dissatisfaction with two or more
stays often cease to be regular guests unless they are tempted back by special
treatment (such as complimentary room upgrades). This is a simple illustration of
similar problems (losing customers) in different domains. A solution in one domain
(complimentary upgrades) could inspire a solution in the second (e.g. a higher
download allowance at the same price). Of course, such analogies may break down
when probed too far but they often provide the creative insight necessary to spark a
new solution through a new way of looking at a problem. In many cases, this
inspiration is often referred to as “serendipity”, or accidental discovery.</p>
        <p>
          It is possible that many serendipitous discoveries are subsequently rationalised as
the outcome of rigorous application of the scientific process. The traditional view of
the scientist is as a generator and tester of hypotheses - often this is presented as an
almost mechanical process and systems such as King’s robot scientist [
          <xref ref-type="bibr" rid="ref1 ref27 ref34 ref43">1</xref>
          ] take this to
an extreme, using an inductive logic programming approach to systematically
generate and test hypotheses in a laboratory.
        </p>
        <p>In this paper we outline a project to automate creative knowledge discovery. The
aim is to find parallels between different knowledge repositories - in this case,
semantically annotated networks of documents or process models - in the hope of
transferring useful links from one network to another. In the case of process models
from different domains, the aim is to identify possible improvements in one process if
its analogue in the other domain is more efficient in some way.</p>
        <p>This work shares many of the problems faced by research into uncertainty in the
semantic web - the mapping between repositories is very similar to a mapping
between ontologies, and the creation of knowledge networks encounters several issues
that are well-known from the semantic web, such as the need for imprecise concepts,
integration of sources that represent entities and classes at different levels of detail
etc. The work is at an early stage, and this paper briefly outlines (i) a possible
approach to automating creativity which relies on the use of fuzzy taxonomies and (ii)
preliminary work on automatic extraction of taxonomies from data; this requires a
representation of uncertainty similar to that needed for the semantic web.
</p>
        <p>2 A Method for Creative Knowledge Discovery</p>
        <p>
          Can creativity - in this sense of suddenly making novel connections - be
automated? Koestler [
          <xref ref-type="bibr" rid="ref2 ref28 ref35 ref44">2</xref>
          ] summarised this view of creativity as follows:
“The creative act is not an act of creation in the sense of the Old Testament. It does not create
something out of nothing: it uncovers, selects, re-shuffles, combines, synthesizes already
existing facts, ideas, faculties, skills. The more familiar the parts, the more striking the new
whole”
Table 1 - attributes of two music players (taken from [
          <xref ref-type="bibr" rid="ref30 ref37 ref4 ref46">4</xref>
          ])
        </p>
        <p>Conventional tape recorder | Sony Walkman
big | small
clumsy | neat
records | does not record
plays back | plays back
uses magnetic tape | uses magnetic tape
tape is on reels | tape is in cassette
speakers in cabinet | speakers in headphones
mains electricity | battery</p>
        <p>
          Sherwood [
          <xref ref-type="bibr" rid="ref29 ref3 ref36 ref45">3</xref>
          ] proposes a systematic approach, in which a situation or artefact is
represented as an object with multiple attributes, and the consequences of changing
attributes, removing constraints, etc. are progressively explored. For example, given
an old style reel-to-reel tape recorder as starting point, Sherwood’s approach is to list
some of its essential attributes, substitute plausible alternatives for a number of these
attributes, and evaluate the resulting conceptual design or solution. Table 1 shows
how this could have led to the Sony Walkman in the late 70s [
          <xref ref-type="bibr" rid="ref30 ref37 ref4 ref46">4</xref>
          ]. Again, with the
benefit of hindsight the reader should be able to see that by changing magnetic tape to
a hard disk and considering the way music is purchased and distributed, the same
method could (retrospectively, at least) lead one to invent the iPod. Of course, having
the vision to choose new attributes and the knowledge and foresight to evaluate the
result is the hard part - and the creative steps are usually only obvious with hindsight.
        </p>
        <p>This systematic approach is ideally suited to handling data which is held in an
object-attribute-value format, provided we have a means of changing/generalising
attribute values. We intend to use taxonomies for this purpose, so that “sensible”
changes can be made (e.g. mains, battery are both possible values for a power
attribute). Representing an object O as a set of attribute-value pairs
{(ai, vi) | attribute ai of object O has value vi}, we generate a new “design” O* = {(ai, Ti(vi))}
by changing one or more values using Ti, a non-deterministic transformation of a
value to another value from the same taxonomy. Given sufficient time, this would
simply enumerate all possible combinations of attribute values. We can reduce the
search space by looking at the solution to an analogous problem in a different domain.</p>
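        <p>The enumeration step above can be sketched in code. This is a minimal illustration only: the taxonomy, attribute names and values below are invented for the example, and the analogy-based pruning of the search space is not shown.

```python
import itertools

# Toy taxonomy: for each attribute, the set of values on the same branch,
# so a substitution stays "sensible" (e.g. mains/battery are both power values).
TAXONOMY = {
    "power": {"mains electricity", "battery"},
    "storage": {"tape on reels", "tape in cassette", "hard disk"},
    "speakers": {"in cabinet", "in headphones"},
}

def candidate_designs(obj):
    """Enumerate new designs O* = {(a_i, T_i(v_i))}: every combination in
    which each attribute keeps its value or takes a sibling from the same
    taxonomy branch; the unchanged original is excluded."""
    attrs = sorted(obj)
    choices = [sorted(TAXONOMY[a]) for a in attrs]
    for combo in itertools.product(*choices):
        design = dict(zip(attrs, combo))
        if design != obj:
            yield design

tape_recorder = {"power": "mains electricity",
                 "storage": "tape on reels",
                 "speakers": "in cabinet"}
designs = list(candidate_designs(tape_recorder))
# 2 * 3 * 2 = 12 combinations minus the original leaves 11 candidates;
# one of them (battery, hard disk, headphones) is essentially the iPod.
```
</p>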
        <p>
          Our aim is to adapt previously developed tools for taxonomy matching [
          <xref ref-type="bibr" rid="ref31 ref38 ref47 ref5">5</xref>
          ] so that
analogies can be found; the next section briefly outlines a way to extract taxonomic
structure when it is not explicitly available.
</p>
        <p>
          An ontology essentially consists of a taxonomy of concepts, one or more relations
between concepts, and rules which impose constraints and allow data transformation.
The idea of an ontology is central to the semantic web [
          <xref ref-type="bibr" rid="ref32 ref39 ref48 ref6">6</xref>
          ], although there can be a
very high cost in creation and maintenance. This is reflected in practical experience:
it is rare to find web-based data that is fully marked up with RDF or OWL metadata.
It is far more common to encounter data that is stored in a relational database or an
equivalent XML-tagged format. Such data often contains implicit taxonomies - a
relational table may flatten hierarchical data into one or more attributes. For example,
a film database may record genre(s) and sub-genre(s) as separate fields, hiding the
hierarchical dependency. The hierarchy may be obvious to a human reader of the data,
but it is invisible to the machine. Similarly, XML tags can hide structure. XML relies
on human interpretation for its “semantics” - a programmer can take advantage of the
fact that &lt;iPod&gt; and &lt;walkman&gt; are subtypes of &lt;musicPlayer&gt;, but a program has
no way of knowing this unless it is made explicit by means of a taxonomy. Although
a well-designed schema will make hierarchical structure explicit, our experience is
that a significant proportion of data sources rely on programmer intuition instead.
        </p>
        <p>
          We have investigated formal concept analysis (FCA) [
          <xref ref-type="bibr" rid="ref33 ref40 ref41 ref49 ref50 ref7 ref8">7, 8</xref>
          ] as a way of extracting
hidden structure from a dataset in object-attribute-value form. In its simplest form,
FCA considers a binary-valued table, where each row corresponds to an object and
each column to an attribute (property). The extension to a fuzzy case is (relatively)
straightforward, by considering a fuzzy relation R* and alpha-cuts which reduce the
problem to the crisp case. A brief outline and promising initial results are given in [
          <xref ref-type="bibr" rid="ref42 ref51 ref9">9</xref>
          ].
        </p>
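        <p>For illustration, the crisp core of FCA and the alpha-cut reduction can be sketched as follows. The context is a toy example invented here, not one of the project's datasets.

```python
from itertools import combinations

# Toy object-attribute context: each object maps to the set of attributes
# it possesses (a binary-valued table in set form).
OBJECTS = {
    "walkman":      {"portable", "plays_tape"},
    "reel_to_reel": {"plays_tape", "records"},
    "ipod":         {"portable", "hard_disk"},
}
ATTRIBUTES = {"portable", "plays_tape", "records", "hard_disk"}

def extent(intent_set):
    """All objects having every attribute in intent_set."""
    return {o for o, attrs in OBJECTS.items() if intent_set <= attrs}

def intent(objs):
    """All attributes shared by every object in objs."""
    shared = set(ATTRIBUTES)
    for o in objs:
        shared &= OBJECTS[o]
    return shared

def formal_concepts():
    """All closed (extent, intent) pairs, by brute force over attribute
    subsets - adequate for small contexts like this one."""
    found = set()
    for r in range(len(ATTRIBUTES) + 1):
        for subset in combinations(sorted(ATTRIBUTES), r):
            a = extent(set(subset))
            found.add((frozenset(a), frozenset(intent(a))))
    return found

def alpha_cut(fuzzy_context, alpha):
    """Reduce a fuzzy context (memberships in [0,1]) to a crisp one by
    keeping only attributes with membership >= alpha."""
    return {o: {a for a, mu in attrs.items() if mu >= alpha}
            for o, attrs in fuzzy_context.items()}

concepts = formal_concepts()
```
</p>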
        <p>4 Applications</p>
        <p>Two specific domains form demonstrators for this work. Process mining
algorithms exist to discover process models from XML log files; various extensions include
heuristic and fuzzy approaches to handle noisy data. Semantic process mining
additionally involves ontological knowledge. The ProM [www.processmining.org] platform takes
SA-MXML (semantically annotated MXML) files as input, where the annotation conforms
to the Web Service Modelling Language. The aim of this demonstrator is to find
(partial) similarities between process models in different domains, and use process
simulation tools to determine whether one process can be improved by slightly
altering it to match the second process more closely. The second demonstrator is
based on web forum discussions and support centre documentation, and will attempt
to improve the automated provision of “help” information.
</p>
        <p>5 Summary</p>
        <p>
This paper has briefly outlined a project to automate aspects of creative knowledge
discovery. The project is in its early stages. Although not a direct application of
uncertain reasoning in the semantic web, it shares many of the same problems and
useful cross-fertilisation of ideas should be possible.</p>
        <p>Acknowledgement : this work was partly funded by the FP7 BISON (Bisociation
Networks for Creative Information Discovery) project, number 211898.
</p>
        <p>Position paper: Uncertainty reasoning for linked data</p>
        <p>Dave Reynolds
Hewlett Packard Laboratories, Bristol</p>
        <p>Dave.e.Reynolds@gmail.com</p>
        <p>Abstract. Linked open data offers a set of design patterns and conventions for
sharing data across the semantic web. In this position paper we enumerate some
key uncertainty representation issues which apply to linked data and suggest
approaches to tackling them. We suggest the need for vocabularies to enable
representation of link certainty, to handle ambiguous or imprecise values and to
express sets of assumptions based on named graph combinators.</p>
        <p>
Keywords: Uncertainty reasoning; linked open data; semantic web</p>
        <p>1 Introduction</p>
        <p>
The need for reasoning over uncertain information within the semantic web occurs in
many different situations. It can arise from intrinsic uncertainty in the world being
modeled or from limitations of the sensing or reasoning agent itself (epistemic). The
term uncertainty is often used to refer to many different notions including ambiguity,
randomness, vagueness, inconsistency, incompleteness [
          <xref ref-type="bibr" rid="ref1 ref27 ref34 ref43">1</xref>
          ][
          <xref ref-type="bibr" rid="ref42 ref51 ref9">9</xref>
          ].
        </p>
        <p>In recent years an approach to the semantic web, called linked data, has been
developed and offers a promising route to practical and widespread semantic web
uptake. It provides a set of design guidelines or patterns for how the semantic web
technologies, and broader web architecture, can be used for sharing information. The
existing guidelines and practices have no provision for representation of uncertainty;
yet linked data is indeed fraught with many of these different types of uncertainty.</p>
        <p>In this brief position paper we examine the ways in which uncertainty can occur in
a linked data setting and sketch possible approaches to addressing the issues raised.
</p>
        <p>2 Linked Data</p>
        <p>
Linked Data is a set of conventions for publishing data on the semantic web. It is
based on principles outlined by Tim Berners-Lee [
          <xref ref-type="bibr" rid="ref2 ref28 ref35 ref44">2</xref>
]. These principles advocate the use
of http URIs for naming entities, the publication of data about these URIs using the
standards (RDF, SPARQL) and inclusion of links to other URIs so that agents can
discover more information. While quite simple these guidelines, along with a growing
body of practical advice [
          <xref ref-type="bibr" rid="ref29 ref3 ref36 ref45">3</xref>
          ], have led to publication and linking of many datasets in
this form [
          <xref ref-type="bibr" rid="ref30 ref37 ref4 ref46">4</xref>
          ]. This has resulted in high profile commercial applications such as [
          <xref ref-type="bibr" rid="ref31 ref38 ref47 ref5">5</xref>
          ].
        </p>
        <p>While not explicitly stated, the style of linked data places an emphasis on data
sharing and simplicity, with correspondingly less emphasis on depth of modeling and
reasoning. Yet the intrinsic nature of the linked data approach leads to issues of
uncertainty representation and reasoning. This is due to the emphasis on cross-linking
multiple data sources that have been independently developed and modeled.
Uncertainty can arise from the instance linking process, from the mapping between
different sources' models, and from differing hidden assumptions in the underlying
datasets. Yet the essence of linked data, and a large part of the reason for its uptake, is
simplicity. The data is intended to be self-descriptive and accessible through simple
link following and graph union or through SPARQL endpoints. Our challenge is to
develop a common, easy to deploy, approach to uncertainty representation which can
be applied to linked data sets without losing this simplicity.
</p>
        <p>3 Some sources of uncertainty in linked data applications</p>
        <p>
In this section we enumerate some key sources of uncertainty for linked data. We
focus on the sources which directly result from the intrinsic nature of linked data – the
cross-linking of independently developed RDF datasets.</p>
        <p>
3.1 Ambiguity resulting from data merging</p>
        <p>
In linked data, entities (Individuals) which co-occur with different URIs in different
datasets are unified. This is achieved by publishing owl:sameAs relations between
identified entities, either within the dataset or as a separate link set. The process of
identifying such co-references is imperfect. Firstly, the co-references are typically
found by a mixture of string matching, attribute matching, and type constraints,
generally based on a statistical or machine learning algorithm [
          <xref ref-type="bibr" rid="ref32 ref39 ref48 ref6">6</xref>
          ]. Thus co-references
are only identified with some probability (or less formal heuristic weighting). Yet the
asserted links are binary and the strength of association is lost. Secondly, the nature of
the entities is ambiguous in some datasets. For example, Wikipedia and thus DBPedia
conflate the concepts of the City of Bristol in the UK and the associated Unitary
Authority. A co-reference link that identified the ambiguous DBPedia concept with
one that specifically denotes the Unitary Authority would be an error in general, even
though it may be an acceptable approximation in some situations.
        </p>
        <p>3.2 Misalignment of precision and assumptions between merged sources</p>
        <p>
Many datasets in the linked data web publish property values for the entities they
describe; for example, the population of the City of London. Yet those values are
sometimes imprecise or dependent upon measurement assumptions that are not made
explicit. For example, the population of a city depends on the time of the
measurement, the measurement methodology and the precise definition of the
boundary of the city; it is also subject to statistical uncertainty. As a result, at the time
of writing, a linked data query on London returns a graph with four assertions on its
population ranging from 7,700,000 to 8,500,000. One of these sources of variation,
the time of measurement, is sometimes made explicit in data and indeed one of the
four assertions is (indirectly) time qualified. However, such contextual qualification is
not consistently available and, in any case, only accounts for one source of variation.
Thus when datasets are linked the resulting union will often have multiple conflicting
values for supposedly functional properties.</p>
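        <p>A consumer confronted with such conflicting assertions might summarise them along the following lines. This is a sketch only; the record layout, source names and the third figure are illustrative, not a proposed vocabulary.

```python
from statistics import median

def merge_functional_values(samples):
    """Summarise conflicting assertions about a supposedly functional
    property into one imprecise-value record, keeping full provenance.
    The record layout is illustrative, not a fixed vocabulary."""
    values = sorted(s["value"] for s in samples)
    return {
        "samples": samples,          # every source assertion survives
        "min": values[0],
        "max": values[-1],
        "estimate": median(values),  # one simple choice of point estimate
    }

# Hypothetical population assertions, in the spirit of the London example.
population = merge_functional_values([
    {"value": 7_700_000, "source": "s1", "context": "y2009"},
    {"value": 7_900_000, "source": "s2", "context": "y2008"},
    {"value": 8_500_000, "source": "s3", "context": None},
])
```
</p>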
        <p>3.3 Misalignment of models</p>
        <p>
When linking datasets we also want to map the associated ontologies. This process is
just as error prone as entity co-reference since the axiomatization of concepts in the
ontologies is rarely so complete as to allow an unambiguous mapping. Errors in the
ontology mapping can lead to global effects such as unexpected identification of
related concepts. Determining and publishing such alignment errors is the subject of
considerable research and is outside the scope of this paper.
</p>
        <p>3.4 Absence of source reliability information</p>
        <p>
Separate from the uncertainty arising from the combination and linking of datasets,
the datasets themselves can be uncertain or contain errors (either accidental or
malicious). While this is true in general in the semantic web, the linked data approach
implies broad cross linking with no provision for narrow scoping of link references.
This exacerbates the problems of the veracity or trustworthiness of included datasets.
</p>
        <p>4 Mitigation approaches</p>
        <p>
We now discuss approaches to mitigate the effects of these uncertainty sources on the
consumers of linked data. In keeping with linked data methodology we seek simple,
broadly applicable, design patterns. In particular, we suggest the need for design
patterns for making the uncertainty inherent in the linked datasets more explicit, and
mechanisms to enable selective combination of datasets (so that problematic values or
links can be omitted). In this short position paper we only sketch the suggested
approaches as a basis for discussion in the workshop.
</p>
        <p>4.1 Link vocabulary</p>
        <p>
The link vocabulary would provide a common representation for co-reference links,
enabling publication of the link certainty information on which per-link inclusion
decisions can be made. This can be achieved by extending the voiD ontology [
          <xref ref-type="bibr" rid="ref41 ref50 ref8">8</xref>
          ] with
a concept UncertainLinkSet (as a subclass of void:LinkSet), and associated
properties for describing the method used for deriving the link set. The
UncertainLinkSet itself would contain n-ary relations (WeightedLink) comprising the
link and associated link weight. Different subclasses of WeightedLink indicate
different interpretations of the link weight (such as probabilistic or ad hoc).
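A consumer of such an UncertainLinkSet could then make the per-link inclusion decisions mentioned above; a minimal sketch follows, in which the tuple layout and URIs are invented for illustration:

```python
def accept_links(weighted_links, threshold):
    """Keep only co-reference links whose weight reaches the consumer's
    threshold, yielding crisp owl:sameAs pairs. Each link is a
    (subject, object, weight) tuple; the layout is illustrative."""
    return {(s, o) for s, o, w in weighted_links if w >= threshold}

weighted = [
    ("dbpedia:Bristol", "ex:BristolUnitaryAuthority", 0.55),
    ("dbpedia:London",  "ex:GreaterLondon",           0.92),
]
accepted = accept_links(weighted, 0.9)
```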
</p>
        <p>4.2 Imprecise value vocabulary</p>
        <p>
The imprecise value vocabulary would provide a common representation for
imprecise values that arise from data set merger, as discussed in 3.2. This would allow
republication of merged datasets which explicitly show the variation in source data
values. Returning to our example of the population of London the merged set might
look like:
:London :population [ a :ImpreciseValue ;
    :samplevalue [ :value 7700000 ; :source :s1 ; :context :y2009 ] ;
    :samplevalue [ :value 7900000 ; :source :s2 ; :context :y2008 ] ;
    :estimatedValue 785123 ] .
</p>
        <p>4.3 Override graphs</p>
        <p>
Finally we suggest the need for override graphs so that one agent can publish
retractions and overrides to the link assertions or data assertions made by another.
        </p>
        <p>
          The current approach to this, in linked data applications, is to partition data and
link sets into named graphs [
          <xref ref-type="bibr" rid="ref33 ref40 ref49 ref7">7</xref>
          ]. For example, rather than include all the co-reference
links directly in the same graph as the entity descriptions, we partition them into a
separate named graph. In this way a RESTful client can see the union of the relevant
graphs but a SPARQL endpoint can support selection of which graphs to include.
This allows agents to avoid selected link sets or sub-sources but only at the grain size
of the entire graph. To overcome this limitation we suggest extending the VoiD
vocabulary to include the graph combinators difference, union and replace, so that one source
can decide which subsets of the data and links to trust, and can then publish the
assumptions it is making as a set of deltas over the source graphs. The difference
graphs enable per-link and per-assertion changes to be expressed even if the
underlying source only publishes the link set or data assertions as monolithic graphs.
        </p>
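        <p>Modelling named graphs as sets of triples, the suggested combinators can be sketched as follows. This is a simplified illustration; the graph names and triples are invented for the example.

```python
# Named graphs modelled as sets of (subject, predicate, object) triples;
# the combinators let an agent publish its assumptions as deltas over
# monolithic source graphs.

def union(*graphs):
    """Combine several named graphs into one view."""
    result = set()
    for g in graphs:
        result |= g
    return result

def difference(graph, retractions):
    """Drop specific assertions - per-link / per-assertion retraction."""
    return graph - retractions

def replace(graph, retractions, additions):
    """Retract some assertions and substitute corrected ones."""
    return (graph - retractions) | additions

source = {("ex:London", "ex:population", 8_500_000),
          ("ex:London", "ex:country", "ex:UK")}
trusted = replace(source,
                  {("ex:London", "ex:population", 8_500_000)},
                  {("ex:London", "ex:population", 7_900_000)})
```
</p>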
        <p>5 Discussion</p>
        <p>For the issues raised in section 3 we have suggested an agenda for addressing some of
them. The link and imprecise value vocabularies enable publication of link
uncertainty (3.1) and value ambiguity (3.2) information in linked data sets. The
vocabularies themselves would not remove the uncertainties, nor the problems of
estimating them. However, simply having a means to publish this information is
already a step forward. The suggested graph combinators would enable an agent to
make and publish more selective data combinations, based on its interpretation of link
strengths and data values. This does not solve the problems of deciding which parts of
which sources to trust, but it does enable more effective sharing of such decisions.
References</p>
        <p>Author Index</p>
        <p>Ben Yaghlane, Boutheina . . . . . . . . . . . . 77
Carvalho, Rommel N. . . . . . . . . . . . . . . . . 3
Cozman, Fabio Gagliardi . . . . . . . . . . . . 63
da Costa, Paulo C. G. . . . . . . . . . . . . . . . . . 3
d’Amato, Claudia . . . . . . . . . . . . . . . 15, 27
Esposito, Floriana . . . . . . . . . . . . . . . . . . 27
Essaid, Amira . . . . . . . . . . . . . . . . . . . . . . 77
Fanizzi, Nicola . . . . . . . . . . . . . . . . . . 15, 27
Fazzinga, Bettina . . . . . . . . . . . . . . . . . . . 15
Gajderowicz, Bart . . . . . . . . . . . . . . . . . . 39
Gottlob, Georg . . . . . . . . . . . . . . . . . . . . . 15
Ladeira, Marcelo . . . . . . . . . . . . . . . . . . . . 3
Laskey, Kathryn B. . . . . . . . . . . . . . . . 3, 51
Lukasiewicz, Thomas . . . . . . . . . . . . . . . 15
Majidian, Andrei . . . . . . . . . . . . . . . . . . . 77
Martin, Trevor . . . . . . . . . . . . . . . . . . . . . . 77
Matsumoto, Shou . . . . . . . . . . . . . . . . . . . . 3
Ochoa Luna, José Eduardo . . . . . . . . . . 63
Reynolds, Dave . . . . . . . . . . . . . . . . . . . . . 81
Sadeghian, Alireza . . . . . . . . . . . . . . . . . . 39
Santos, Laécio L. . . . . . . . . . . . . . . . . . . . . 3
Siyao, Zheng . . . . . . . . . . . . . . . . . . . . . . . 77</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Lisi</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Esposito</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Foundations of onto-relational learning</article-title>
          .
          <source>In: ILP '08: Proceedings of the 18th International Conference on Inductive Logic Programming</source>
          , Berlin, Heidelberg, Springer-Verlag (
          <year>2008</year>
          )
          <fpage>158</fpage>
          -
          <lpage>175</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Iannone</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palmisano</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fanizzi</surname>
            ,
            <given-names>N.:</given-names>
          </string-name>
          <article-title>An algorithm based on counterfactuals for concept learning in the semantic web</article-title>
          .
          <source>Applied Intelligence</source>
          <volume>26</volume>
          (
          <issue>2</issue>
          ) (
          <year>2007</year>
          )
          <fpage>139</fpage>
          -
          <lpage>159</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hitzler</surname>
            ,
            <given-names>P.:</given-names>
          </string-name>
          <article-title>A refinement operator based learning algorithm for the ALC description logic</article-title>
          . In Blockeel, H.,
          <string-name>
            <surname>Shavlik</surname>
            ,
            <given-names>J.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tadepalli</surname>
          </string-name>
          , P., eds.
          <source>: ILP '07: Proceedings of the 17th International Conference on Inductive Logic Programming</source>
          . Volume
          <volume>4894</volume>
          of Lecture Notes in Computer Science, Springer (
          <year>2008</year>
          )
          <fpage>147</fpage>
          -
          <lpage>160</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Muggleton</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>Inductive Logic Programming</article-title>
          .
          <string-name>
            <surname>McGraw-Hill</surname>
          </string-name>
          , New York (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Cozman</surname>
            ,
            <given-names>F.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Polastro</surname>
          </string-name>
          , R.B.:
          <article-title>Loopy propagation in a probabilistic description logic</article-title>
          .
          <source>In: SUM '08: Proceedings of the 2nd International Conference on Scalable Uncertainty Management</source>
          , Berlin, Heidelberg, Springer-Verlag (
          <year>2008</year>
          )
          <fpage>120</fpage>
          -
          <lpage>133</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Getoor</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taskar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning)</article-title>
          . The MIT Press (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. de Campos,
          <string-name>
            <given-names>C.P.</given-names>
            ,
            <surname>Cozman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.G.</given-names>
            ,
            <surname>Ochoa-Luna</surname>
          </string-name>
          ,
          <string-name>
            <surname>J.E.</surname>
          </string-name>
          :
          <article-title>Assembling a consistent set of sentences in relational probabilistic logic with stochastic independence</article-title>
          .
          <source>Journal of Applied Logic</source>
          <volume>7</volume>
          (
          <year>2009</year>
          )
          <fpage>137</fpage>
          -
          <lpage>154</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Hailperin</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Sentential Probability Logic</article-title>
          . Lehigh University Press, Bethlehem, United States (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Cozman</surname>
            ,
            <given-names>F.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Polastro</surname>
          </string-name>
          , R.:
          <article-title>Complexity analysis and variational inference for interpretation-based probabilistic description logics</article-title>
          .
          <source>In: Conference on Uncertainty in Artificial Intelligence</source>
          . (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Quinlan</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mostow</surname>
          </string-name>
          , J.:
          <article-title>Learning logical definitions from relations</article-title>
          .
          <source>In: Machine Learning</source>
          . (
          <year>1990</year>
          )
          <fpage>239</fpage>
          -
          <lpage>266</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Pearl</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference</article-title>
          . Morgan Kaufmann, San Mateo (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Vomlel</surname>
          </string-name>
          , J.:
          <article-title>Noisy-OR classifier: Research articles</article-title>
          .
          <source>Int. J. Intell. Syst.</source>
          .
          <volume>21</volume>
          (
          <issue>3</issue>
          ) (
          <year>2006</year>
          )
          <fpage>381</fpage>
          -
          <lpage>398</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Baader</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nutt</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Basic description logics</article-title>
          .
          <source>In: Description Logic Handbook</source>
          . Cambridge University Press (
          <year>2002</year>
          )
          <fpage>47</fpage>
          -
          <lpage>100</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Fanizzi</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , d'Amato,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Esposito</surname>
          </string-name>
          ,
          <string-name>
            <surname>F.</surname>
          </string-name>
          :
          <article-title>DL-FOIL concept learning in description logics</article-title>
          .
          <source>In: ILP '08: Proceedings of the 18th International Conference on Inductive Logic Programming</source>
          , Berlin, Heidelberg, Springer-Verlag (
          <year>2008</year>
          )
          <fpage>107</fpage>
          -
          <lpage>121</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15. Mitchell, T.:
          <article-title>Machine Learning</article-title>
          .
          <source>McGraw-Hill</source>
          , New York (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Raedt</surname>
            ,
            <given-names>L.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kersting</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Probabilistic inductive logic programming</article-title>
          .
          <source>In: Probabilistic ILP - LNAI 4911</source>
          . Springer-Verlag Berlin (
          <year>2008</year>
          )
          <fpage>1</fpage>
          -
          <lpage>27</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Muggleton</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raedt</surname>
            ,
            <given-names>L.D.:</given-names>
          </string-name>
          <article-title>Inductive logic programming: Theory and methods</article-title>
          .
          <source>Journal of Logic Programming 19-20</source>
          (
          <year>1994</year>
          )
          <fpage>629</fpage>
          -
          <lpage>679</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Kietz</surname>
            ,
            <given-names>J.U.</given-names>
          </string-name>
          :
          <article-title>Learnability of description logic programs</article-title>
          .
          <source>In: Inductive Logic Programming</source>
          , Springer (
          <year>2002</year>
          )
          <fpage>117</fpage>
          -
          <lpage>132</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Rouveirol</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ventos</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Towards learning in CARIN- ALN</article-title>
          .
          <source>In: ILP '00: Proceedings of the 10th International Conference on Inductive Logic Programming</source>
          , London, UK, Springer-Verlag (
          <year>2000</year>
          )
          <fpage>191</fpage>
          -
          <lpage>208</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Donini</surname>
            ,
            <given-names>F.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lenzerini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nardi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaerf</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>AL-log: integrating Datalog and description logics</article-title>
          .
          <source>Journal of Intelligent and Cooperative Information Systems</source>
          <volume>10</volume>
          (
          <year>1998</year>
          )
          <fpage>227</fpage>
          -
          <lpage>252</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Rosati</surname>
            ,
            <given-names>R.:</given-names>
          </string-name>
          <article-title>DL+log: Tight integration of description logics and disjunctive datalog</article-title>
          .
          <source>In: KR</source>
          . (
          <year>2006</year>
          )
          <fpage>68</fpage>
          -
          <lpage>78</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Cohen</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hirsh</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Learning the CLASSIC description logic: Theoretical and experimental results</article-title>
          .
          <source>In: (KR94): Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference</source>
          , Morgan Kaufmann (
          <year>1994</year>
          )
          <fpage>121</fpage>
          -
          <lpage>133</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Badea</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nienhuys-Cheng</surname>
            ,
            <given-names>S.H.</given-names>
          </string-name>
          :
          <article-title>A refinement operator for description logics</article-title>
          .
          <source>In: ILP '00: Proceedings of the 10th International Conference on Inductive Logic Programming</source>
          , London, UK, Springer-Verlag (
          <year>2000</year>
          )
          <fpage>40</fpage>
          -
          <lpage>59</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Hybrid learning of ontology classes</article-title>
          .
          <source>In: Proceedings of the 5th International Conference on Machine Learning and Data Mining</source>
          . Volume
          <volume>4571</volume>
          of Lecture Notes in Computer Science., Springer (
          <year>2007</year>
          )
          <fpage>883</fpage>
          -
          <lpage>898</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Inuzuka</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kamo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ishii</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seki</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Itoh</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Top-down induction of logic programs from incomplete samples</article-title>
          .
          <source>In: ILP '96: 6th International Workshop</source>
          . Volume 1314 of LNAI.,
          <source>Springer</source>
          (
          <year>1997</year>
          )
          <fpage>265</fpage>
          -
          <lpage>284</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Landwehr</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kersting</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Raedt</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Integrating Naïve Bayes and FOIL</article-title>
          .
          <source>J. Mach. Learn. Res</source>
          .
          <volume>8</volume>
          (
          <year>2007</year>
          )
          <fpage>481</fpage>
          -
          <lpage>507</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          1.
          <string-name>
            <surname>Ben Yaghlane</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Uncertainty representation and reasoning in directed evidential networks</article-title>
          ,
          <source>PhD thesis</source>
          , Institut Supérieur de Gestion de Tunis, Tunisia,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          2.
          <string-name>
            <surname>Ding</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>BayesOWL: A Probabilistic Framework for Semantic Web</article-title>
          ,
          <source>PhD thesis</source>
          , University of Maryland, Baltimore County (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          3.
          <string-name>
            <surname>Fukushige</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Representing probabilistic relations in RDF</article-title>
          .
          <source>Proc. of the Workshop on URSW at the 4th ISWC</source>
          , Galway, Ireland, (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          4.
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Extending OWL by fuzzy description logic</article-title>
          ,
          <source>Proc. of the 17th IEEE ICTAI'05</source>
          , Hong Kong
          , China,
          <fpage>562</fpage>
          -
          <lpage>567</lpage>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          5.
          <string-name>
            <surname>Shafer</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <source>A Mathematical Theory of Evidence</source>
          , Princeton University Press (
          <year>1976</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          6.
          <string-name>
            <surname>Stoilos</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stamou</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tzouvaras</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horrocks</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Fuzzy OWL: Uncertainty and the Semantic Web</article-title>
          ,
          <source>Proc. of the Inter. Workshop on OWL-ED05</source>
          , Galway, Ireland, (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          7.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calmet</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>OntoBayes: An Ontology-Driven Uncertainty Model</article-title>
          .
          <source>Proc. of CIMCA/IAWTIC</source>
          , Washington, DC, USA. IEEE Computer Society,
          <fpage>457</fpage>
          -
          <lpage>463</lpage>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          1.
          <string-name>
            <surname>King</surname>
            ,
            <given-names>R.D.</given-names>
          </string-name>
          , et al.,
          <article-title>Functional genomic hypothesis generation and experimentation by a robot scientist</article-title>
          .
          <source>Nature</source>
          ,
          <year>2004</year>
          .
          <volume>427</volume>
          (
          <issue>6971</issue>
          ): p.
          <fpage>247</fpage>
          -
          <lpage>251</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          2.
          <string-name>
            <surname>Koestler</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <source>The Act of Creation</source>
          .
          <year>1964</year>
          : Macmillan. 751 pp.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          3.
          <string-name>
            <surname>Sherwood</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>SilverBullet Machine: Guide to Creativity</article-title>
          ,
          <year>2009</year>
          , www.silverbulletmachine.com
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          4.
          <string-name>
            <surname>Sherwood</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <article-title>Koestler's Law: The Act of Discovering Creativity-And How to Apply It in Your Law Practice</article-title>
          .
          <source>Law Practice</source>
          ,
          <year>2006</year>
          .
          <volume>32</volume>
          (
          <issue>8</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          5.
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>T.P.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Azvine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <article-title>Granular Association Rules for Multiple Taxonomies: A Mass Assignment Approach</article-title>
          .
          <source>In: Uncertainty Reasoning for the Semantic Web I</source>
          , M. Nickles et al., Eds., Springer (2008).
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          6.
          <string-name>
            <surname>Berners-Lee</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hendler</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Lassila</surname>
          </string-name>
          :
          <article-title>The Semantic Web</article-title>
          .
          <source>Scientific American</source>
          (
          <year>2001</year>
          )
          <fpage>28</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ganter</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wille</surname>
          </string-name>
          ,
          <source>Formal Concept Analysis: Mathematical Foundations</source>
          .
          <year>1998</year>
          : Springer.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          8.
          <string-name>
            <surname>Priss</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <source>Formal Concept Analysis in Information Science.</source>
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          9.
          <string-name>
            <surname>Majidian</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          and
          <string-name>
            <given-names>T.P.</given-names>
            <surname>Martin</surname>
          </string-name>
          .
          <article-title>Extracting Taxonomies from Data - a Case Study using Fuzzy FCA</article-title>
          .
          <source>in Web Intelligence-09</source>
          .
          <year>2009</year>
          . Milan, Italy: IEEE Computing.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          1.
          <string-name>
            <surname>Laskey</surname>
            ,
            <given-names>K.J.</given-names>
          </string-name>
          , et al.:
          <source>Uncertainty Reasoning for the World Wide Web, W3C Incubator Group Report, 31 March</source>
          <year>2008</year>
          . http://www.w3.org/2005/Incubator/urw3/XGR-urw3/
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          2.
          <string-name>
            <surname>Berners-Lee</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Linked Data</article-title>
          . http://www.w3.org/DesignIssues/LinkedData.html (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>3. http://linkeddata.org/</mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>4. http://linkeddata.org/data-sets</mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kobilarov</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , et al.:
          <article-title>Media Meets Semantic Web --- How the BBC Uses DBpedia and Linked Data to Make Connections</article-title>
          .
          <source>Proc. of the 6th European Semantic Web Conference on the Semantic Web: Research and Applications (Heraklion</source>
          , Crete, Greece).
          <source>Lecture Notes In Computer Science</source>
          , vol.
          <volume>5554</volume>
          . Springer-Verlag, Berlin, Heidelberg,
          <fpage>723</fpage>
          -
          <lpage>737</lpage>
          . (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          6.
          <string-name>
            <surname>Elmagarmid</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ipeirotis</surname>
            ,
            <given-names>P.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Verykios</surname>
            ,
            <given-names>V.S.</given-names>
          </string-name>
          :
          <article-title>Duplicate Record Detection: A Survey</article-title>
          .
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>19</volume>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          7.
          <string-name>
            <surname>Carroll</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bizer</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hayes</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Stickler</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Named graphs, provenance and trust</article-title>
          .
          <source>In: Proceedings of the 14th International Conference on World Wide Web (Chiba</source>
          , Japan, May 10-14, 2005).
          <source>WWW '05. ACM</source>
          , New York, NY,
          <fpage>613</fpage>
          -
          <lpage>622</lpage>
          . (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          8.
          <string-name>
            <surname>Alexander</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cyganiak</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hausenblas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>voiD Guide: Using the Vocabulary of Interlinked Datasets</article-title>
          . http://rdfs.org/ns/void-guide (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kruse</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwecke</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Heinsohn</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Uncertainty and Vagueness in Knowledge Based Systems</article-title>
          . Springer-Verlag New York, Inc. (
          <year>1991</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>