<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Introducing Fuzzy Trust for Managing Belief Conflict over Semantic Web Data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Miklos Nagy</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria Vargas-Vera</string-name>
          <email>m.vargas-vera@open.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enrico Motta</string-name>
          <email>e.motta@open.ac.uk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computing Department The Open University Walton Hall</institution>
          ,
          <addr-line>Milton Keynes MK7 6AA</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Knowledge Media Institute (KMi) The Open University Walton Hall</institution>
          ,
          <addr-line>Milton Keynes MK7 6AA</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Interpreting Semantic Web data by different human experts can end up in scenarios where each expert arrives at different and conflicting ideas about what a concept means and how it relates to other concepts. Software agents that operate on the Semantic Web have to deal with similar scenarios, in which interpretations of the Semantic Web data that describe heterogeneous sources become contradictory. One such application area of the Semantic Web is ontology mapping, where different similarity measures have to be combined into a more reliable and coherent view, which can easily become unreliable if the conflicting beliefs in similarities are not managed effectively between the different agents. In this paper we propose a solution for managing this conflict by introducing trust between the mapping agents, based on the fuzzy voting model.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
Assessing the performance and quality of different ontology mapping algorithms
that operate in the Semantic Web environment has gradually evolved in
recent years. One remarkable effort is the Ontology Alignment
Evaluation Initiative (http://oaei.ontologymatching.org/), which makes it possible to evaluate and compare the
mapping quality of different systems. However, it also points out the difficulty of
evaluating ontologies with a large number of concepts; in the library track, for example,
due to the size of the vocabulary only a sample evaluation is carried out by a
number of domain experts. Once each expert has assessed the correctness of the
sampled mappings, their assessments are discussed and they produce a final
assessment that reflects their collective judgment. Our ontology mapping algorithm
DSSim [
        <xref ref-type="bibr" rid="ref1">1</xref>
] tries to mimic the aforementioned process, using different software
agents as experts to evaluate and use beliefs over similarities of different
concepts in the source ontologies. Our mapping agents use WordNet as background
knowledge to create a conceptual context for the words that are extracted from
the ontologies, and employ different syntactic and semantic similarities to create
their subjective beliefs over the correctness of the mapping. DSSim addresses
the uncertain nature of ontology mapping by considering the different similarity
measures as subjective probabilities for the correctness of the mapping. It employs
the Dempster-Shafer theory of evidence in order to create and combine beliefs
that have been produced by the different similarity algorithms. For a detailed
description of the DSSim algorithm one can refer to [
        <xref ref-type="bibr" rid="ref2">2</xref>
]. Belief combination
has its advantages compared to other combination methods. However,
belief combination has received justified criticism from the research community:
it becomes problematic when agents have conflicting beliefs
over the solution. The main contribution of this paper is a novel trust
management approach for resolving conflict between beliefs in similarities, which is the
core component of the DSSim ontology mapping system.
      </p>
<p>The paper is organized as follows. Section 2 provides the description of the
problem and its context. Section 3 describes the voting model and how it is
applied for determining trust during ontology mapping. In Section 4 we
present our experiments, which have been carried out with the benchmarks of the
Ontology Alignment Evaluation Initiative. Section 5 gives an overview of related work.
Finally, Section 6 describes our future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Problem description</title>
<p>In the context of the Semantic Web trust can have different meanings; therefore,
before we describe the problem, let us define the basic notions of our argument.
Definition 1 Trust: One mapping agent’s measurable belief in the competence
of the other agents’ beliefs over the established similarities.</p>
<p>Definition 2 Content-related trust: A dynamic trust measure that depends
on the actual vocabulary of the mappings, which has been extracted from the
ontologies and can change from mapping to mapping.</p>
<p>Definition 3 Belief: The state in which a software agent holds a proposition or
premise over a possible mapping of a selected concept pair combination to be true.
The numerical representation of a belief is a value in the interval [0, 1].</p>
<p>If we assume that in the Semantic Web environment it is not possible to
deduce an absolute truth from the available sources, then we need to evaluate
content-dependent trust levels in each application that processes information
on the Semantic Web, e.g. how a particular piece of information coming from one
source compares with the same or similar information coming from other sources.</p>
      <p>
The existing approaches that address the problem of the
trustworthiness of the available data on the Semantic Web are predominantly reputation based, e.g.
using digital signatures that state who the publisher of the ontology is.
However, another and probably the most challenging aspect of trust appears when
we process the available information on the Semantic Web and discover
contradictory information among the evidence. Consider an example from ontology
mapping. When we assess similarity between two terms, ontology mapping can
use different linguistic and semantic [
        <xref ref-type="bibr" rid="ref3">3</xref>
] information in order to determine the
similarity level, e.g. background knowledge or concept hierarchy. In practice, any
similarity algorithm will produce good and bad mappings for the same domain
depending on the actual interpretation of the terms in the ontologies, e.g. when
using different background knowledge descriptions or class hierarchies. In order to
overcome this shortcoming, the combination of different similarity measures is
required. In recent years a number of methods and strategies have been
proposed [
        <xref ref-type="bibr" rid="ref3">3</xref>
] to combine these similarities. In practice, considering the overall
results, these combination methods perform well under different circumstances,
except when contradictory evidence occurs during the combination process.
      </p>
<p>In our ontology mapping framework different agents assess similarities, and
their beliefs on the similarities need to be combined into a more coherent
result. However, in practice these individual beliefs are often conflicting. A conflict
between two beliefs in Dempster-Shafer theory can be interpreted qualitatively
as one source strongly supporting one hypothesis and the other strongly supporting
another hypothesis, where the two hypotheses are not compatible. In this
scenario, applying Dempster’s combination rule to conflicting beliefs can lead to an
almost impossible choice, because the combination rule strongly emphasizes the
agreement between multiple sources and ignores all the conflicting evidence.</p>
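<p>The effect described above is easy to reproduce. The following sketch (our own illustration, not DSSim code; the hypothesis names and mass values are chosen for the example) implements Dempster’s rule for two mass functions and shows how, under strong conflict, the normalisation step hands almost all mass to a hypothesis that neither source meaningfully supports:</p>
<preformat>
```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions (dicts mapping
    frozenset hypotheses to masses) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass on incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    # Normalisation redistributes the conflicting mass; this is exactly
    # what makes the rule unreliable for strongly conflicting sources.
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

A, B, C = frozenset({"A"}), frozenset({"B"}), frozenset({"C"})
m1 = {A: 0.9, C: 0.1}  # agent 1 strongly supports hypothesis A
m2 = {B: 0.9, C: 0.1}  # agent 2 strongly supports hypothesis B
print(dempster_combine(m1, m2))  # all mass ends up on C
```
</preformat>
<p>Here the combined mass of the barely supported hypothesis C is 1, even though both agents assigned it only 0.1, which illustrates why the conflict needs to be resolved before combination.</p>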
<p>We argue that the problem of contradictions can only be handled case
by case, by introducing trust for the similarity measures: a trust which applies only
to the selected mapping and can change from mapping to mapping during the
process, depending on the available evidence. We propose evaluating trust in the
different beliefs in a way that does not depend on the credentials of the ontology owner
but purely represents the trust in a proposed subjective belief that has been
established by using different similarity algorithms.</p>
    </sec>
    <sec id="sec-3">
      <title>Fuzzy trust management for conflicting belief combination</title>
<p>In ontology mapping, the conflicting results of the different beliefs in
similarity can be resolved if the mapping algorithm can produce an agreed solution,
even though the individual opinions about the available alternatives may vary.
We propose a solution for reaching this agreement by evaluating fuzzy trust
between established beliefs through voting, which is a general method of
reconciling differences. Voting is a mechanism whereby the opinions from a set of votes
are evaluated in order to select the alternatives that best represent the
collective preferences. Unfortunately, deriving binary trust (trustful or not trustful)
from the difference of belief functions is not straightforward, since the different
voters express their opinions as subjective probabilities over the similarities. For a
particular mapping this always involves a certain degree of vagueness; hence the
threshold between trust and distrust cannot be set definitively for all cases that
can occur during the process. Additionally, there is no clear transition between
characterising a particular belief as highly or less trustful.</p>
<p>The fuzzy model is based on the concept of linguistic or “fuzzy” variables. These
variables correspond to linguistic objects or words, rather than numbers, e.g.
trust or belief conflict. The fuzzy variables themselves are described by adjectives that
modify the variable (e.g. “high” trust, “small” trust). The membership function is
a graphical representation of the magnitude of participation of each input. It
associates a weighting with each of the inputs that are processed, defines
functional overlap between inputs, and ultimately determines an output response.
The membership function can be defined differently and can take different shapes
depending on the problem it has to represent. Typical membership functions are
trapezoidal, triangular or exponential. The selection of our membership function
is not arbitrary but can be derived directly from the fact that our input, the belief
difference, has to produce the trust level as an output. Each input has to
produce an output, which requires trapezoidal and overlapping membership functions.
Therefore our argument is that the trust membership value, which is expressed
by the different voters, can be modelled properly by using a fuzzy representation, as
depicted in Fig. 1.</p>
<p>Imagine the scenario where, before each agent evaluates the trust in another
agent’s belief over the correctness of the mapping, it calculates the difference
between its own and the other agent’s belief. The belief functions for each agent
are derived from different similarity measures; therefore the actual values might
differ from agent to agent. Depending on the difference, it can choose among the
available trust levels, e.g. if one agent’s measurable belief over the similarity is 0.85
and another agent’s belief is 0.65, then the difference in beliefs is 0.2, which
can lead to high and medium trust levels. We model these trust levels as fuzzy
membership functions.</p>
<p>In fuzzy logic the membership function µ(x) is defined on the universe of
discourse U and represents a particular input value as a member of a fuzzy set, i.e.
µ(x) is a curve that defines how each point in U is mapped to a membership
value (or degree of membership) between 0 and 1.</p>
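<p>A trapezoidal membership function of the kind described here can be written directly from its four vertices. In the sketch below the vertex values for the three trust levels are our own illustrative choices, not the ones in Table 1; note that a small belief difference yields high trust:</p>
<preformat>
```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership on U = [0, 1]: rises from a to b,
    equals 1 between b and c, and falls from c to d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Three overlapping trust levels for one voter as functions of the
# belief difference (vertices are illustrative, not Table 1 data):
high   = lambda x: trapezoid(x, -0.01, 0.0, 0.15, 0.35)
medium = lambda x: trapezoid(x, 0.15, 0.3, 0.5, 0.7)
low    = lambda x: trapezoid(x, 0.5, 0.65, 1.0, 1.01)

x = 0.2  # difference between two agents' beliefs
print(high(x), medium(x), low(x))  # high and medium non-zero, low is 0
```
</preformat>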
<p>For representing trust in beliefs over similarities we have defined three
overlapping trapezoidal membership functions, which represent high, medium and
low trust in the beliefs over concept and property similarities in our ontology
mapping system.</p>
      <p>
        Fuzzy voting model
The fuzzy voting model was developed by Baldwin [
        <xref ref-type="bibr" rid="ref4">4</xref>
] and has been used in
fuzzy logic applications. However, to our knowledge it has not been introduced
in the context of trust management on the Semantic Web. In this section, we
briefly introduce the fuzzy voting model theory using a simple example of
10 voters voting against or in favour of the trustfulness of another agent’s
belief over the correctness of a mapping. In our ontology mapping framework each
mapping agent can request a number of voting agents to help assess how
trustful the other mapping agent’s belief is.
      </p>
      <p>
        According to Baldwin [
        <xref ref-type="bibr" rid="ref4">4</xref>
] a linguistic variable is a quintuple (L, T(L), U, G, µ),
in which L is the name of the variable, T(L) is the term set of labels or words
(i.e. the linguistic values), U is a universe of discourse, G is a syntactic rule and
µ is a semantic rule or membership function. We also assume for this work that
G corresponds to a null syntactic rule, so that T(L) consists of a finite set of
words. A formalization of the fuzzy voting model can be found in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Consider the set of words { Low trust (Lt), Medium trust (Mt) and High trust
(Ht) } as labels of a linguistic variable trust with values in U = [0, 1]. Given
a set of m voters, each voter is asked to provide the subset of words
from the finite set T(L) that are appropriate as labels for the value u. The
membership value χµ(w)(u) is the proportion of voters who include the word w in
their set of labels for u.
      </p>
<p>We need to introduce more opinions into the system, i.e. we need to add the opinions
of the other agents in order to vote for the best possible outcome. Therefore we
assume for the purpose of our example that we have 10 voters (agents). Formally,
let us define</p>
      <p>
V = {A1, A2, A3, A4, A5, A6, A7, A8, A9, A10},  Θ = {Lt, Mt, Ht}
(1)
The number of voters can differ; however, assuming 10 voters ensures that
1. The overlap between the membership functions can be proportionally
distributed on the possible scale of the belief difference [0..1]
2. The workload of the voters does not slow the mapping process down
Let us start illustrating the previous ideas with a small example. By
definition, consider our linguistic variable L as TRUST and T(L) the set of
linguistic values T(L) = (Low trust, Medium trust, High trust). The universe of
discourse is U, which is defined as U = [0, 1]. Then, we define the fuzzy sets
µ(Low trust), µ(Medium trust) and µ(High trust) for the voters, where each
voter has different overlapping trapezoidal membership functions, as described
in Table 1.
The data in Table 1 are illustrative only, for the purpose of the example
presented in this paper. The differences in the membership functions,
represented by the different vertices of the trapezoids in Table 1, ensure that
voters can introduce different opinions as they pick the possible trust levels for
the same difference in belief.
      </p>
<p>The possible set of trust levels L = TRUST is defined by Table 2. Note that
in the table we use a short notation: Lt means Low trust, Mt means Medium trust
and Ht means High trust. Once the fuzzy sets (membership functions) have been
defined, the system is ready to assess the trust memberships for the input values.
Based on the difference of beliefs in similarities, the different voters select
the words they view as appropriate for the difference of belief. Assuming that
the difference in beliefs (x) is 0.67 (one agent’s belief over the similarities is 0.85
and another agent’s belief is 0.18), the voters select the labels representing the
trust level as described in Table 2: all ten voters include Lt, six voters include Mt,
and three voters include Ht. Note that each voter has its own membership
function, where the level of overlap is different for each voter. As an example, the
belief difference 0.67 can represent high, medium and low trust levels for the first
voter (A1), while it can represent only low trust for the last voter (A10).</p>
<p>Then we compute the membership value for each of the elements of the set T(L):
χµ(Low trust)(u) = 1, χµ(Medium trust)(u) = 0.6, χµ(High trust)(u) = 0.3,
which gives the fuzzy set
L = Low trust / 1 + Medium trust / 0.6 + High trust / 0.3</p>
      <sec id="sec-3-2">
        <title>High trust</title>
        <p>+
0.3
A value x(actual belief difference between two agents) is presented and voters
randomly pick exactly one word from a finite set to label x as depicted in Table
3. The number of voters will ensure that a realistic overall response will prevail
during the process.</p>
        <p>Taken as a function of x these probabilities form probability functions. They
should therefore satisfy:
which gives a probability distribution on words:
function where the level of overlap is different for each voter. As an example the
belief difference 0.67 can represent high, medium and low trust level for the first
voter(A1) and it can only represent low trust for the last voter(A10).
Then we compute the membership value for each of the elements on set T (L).
(2)
(3)
(4)
(5)
(6)
(7)
(8)
(9)
Ht Mt Lt Lt Mt Mt Lt Lt Lt Lt
! P r(L = w|x) = 1</p>
        <p>w ∈ T (L)
! P r(L = Low trust|x) = 0.6
! P r(L = M edium trust|x) = 0.3</p>
        <p>! P r(L = High trust|x) = 0.1</p>
<p>As a result of the voting we can conclude that, given the difference in belief
x = 0.67, the combination should not consider this belief in the similarity function,
since based on its difference compared to the other beliefs it turns out to be a
distrustful assessment. The aforementioned process is then repeated as many
times as there are different beliefs for the similarity, i.e. as many as there are
different similarity measures in the ontology mapping system.</p>
<p>Introducing trust into ontology mapping.
The problem of trustworthiness in the context of ontology mapping can be
represented in different ways. In general, trust issues on the Semantic Web are
associated with the source of the information, i.e. who said what and when, and
what credentials they had to say it. From this point of view the publisher of
the ontology could greatly influence the outcome of the trust evaluation, and the
mapping process can prefer mappings that come from a more “trustful” source.</p>
<p>However, we believe that in order to evaluate trust it is better to look into
the processes that map these ontologies, because from the similarity point of
view it is more important to see how the information in the ontologies is
“conceived” by our algorithms than who created them, e.g. whether our
algorithms exploit all the available information in the ontologies or just part of
it. The reason why we propose such trust evaluation is that ontologies on
the Semantic Web usually represent a particular domain and support a specific
need. Therefore even if two ontologies describe the same concepts and
properties, their relations to each other can differ depending on the conceptualisations of
their creators, which are independent of the organisations they belong to.
In our ontology mapping method we propose that the trust in the provided
similarity measures, which is assessed between the ontology entities, is
associated with the actual understanding of the mapped entities, which differs from
case to case, e.g. a similarity measure can be trusted in one case but not
trusted in another case during the same process. Our mapping algorithm, which
incorporates trust management into the process, is described by Algorithm 1.</p>
<p>Input: Similarity belief matrices Sn×m = {S1, .., Sk}.
Output: Mapping candidates.</p>
<preformat>
1  for i=1 to n do
2    BeliefVectors ← GetBeliefVectors(S[i, 1 − m]) ;
3    Concepts ← GetBestBeliefs(BeliefVectors) ;
4    Scenario ← CreateScenario(Concepts) ;
5    for j=1 to size(Concepts) do
6      Scenario ← AddEvidences(Concepts) ;
7    end
8    if Evidences are contradictory then
9      for count=1 to numberOf(Experts) do
10       Voters ← CreateVoters(10) ;
11       TrustValues ← VoteTrustMembership(Evidences) ;
12       ProbabilityDistribution ← CalculateTrustProbability(TrustValues) ;
13       Evidences ← SelectTrustedEvidences(ProbabilityDistribution) ;
14     end
15   end
16   Scenario ← CombineBeliefs(Evidences) ;
17   MappingList ← GetMappings(Scenario) ;
18 end
</preformat>
<p>Algorithm 1: Belief combination with trust</p>
<p>
Our mapping algorithm receives the similarity matrices (both syntactic and
semantic) as input and produces the possible mappings as output. The
similarity matrices represent the assigned similarities between all concepts in
ontology 1 and ontology 2. Our mapping algorithm iterates through all concepts in
ontology 1 and selects the best possible candidate terms from ontology 2, which
are represented as a vector of best beliefs (step 2). Once we have selected the best
beliefs, we get the terms that correspond to these beliefs and create a mapping
scenario. This scenario contains all possible mapping pairs between the selected
term in ontology 1 and the possible terms from ontology 2 (steps 3 and 4). Once we
have built our mapping scenario, we start adding evidence from the similarity
matrices (step 6). These evidences might contradict each other, because different
similarity algorithms can assign different similarity measures to the same mapping
candidates. If the evidences are contradictory, we need to evaluate which measure,
i.e. which mapping agent’s belief, we trust in this particular scenario (steps 8-15). The
trust evaluation (see details in Section 3.1) is invoked, which invalidates the
evidences (agent beliefs) that cannot be trusted in this scenario. Once the conflict
resolution routine is finished, the valid beliefs can be combined and the possible
mapping candidates can be selected from the scenario.</p>
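<p>To make the conflict-resolution step more concrete, the sketch below shows one possible realisation of steps 8-15: the contradiction test, the averaged belief difference and the toy voting stand-in are all our own assumptions, not DSSim internals:</p>
<preformat>
```python
def contradictory(beliefs, threshold=0.5):
    """Treat evidences as contradictory when two agents' beliefs differ
    by more than a threshold (an assumption for this sketch)."""
    return max(beliefs) - min(beliefs) > threshold

def select_trusted(beliefs, trust_of):
    """Keep each belief whose trust distribution does not classify it as
    predominantly low trust; trust_of maps a belief difference to a
    probability distribution over the trust labels."""
    trusted = []
    for b in beliefs:
        diffs = [abs(b - other) for other in beliefs if other != b]
        avg = sum(diffs) / len(diffs) if diffs else 0.0
        if trust_of(avg).get("Lt", 0.0) < 0.5:  # not mostly distrusted
            trusted.append(b)
    return trusted

beliefs = [0.85, 0.80, 0.18]  # the third agent is the outlier
trust_of = lambda d: {"Lt": 1.0 if d > 0.6 else 0.0}  # toy voting stand-in
print(contradictory(beliefs))             # True
print(select_trusted(beliefs, trust_of))  # the outlier 0.18 is rejected
```
</preformat>
<p>Only the surviving beliefs would then be passed to Dempster’s combination rule in step 16.</p>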
<p>The advantage of our proposed solution is that the evaluated trust is
independent of the source ontologies themselves and can change depending on the
available information in the context.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Empirical evaluation</title>
<p>The evaluation was measured with recall and precision, which are useful
measures that have a fixed range and are meaningful from the mapping point of view.
Before we present our evaluation, let us discuss what improvements one can
expect considering mapping precision or recall. Most people would expect that
if the results can be doubled, i.e. increased by 100%, then this is a remarkable
achievement. This might be the case for anything but ontology mapping. In
reality, researchers are trying to push the limits of the existing matching algorithms,
and anything between 10% and 30% is considered a good improvement. The
objective is always to make improvements, preferably in both precision and recall.</p>
<p>We have carried out experiments with the benchmark ontologies of the
Ontology Alignment Evaluation Initiative (OAEI, http://oaei.ontologymatching.org/),
which is an international initiative
that has been set up for evaluating ontology matching algorithms. The
experiments were carried out to assess how trust management influences the results of
our mapping algorithm. Our main objective was to evaluate the impact of
establishing trust before combining beliefs in similarities between concepts and properties
in the ontologies. The OAEI benchmark contains tests which were systematically
generated, starting from some reference ontology and discarding a number of
features in order to evaluate how the algorithm behaves when this information
is lacking. The bibliographic reference ontology (different classifications of
publications) contained 33 named classes, 24 object properties and 40 data properties.
Furthermore, each generated ontology was aligned with the reference ontology. The
benchmark tests were created and grouped by the following criteria:
– Group 1xx: simple tests, such as comparing the reference ontology with itself,
with another irrelevant ontology, or with the same ontology in its restriction to
OWL-Lite
– Group 2xx: systematic tests that were obtained by discarding some features
from the reference ontology, e.g. names of entities replaced by random strings
or synonyms
– Group 3xx: four real-life ontologies of bibliographic references that were
found on the web, e.g. BibTeX/MIT, BibTeX/UMBC
As a baseline comparison we have modified our algorithm (without trust) so that it
does not evaluate trust before conflicting belief combination but just combines the
beliefs using Dempster’s combination rule. The recall and precision graphs for the
algorithm with and without trust over the whole benchmark are depicted
in Fig. 2. The experiments have shown that establishing trust allows one to reach
higher average precision and recall rates.</p>
<p>Fig. 2: (a) Recall, (b) Precision.</p>
<p>Figure 2 shows the improvement in recall and precision that we have achieved
Figure 2 shows the improvement in recall and precision that we have achieved
by applying our trust model for combining contradictory evidences. From the
precision point of view the increased recall values have not impacted the results
significantly, which is good because the objective is always the improvement of
both recall and precision together. We have measured the average improvement
for the whole benchmark test set that contains 51 ontologies. Based on the
experiments the average recall has increased by 12% and the precision is by 16%.
The relative high increase in precision compared to recall is attributed to the
fact that in some cases the precision has been increased by 100% as a
consequence of a small recall increase of 1%. This is perfectly normal because if the
recall increases from 0 to 1% and the returned mappings are all correct (which is
possible since the number of mappings are small) then the precision is increases
from 0 to 100%. Further the increase in recall and precision greatly varies from
test to test. Surprisingly the precision have decreased in some cases(5 out of 51).
The maximum decrease in precision was 7% and maximum increase was 100%.
The recalls have never decreased in any of the tests and the minimum increase
was 0.02% whereas the maximum increase was 37%.</p>
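<p>The arithmetic behind the large precision jumps is easy to verify; in the sketch below the reference alignment and the returned mapping are hypothetical:</p>
<preformat>
```python
def precision_recall(returned, correct):
    """Precision and recall for a set of returned mappings against a
    reference alignment (both sets of (entity1, entity2) pairs)."""
    if not returned:
        return 0.0, 0.0
    tp = len(returned & correct)
    return tp / len(returned), tp / len(correct)

# Hypothetical reference alignment with 100 correct mappings:
reference = {("a%d" % i, "b%d" % i) for i in range(100)}

# Returning a single mapping that happens to be correct already yields
# 100% precision at only 1% recall:
print(precision_recall({("a0", "b0")}, reference))  # (1.0, 0.01)
```
</preformat>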
<p>As mentioned in our scenario, in our ontology mapping algorithm there are a
number of mapping agents that carry out similarity assessments and hence create
belief mass assignments for the evidence. Before the belief mass functions are
combined, each mapping agent needs to calculate dynamically a trust value, which
describes how confident the particular mapping agent is about the other
mapping agent’s assessment. This dynamic trust assessment is based on the fuzzy
voting model and depends on its own and the other agents’ belief mass functions.
In our ontology mapping framework we assess trust between the mapping agents’
beliefs and determine which agent’s belief cannot be trusted, rejecting the one
that, as the result of the trust assessment, has become distrustful.</p>
    </sec>
    <sec id="sec-5">
      <title>Related work</title>
      <p>
To date, trust has not been investigated in the context of ontology mapping.
Ongoing research has mainly focused on how trust can be modelled in
the Semantic Web context [
        <xref ref-type="bibr" rid="ref6">6</xref>
] where the trust in a user’s belief in statements
supplied by any other user can be represented and combined. Existing approaches for
resolving belief conflict are based either on negotiation or on the definition of
different combination rules that consider the possibility of belief conflict. Negotiation-based
techniques are mainly proposed in the context of agent communication.
For conflicting ontology alignments an argumentation-based framework has been
proposed [
        <xref ref-type="bibr" rid="ref7">7</xref>
], which can be applied to agent communication and web services,
where the agents are committed to an ontology and try to negotiate with
other agents over the meaning of their concepts. Considering multi-agent
systems on the Web, existing trust management approaches have successfully used
fuzzy logic to represent trust between the agents from both the individual [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and
community [
        <xref ref-type="bibr" rid="ref9">9</xref>
] perspective. However, the main objective of these solutions is to
build a reputation for an agent, which can be considered in future interactions.
Considering the different variants [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] [
        <xref ref-type="bibr" rid="ref11">11</xref>
] of combination rules that consider
conflicting beliefs, a number of alternatives have been proposed. These methods
have a well-founded theoretical basis, but they all modify the combination
rule itself, and as such do not consider the process in which these
combinations take place. We believe that the conflict needs to be treated before
the combination occurs. Furthermore, our approach does not assume that any agent
is committed to a particular ontology; rather, our agents are considered as “experts”
in assessing similarities of terms in different ontologies, and they need to reach a
conclusion over conflicting beliefs in similarities.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
<p>In this paper we have shown how the fuzzy voting model can be used to
evaluate trust and determine which belief contradicts the other beliefs before
combining them into a more coherent state. We have proposed new levels of
trust in the context of ontology mapping, which is a prerequisite for any system
that makes use of information available on the Semantic Web. Our system is
flexible, because the membership functions for the voters can be changed
dynamically in order to influence the outputs according to the different similarity
measures that can be used in the mapping system. We have described initial
experimental results with the benchmarks of the Ontology Alignment Evaluation
Initiative, which demonstrate the effectiveness of our approach through the improved
recall and precision rates. There are many areas of ongoing work, our primary
focus being additional experimentation to investigate different kinds of
membership functions for the different voters and to consider the effect of a changing
number of voters and the impact on precision and recall.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Nagy</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vargas-Vera</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Motta</surname>
          </string-name>
          , E.:
          <article-title>Dssim - managing uncertainty on the semantic web</article-title>
          .
          <source>In: Proceedings of the 2nd International Workshop on Ontology Matching</source>
          . (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Nagy</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vargas-Vera</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Motta</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Multi-agent ontology mapping with uncertainty on the semantic web</article-title>
          .
          <source>In: Proceedings of the 3rd IEEE International Conference on Intelligent Computer Communication and Processing</source>
          . (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Euzenat</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shvaiko</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <source>Ontology Matching</source>
          . Springer-Verlag, Heidelberg (DE) (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Baldwin</surname>
            ,
            <given-names>J.F.</given-names>
          </string-name>
          :
          <article-title>Mass assignment fundamentals for computing with words</article-title>
          .
          <source>Volume 1566 of Selected and Invited Papers from the Workshop on Fuzzy Logic in Artificial Intelligence, Lecture Notes in Computer Science</source>
          . Springer-Verlag (
          <year>1999</year>
          )
          <fpage>22</fpage>
          -
          <lpage>44</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Lawry</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>A voting mechanism for fuzzy logic</article-title>
          .
          <source>International Journal of Approximate Reasoning</source>
          <volume>19</volume>
          (
          <year>1998</year>
          )
          <fpage>315</fpage>
          -
          <lpage>333</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Richardson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Agrawal</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Domingos</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Trust management for the semantic web</article-title>
          .
          <source>In: Proceedings of the 2nd International Semantic Web Conference</source>
          . (
          <year>2003</year>
          )
          <fpage>351</fpage>
          -
          <lpage>368</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Laera</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blacoe</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tamma</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Payne</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Euzenat</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bench-Capon</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Argumentation over ontology correspondences in mas</article-title>
          .
          <source>In: AAMAS '07: Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems</source>
          , New York, NY, USA, ACM (
          <year>2007</year>
          )
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Griffiths</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>A fuzzy approach to reasoning with trust, distrust and insufficient trust</article-title>
          .
          <source>In: Proceedings of the 10th International Workshop on Cooperative Information Agents</source>
          . (
          <year>2006</year>
          )
          <fpage>360</fpage>
          -
          <lpage>374</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Rehak</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pechoucek</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benda</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Foltyn</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Trust in coalition environment: Fuzzy number approach</article-title>
          .
          <source>In: Proceedings of The 4th International Joint Conference on Autonomous Agents and Multi Agent Systems - Workshop Trust in Agent Societies</source>
          . (
          <year>2005</year>
          )
          <fpage>119</fpage>
          -
          <lpage>131</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Yamada</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>A new combination of evidence based on compromise</article-title>
          .
          <source>Fuzzy Sets Syst</source>
          .
          <volume>159</volume>
          (
          <issue>13</issue>
          ) (
          <year>2008</year>
          )
          <fpage>1689</fpage>
          -
          <lpage>1708</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Josang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The consensus operator for combining beliefs</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>141</volume>
          (
          <issue>1</issue>
          ) (
          <year>2002</year>
          )
          <fpage>157</fpage>
          -
          <lpage>170</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>