Published in CEUR Workshop Proceedings Vol-218, https://ceur-ws.org/Vol-218/paper6.pdf (DBLP: https://dblp.org/rec/conf/semweb/NicklesC06)
   Social Contexts and the Probabilistic Fusion and
Ranking of Opinions: Towards a Social Semantics for the
                    Semantic Web*

                                    Matthias Nickles1 and Ruth Cobos2
       1
            AI/Cognition Group, Computer Science Department, Technical University of Munich,
                     D-85748 Garching b. München, Germany, nickles@cs.tum.edu
              2
                Departamento de Ingenierı́a Informática, Universidad Autónoma de Madrid,
                              28049 - Madrid, Spain, ruth.cobos@uam.es



           Abstract. In the (Semantic) Web, the existence or producibility of certain, consen-
           sually agreed or authoritative knowledge cannot be assumed, and criteria for judging
           the trustworthiness and reputation of knowledge sources may not be available. These
           issues give rise to a formalization of web information in terms of heterogeneous and
           possibly inconsistent public assertions and intentions, which provides valuable meta-
           information in contemporary application fields like open or distributed ontologies,
           social software, ranking and recommender systems, and domains with a high amount
           of controversy, such as politics and culture. As an approach towards this, we introduce
           a lean, intuitive formalism for the Semantic Web which allows for the explicit repre-
           sentation of semantic heterogeneity by means of so-called social contexts, and option-
           ally for the probabilistic aggregation and social rating of possibly uncertain or con-
           tradictory assertions. Inter alia, this makes it possible to stochastically generalize
           multiple assertions (yielding complexity reduction), and it generalizes the concept of
           folksonomies to arbitrary ontologies and description logic knowledge bases emerging
           from social choice processes.
           Keywords: Semantic Web, OWL, Uncertainty, Information Integration, Context Logic,
           Social Choice


1     Introduction
Information found in open environments like the web can usually not be treated directly as
objective, certain knowledge, nor as sincere beliefs (due to the mental opaqueness of the
autonomous information sources). Yet only very few approaches to the semantic modeling
of what could be called "opinions" or "public assertions", which are neither (real) beliefs nor
objective knowledge, have emerged, mostly in the field of distributed artificial intelligence [10,
11]. Instead, most formal approaches to knowledge representation and reasoning treat logical
inconsistencies and controversies among information sources as something to be avoided or
filtered out using criteria such as trust and provenance. In contrast, we argue that making
(meta-)knowledge about the social, heterogeneous and controversial nature of web informa-
tion explicit can be extremely useful [7], e.g., to gain a picture of the opinion landscape in
controversial domains such as politics, for subsequent decision making and conflict resolution,
for the acquisition and ranking of information from multiple, possibly dissenting sources, and
not least for tasks such as learning whom (not) to trust. Such (meta-)knowledge is especially
crucial in domains with strong viewpoint competition and difficult or impossible consensus
finding, like politics, product assessment and culture, and in current and forthcoming Seman-
tic Web applications which explicitly or implicitly support interaction among people, such as
(semantic) blogging, discussion forums, collaborative tagging and folksonomies, and social computing
*
    An extended variant of this paper is published as Technical Report FKI-254-06, AI/Cognition
    Group, Technical University of Munich, 2006.
in general. Addressing this issue, this work presents a lean approach to the formal rep-
resentation of semantic heterogeneity by means of social contexts, and to the social rating of
possibly contradictory or uncertain assertions via opinion weighting and probabilistic opinion
aggregation.


2     Modeling heterogeneous viewpoints using social contexts

2.1   Modeling social structures

Our technical approach is based on interrelating a social ontology or social knowledge base
(KB) for the description of social concepts and individuals (like persons, agents and organi-
zations, and possibly their relationships) on the one hand, with a set of possibly controversial
or uncertain statements (axioms and facts) on the other. Special terms consisting of names
from the social ontology/KB then identify so-called social contexts for the contextualization
and optionally the fusion of semantically heterogeneous statements. This amounts to a tech-
nique which makes use of a context-driven partitioning of the respective web language
semantics, analogous to the approach presented in, e.g., [2, 4].
There is no canonical social ontology to be used with our approach. Basically any ontology
could be used as long as it provides concepts and instances for communication participants
like "Author", "Publisher" or "Reader", or, most simply, "Actor". Of course, the following
approach would also work if the participants were stated indirectly in the form of the web
locations or resources they use to articulate themselves. But our approach suggests that infor-
mation sources should even then be seen as autonomous, communicating actors. The following
example ontology fragment suffices for this work:

Definition 1: Social ontology SO

Source ⊑ Actor, Addressee ⊑ Actor
Source(tina), Source(tom), Source(tim),
Addressee(tina), Addressee(tom), Addressee(tim),
CA(assertion), CA(information), CA(publicIntention),
CA(fusedInformation), CA(fusedAssertion)

    Here, Source and Addressee are the classes of the participating actors, whereby Source
can be any kind of information source, like a person, an organization, a document, or a web
service, or the holder of an intention. assertion denotes an ostensibly positive (i.e., approv-
ing) communication attitude (CA) regarding a certain statement, and at the same time the
ostensible intention to make the addressee(s) adopt a positive attitude as well (e.g., "This
product is the best buy!"). This corresponds more or less to the communication act seman-
tics we have introduced in detail in [10, 11], and to Grice's conceptualization of speech acts
as communications of intentions. information is the same as assertion, but without the in-
tention to make others approve the respective statement too (e.g., "Personally, I believe in god,
but I respect your agnosticism"). informations and assertions are also called "opinions" in
this work. publicIntention, finally, is the communication attitude of ostensibly desiring that a
statement shall become true. The attitude of requesting something from another actor would
be a subtype of publicIntention. Likewise, the attitude of denying something can simply be
substituted by the positive attitude towards the negation of the denied statement. The three
attitudes defined in SO should be sufficient to represent most informing, publishing and
desiring acts on the internet.
Note that assertion, information and publicIntention are not propositional attitudes in the
usual mentalistic sense, as they do not need to correspond to any sincere beliefs or intentions
of the actors. Instead, they are possibly insincere communication or social attitudes exhibited
in a synchronous or asynchronous communication process, like the publishing of infor-
mation on the web. As a consequence, they cannot be treated like their mental counterparts.
E.g., an actor might assert the opinion φ towards one addressee and at the same time inform
a second addressee of ¬φ (while privately believing neither φ nor ¬φ). As another example,
opinions, in contrast to sincere beliefs, can even be bought: it is known that opinions uttered
in, e.g., web blogs have sometimes been paid for by advertising agencies. In some sense, even
all information on the web is "just" opinion, due to the absence of a commonly accepted truth
authority. fusedInformation and fusedAssertion will be described below.
2.2    Social contexts and social semantics
Contexts (aka microtheories) have been widely used in AI since the early nineties, originally
intended by McCarthy as a replacement for modal logic. [1, 2] propose a context operator
ist(context, statement) which denotes that statement is true ("ist") within context. Build-
ing upon [2, 4], we will use the notion of context to express that certain statements are being
publicly asserted (informed about, ostensibly intended to become true, denied, ...) on the web
by some information Source(s), optionally facing some specific Addressee(s) (the latter im-
plies that our use of the term "public" optionally also comprises "limited publics" in the form
of closed social groups). Thus, such social contexts model the social semantics of the contextu-
alized information. Here, the term "social semantics" itself has a twofold meaning: First, it
refers to the communicative function of information published on the web (essentially, our
contexts correspond to kinds of speech acts). Second, a social context can optionally denote
the meaning of a certain statement as ascribed by multiple actors using some aggregation rule,
e.g., the degree of truth assigned via consensus finding or voting, or other kinds of social choice.

Definition 2: Social contexts

A social context is formally defined as a pair (id, c) with id ∈ Id,
Id = (Source^N × ... × Source^N) × (Addressee^N × ... × Addressee^N) × CA^N, c being a set of mutu-
ally consistent OWL expressions (precisely: axioms and facts as defined in the next section),
and Source, Addressee and CA being the respective categories in SO. The X^N denote sets
of names of the individuals within the respective categories (so we assume an extensional
denotation of social concepts like persons and groups, as far as the contained individuals are
required for context specification), e.g., Source^N = {"tina", "tom", "tim"}. id is called the
context identifier (sometimes also called "context" for short). We use the following syntax for
context identifiers:

attitude_{source_1,...,source_n ⇝ addressee_1,...,addressee_n}

    As an abbreviation, we define attitude_{source_1,...,source_n} = attitude_{source_1,...,source_n ⇝ Addressee^N}, i.e., the atti-
tude is here addressed to the group of all possible addressees, as is the case with information
found on an ordinary web page. But note that at the same time a certain source can hold
mutually inconsistent attitudes even towards different members or subgroups of Addressee^N
(but not towards the same addressee).
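As an illustration, the structure of context identifiers from Definition 2 can be sketched as a small data type (a hypothetical encoding for clarity only, not part of the formalism):

```python
from dataclasses import dataclass

# A hypothetical encoding of Definition 2: a context identifier consists of
# a set of source names, a set of addressee names, and a communication attitude.
@dataclass(frozen=True)
class ContextId:
    sources: frozenset      # subset of Source^N
    addressees: frozenset   # subset of Addressee^N
    attitude: str           # one of the CA individuals in SO

ADDRESSEE_N = frozenset({"tina", "tom", "tim"})

def ctx(attitude, sources, addressees=ADDRESSEE_N):
    # The abbreviated form omits the addressees, defaulting to Addressee^N.
    return ContextId(frozenset(sources), frozenset(addressees), attitude)

# assertion_{tina -> tim,tom}:
c1 = ctx("assertion", {"tina"}, {"tim", "tom"})
# abbreviated form, addressed to the group of all possible addressees:
c2 = ctx("assertion", {"tina"})
```

Since the two identifiers differ in their addressee sets, they denote distinct contexts, matching the remark that a source can hold inconsistent attitudes towards different subgroups.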

3     A description logic with social contexts

We settle on the SHOIN(D) description logic (over data types D), because ontology entail-
ment in the current quasi-standard OWL DL can be reduced to SHOIN(D) KB satisfiability
[14]. Since we do not make use of any special features of this specific description language, our
approach could trivially be adapted to any other description language or OWL variant, or
even to first-order logic and RDF(S).
Definition 3: SHOIN (D)-ontologies

The context-free grammar of SHOIN (D) is defined as follows:

C → A | ¬C | C1 ⊓ C2 | C1 ⊔ C2 | ∃R.C | ∀R.C
    | ≥ nS | ≤ nS | {a1, ..., an} | ≥ nT | ≤ nT | ∃T1, ..., Tn.D | ∀T1, ..., Tn.D
D → d | {c1, ..., cn}.

   Here, C denotes concepts, A atomic concepts, R abstract roles (relation-
ships), S abstract simple roles, the Ti concrete roles, d a concrete
domain predicate, and the ai, ci abstract / concrete individuals.
A SHOIN(D) ontology is a finite set of TBox and ABox axioms/facts C1 ⊑ C2 (inclusion of
concepts), Trans(R) (transitivity), R ⊑ S, T ⊑ U (role inclusion), C(a) (concept assertion),
R(a, b) (role assertion), a = b (equality), and a ≠ b (inequality). For lack of space, please find
the semantics of SHOIN(D) in [14].

Definition 4: SOC-OWL

    Introducing ontologies, and at the same time description logic knowledge bases, with social
contexts, we define SOC-OWL (Social-Context-OWL) similarly to C-OWL [4]. In Section 4,
an advanced language P-SOC-OWL will be introduced.
A SOC-OWL ontology/KB is a finite set {(id, s) : id ∈ Id, s ∈ AF} ∪ AF^i ∪ B, with AF
being the set of all SHOIN(D) axioms/facts, AF^i being such axioms/facts but with con-
cepts, individuals and roles directly indexed with social contexts (i.e., AF^i = {(id_i, C_h) ⊑
(id_j, C_k), (id_i, a_h) = (id_j, a_k), ... : id_i, id_j ∈ Id}), and B being a set of bridge rules (see 3.1).
Id is the set of all social context identifiers according to the social ontology SO (cf. Definition
1). The s within (id, s) (i.e., plain OWL-DL axioms/facts) are called inner statements, which
are said to "be true (or intended, in the case of publicIntention) within the respective context".
Example (with multiple axioms/facts per row, and (id, a) written as id a):

ControversialPerson(columbus)                    assertion_{tina ⇝ tim,tom} Hero(columbus)
assertion_{tim,tom ⇝ tina} (¬Hero)(columbus)     assertion_{tim,tom ⇝ tina} Exploiter(columbus)


    This SOC-OWL ontology (modeling as a whole something like a neutral point of view, as
taken by an ideal Wikipedia article) expresses that the (fictive) persons Tim and Tom hold
the opinion towards Tina that Christopher Columbus was not a hero but an exploiter (of
the natives), while Tina allegedly believes that the opposite is true. But there is con-
sensus of the whole group that Christopher Columbus is a controversial person. Notice that
without explicit further constraints such as those specified in 3.1 and 3.2, different social con-
texts are logically fully separated. E.g., from the above ontology it could not be deduced
that information_{tina ⇝ tim} ControversialPerson(columbus), because ControversialPerson(columbus),
as an abbreviation of information_{tina,tim,tom ⇝ tina,tim,tom} ControversialPerson(columbus) in the exam-
ple above, is uttered/addressed exactly by/to the single social group of all participants. This
principle allows us to model the realistic case that someone conforms with some group opinion,
but states an inconsistent opinion towards other groups (even a subgroup of the former
group). Of course, the co-presence of two or more social contexts which indicate that a certain
actor is insincere (as would be the case if assertion_{tina ⇝ tim} (¬C)(x) and assertion_{tina ⇝ tom} C(x) were con-
tained within the same SOC-OWL ontology, which would be perfectly legal) could usually not
be acquired directly from the web, since such actors would likely exhibit inconsistent opinions
using different nicknames. Instead, some social reasoning or social data mining techniques
would be required to obtain such SOC-OWL knowledge.
Obviously, each contextualized SOC-OWL statement (contextId, statement) corresponds to
the "classic" [1, 2] context statement ist(context, statement). But unfortunately, such an ist
operator cannot simply be made a first-class object of our language (which would allow for the
nesting of context expressions), at least not without getting into trouble defining a semantics
for the language, or without making the semantics as shallow as that of RDF-style reification.
Instead, we allow for bridge rules and meta-axioms in order to interrelate social contexts.
The core idea underlying the following semantics of SOC-OWL is to group the axioms/facts
according to their social contexts, and to give each context its own interpretation function and
domain within the model-based semantics, corresponding to the approach presented in [4].
In addition, we will provide meta-axioms (constraints) and bridge rules in order to state the
relationships among the various communication attitudes (somewhat similarly to modal logic
axiom schemes such as the well-known KD45 axioms of modal belief logic), and to allow for
the interrelation of different attitudes, even across different contexts. E.g., we would like to
express that a communication attitude such as assertion_{tina ⇝ tim,tom} (¬Exploiter)(columbus)
(intuitively) implies publicIntention_{tina} (information_{tim,tom ⇝ tina} (¬Exploiter)(columbus)), i.e., that Tina not only ex-
presses her ostensible beliefs, but also ostensibly intends that others adopt her opinion.
Definition 5: Interpretation of SOC-OWL

    A SOC-OWL interpretation is a pair (I, {e_{i,j}}_{i,j ∈ Id}) with I = {I_id} being a set of local
interpretations I_id, with each I_id = ⟨Δ^{I_id}, (.)^{I_id}⟩, id ∈ Id. e_{i,j} ⊆ Δ^{I_i} × Δ^{I_j} is a relation between two
local domains Δ^{I_id} (e_{i,j} is required for the definition of bridge rules in B (Definition 4), as
explained later in 3.1). (.)^{I_id} maps individuals, concepts and roles to elements (respectively subsets
or products thereof) of the domain Δ^{I_id}.
To make use of this interpretation, contextualized statements of SOC-OWL impose a group-
ing of the concepts, roles and individuals within the inner statements into sets C_id, R_id and
c_id [4]. This is done in order to "localize" the names of concepts, individuals and roles, i.e.,
to attach to them the respective local interpretation function I_id corresponding to the respective social
context denoted by id ∈ Id:
Concretely, the sets C_id, R_id and c_id are defined inductively by assigning the concept, indi-
vidual and role names appearing within the statement part of each SOC-OWL axiom/fact
(contextId, statement) to the respective set C_id, c_id or R_id. With this, the interpretation of
concepts, individuals etc. is as follows:

C^{I_id} = any subset of Δ^{I_id} for C ∈ C_id
(C1 ⊓ C2)^{I_id} = C1^{I_id} ∩ C2^{I_id} for C1, C2 ∈ C_id
(C1 ⊔ C2)^{I_id} = C1^{I_id} ∪ C2^{I_id} for C1, C2 ∈ C_id
(¬C)^{I_id} = Δ^{I_id} \ C^{I_id} for C ∈ C_id
(∃R.C)^{I_id} = {x ∈ Δ^{I_id} : ∃y : (x, y) ∈ R^{I_id} ∧ y ∈ C^{I_id}} for C ∈ C_id, R ∈ R_id
(∀R.C)^{I_id} = {x ∈ Δ^{I_id} : ∀y : (x, y) ∈ R^{I_id} → y ∈ C^{I_id}} for C ∈ C_id, R ∈ R_id
c^{I_id} = any element of Δ^{I_id}, for c ∈ c_id
(Interpretation of concrete roles T analogously)

Satisfiability and decidability

    Given a SOC-OWL interpretation I, I is said to satisfy a (contextualized) statement φ
(I |= φ) if there exists an id ∈ Id such that I_id |= φ. A SOC-OWL ontology/KB (or statement
set) is then said to be satisfied if I satisfies each statement within the ontology/KB (or
statement set). I_id |= (id, C1 ⊑ C2) iff C1^{I_id} ⊆ C2^{I_id} etc., as with SHOIN(D) (but indexed).
Note that with this extension, the inherited semantics and decidability of SHOIN(D) and
C-OWL remain unaffected in SOC-OWL within each context, since the new interpretation
function simply decomposes the domain and the set of concepts etc. into local "interpretation
modules" corresponding to the contexts.
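The per-context decomposition can be illustrated with a toy consistency check that groups contextualized facts by context identifier and flags only within-context contradictions (a simplified propositional sketch, not a DL reasoner; the data format is invented for illustration):

```python
from collections import defaultdict

def consistent_per_context(statements):
    """statements: iterable of (context_id, literal) pairs, where a literal is a
    string like 'Hero(columbus)' or '¬Hero(columbus)'. Facts in different
    contexts never clash (contexts are logically separated); only a literal and
    its negation within the SAME context make the KB inconsistent."""
    by_ctx = defaultdict(set)
    for ctx_id, lit in statements:
        by_ctx[ctx_id].add(lit)
    for lits in by_ctx.values():
        for lit in lits:
            neg = lit[1:] if lit.startswith("¬") else "¬" + lit
            if neg in lits:
                return False
    return True

# The Columbus example: contradictory opinions, but in separate contexts.
kb = [
    ("assertion_tina->tim,tom", "Hero(columbus)"),
    ("assertion_tim,tom->tina", "¬Hero(columbus)"),
    ("assertion_tim,tom->tina", "Exploiter(columbus)"),
]
```

Here `consistent_per_context(kb)` holds, mirroring how the Columbus example above is satisfiable despite the opposing opinions.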
3.1     Bridge rules and cross-context mappings
According to Definition 4, a SOC-OWL ontology can optionally comprise bridge rules [4] B
and various stronger relationships among classes, individuals and roles from different contexts.
As an example, consider

    (context_i, x) -≡-> (context_j, y) in B, with x, y being concepts, individuals or roles.

Informally, such a bridge rule states that x and y denote corresponding elements even
though they belong to different contexts context_i, context_j.
With, e.g., (assertion_{tina}, columbus) -≡-> (assertion_{tim,tom}, columbus), the interpretations of the "two Colum-
buses" would abstractly refer to the same object. Analogously, -⊑-> and -⊥-> state that the first
concept is more specific than the second, respectively that both concepts are disjoint. These relation-
ships are given by the relation e_{i,j} (Definition 5).
Formally: I |= (context_i, x) -≡-> (context_j, y) iff e_{i,j}(x^{I_i}) = y^{I_j} (respectively e_{i,j}(x^{I_i}) ⊆ y^{I_j} and
e_{i,j}(x^{I_i}) ∩ y^{I_j} = ∅).
For lack of space, please find details and further, analogously defined bridge rules in [4].
A much stronger kind of relationship is stated by the syntax constructs where a concept, indi-
vidual or role is directly indexed with a social context, as, e.g., in (context_i, x) = (context_j, y),
with x, y being concepts, individuals or roles.
Formally: I |= (context_i, x) = (context_j, y) iff x^{I_i} = y^{I_j} (analogously for ⊑, ⊥ etc.).

3.2     Meta-axioms
We state now some constraints regarding the social meaning of contexts, which will later be
extended with (PMA5).

(MA1) Actively asserting an opinion implies, in our framework, the intention of the source
that the addressee(s) adopt the asserted statement. With nested social contexts, we could
formalize this as assertion_{s1,...,sn ⇝ a1,...,am} φ → publicIntention_{s1,...,sn ⇝ a1,...,am} (information_{a1,...,am ⇝ s1,...,sn} φ). To avoid such
"strong" nesting in order to lower decidability complexity, we propose the following as a
significantly weaker replacement, which at least allows us to keep track of the convictions the
information sources desire in a separate context:

assertion_{s1,...,sn ⇝ a1,...,am} φ
→ ((publicIntention_{s1,...,sn ⇝ a1,...,am} ⊔ assertion_{a1,...,am ⇝ s1,...,sn}, e) = (assertion_{s1,...,sn ⇝ a1,...,am}, e)) for each concept, individual
and role e occurring in φ.

Here, the first context is a concatenation of an intention and an assertion. The intuitive mean-
ing is that group s1, ..., sn intends that group a1, ..., am shall describe the relevant individuals,
categories etc. within their social context in the same way as s1, ..., sn does wrt. "their" e's.
In terms of SOC-OWL interpretations this is:
(I_{assertion_{s1,...,sn ⇝ a1,...,am}} |= φ) → I |= ((publicIntention_{s1,...,sn ⇝ a1,...,am} ⊔ assertion_{a1,...,am ⇝ s1,...,sn}, e) = (assertion_{s1,...,sn ⇝ a1,...,am}, e)),
for each concept, individual and role e in φ.

   The next meta-axiom simply demands that assertions include the attitude of informing
the addressee:

(MA2) assertion_{s1,...,sn ⇝ a1,...,am} φ → information_{s1,...,sn ⇝ a1,...,am} φ


    In this work, we cannot provide a meta-theory corresponding to the KD(45) axioms of
modal Belief-Desire-Intention logics (but see [10]). Instead, we only demand that the inner
statements of each context are mutually consistent:

(MA3) Each set a of statements such that, for a specific context, all (context, a_i), a_i ∈ a are
axioms/facts of the same SOC-OWL ontology, has an interpretation.
    Furthermore, we demand, in accordance with many BDI-style logics, that the informa-
tion/assertion contexts of a certain actor on the one hand and his intention context on the
other do not overlap when addressing the same set of addressees, i.e., an actor does not (ostensibly)
intend what he (ostensibly) believes to be the case already:

(MA4) For each a such that (publicIntention_{source ⇝ addressees}, a) is part of a SOC-OWL ontology o, no
axiom/fact (information_{source ⇝ addressees}, b), b ⊢ a, is part of o (analogously for assertions).

   The following constraints are not demanded, but could be helpful in application domains
where mutual opinion consistency of subgroups is desired (we use ⋀ to abbreviate a set of
SOC-OWL statements).

(MAx1) (attitude_{s1,...,sn ⇝ addressees} φ) ↔ ⋀_{s ∈ 2^{{s1,...,sn}}} attitude_{s ⇝ addressees} φ

(MAx2) (attitude_{sources ⇝ a1,...,an} φ) ↔ ⋀_{a ∈ 2^{{a1,...,an}}} attitude_{sources ⇝ a} φ

But we can safely aggregate seemingly consented information in a separate fusion context:

(MA5) ⋀_{s ∈ {s1,...,sn}} (I_{information_{s ⇝ addressees}} |= φ) → (I_{fusedInformation_{s1,...,sn ⇝ addressees}} |= φ) (analogously for assertions).
In general, such group opinions induce a ranking of multiple statements, with the respective rank
corresponding to the size of the biggest group which supports the respective statement (this can
be used, e.g., for a majority voting on mutually inconsistent statements).
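This ranking idea can be sketched in a few lines (a hypothetical helper; the opinion data reuses the Columbus example from Section 3):

```python
def rank_by_support(opinions):
    """opinions: list of (source_group, statement) pairs, where source_group is
    a set of source names. Each statement is ranked by the size of the largest
    group supporting it (MA5-style group opinions), largest support first."""
    best = {}
    for group, stmt in opinions:
        best[stmt] = max(best.get(stmt, 0), len(group))
    return sorted(best.items(), key=lambda kv: -kv[1])

opinions = [
    ({"tim", "tom"}, "Exploiter(columbus)"),
    ({"tina"}, "Hero(columbus)"),
    ({"tina", "tim", "tom"}, "ControversialPerson(columbus)"),
]
```

With this data, the consensus statement ControversialPerson(columbus), supported by the whole group, ranks first, which is how a majority vote on mutually inconsistent statements could be decided.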


4    Social rating and social aggregation of subjective assertions
Building upon social contexts, the following extension of the previously presented logical
framework is optional. It makes use of uncertainty reasoning and techniques from judgement
aggregation. These allow for i) the representation of gradual strengths of uncertain opinions
held by individuals (corresponding to subjective probabilities) and social groups, and ii) the
probabilistic fusion of semantically heterogeneous opinions held by different actors (basically
by means of voting). Since the emergence of folksonomies can be seen as a social categoriza-
tion process (social choice in the form of collaborative tagging), ii) amounts to a generalization
of folksonomies to social choice of ontology and knowledge base entries in general.
This feature is also useful in cases where traditional techniques for ontology integration fail, e.g., if
the resulting merged ontology shall be accepted by all sources, but a consensus about the
merging could not be found with traditional techniques for ontology mapping and alignment,
or if the complexity of a large amount of heterogeneous information needs to be reduced by
means of stochastic generalization. Probabilistic fusion is furthermore helpful in case state-
ments shall be socially ranked, i.e., put in an order according to the degree of their respective
social acceptance. In contrast to heuristic or surfer-centric ways of information ranking or
"knowledge ranking" such as those performed by most web search engines, the following
approach is based on semantic opinion pooling [13].
In [9], the probabilistic extension P-SHOQ(D) of the SHOQ(D) description logic has been
introduced. SHOQ(D) is very similar to SHOIN(D) and thus OWL DL, but does not allow
for inverse roles. [9] shows that reasoning with P-SHOQ(D) is, maybe surprisingly, de-
cidable. Instead of P-SHOQ(D), other Bayesian/probabilistic approaches to the Semantic
Web could likely also be used as a basis for our approach, e.g., [6]. P-SHOQ(D) is now used
to define a probabilistic variant of SOC-OWL.

Definition 6: P-SOC-OWL

    A P-SOC-OWL ontology is defined to be a finite subset of {([p_l, p_u], id, a_i)} ∪ {(id, a_i)} ∪
{a_i} ∪ AF^i ∪ B, with p_l, p_u ∈ [0, 1], id ∈ Id, a_i ∈ AF, AF being the set of all well-formed
SHOQ(D) ontology axioms/facts, and B and AF^i as in the previous section.
The [p_l, p_u] are probability intervals. Non-interval probabilities p are syntactic abbreviations
of [p, p]. If a probability is omitted, 1 is assumed. [9] further provides some consistency axioms
for probabilistic reasoning with the underlying P-SHOQ(D), which had to be omitted here.

Definition 7: Semantics of P-SOC-OWL

    The semantics of a P-SOC-OWL ontology is simply given as a family of P-SHOQ(D)
interpretations, each interpretation corresponding to a social context (please refer to [9] for
the definition of P-SHOQ(D) interpretations). Formally, a P-SOC-OWL interpretation is
a pair (PI, {e_{i,j}}_{i,j ∈ Id}) with PI = {(PI_id, µ_id) : id ∈ Id} being a set of local probabilistic
interpretations (each denoted as Pr_id), each corresponding to a probabilistic interpretation
of P-SHOQ(D) and a social context with identifier id. µ_id : Δ^{I_id} → [0, 1] is a subjective
probability function, and the Δ^{I_id} are the domains. The relation e_{i,j} (required to state bridge
rules) is defined analogously to SOC-OWL. At least if reasoning is done only within each
context (using the respective interpretation), P-SOC-OWL remains decidable. Example:

[0.5, 0.8]: assertion_{tim,tom ⇝ tina} Exploiter(columbus)        0.7: assertion_{tina} Hero(columbus)
0.9: assertion_{tim} Hero(columbus)

    This P-SOC-OWL ontology expresses, inter alia, that Tim and Tom (as a group, but not
necessarily individually) hold the opinion that with a probability in [0.5, 0.8] Columbus was
an exploiter, while Tina (publicly) believes he was a hero with strength 0.7, and Tim be-
lieves so with strength 0.9 (i.e., his private opinion disagrees with the public group opinion of
him and Tom). In order to allow for a consistent fusion of opinions, we demand the following
fusion meta-axiom, which effectively states how the probabilities of social fusion contexts are
calculated. A social fusion context is a social context with more than one opinion source and
a probability which pools the probabilities which subsets of the group assign to the respective
statement. This allows us to specify group opinions even if group members or subgroups know-
ingly disagree wrt. the respective assertion. We propose two versions of a respective interpretation rule:

(PMA5') (⋀_{s_i ∈ {s1,...,sn}} (Pr_{information_{s_i ⇝ addressees}} |= φ[p_i, p_i])) → (Pr_{information_{s1,...,sn ⇝ addressees}} |= φ[p, p])

with p = pool_{poolingType}((p_1, ..., p_n), priorKnowledge). Here, Pr_id |= φ[l, u] attests
φ a probability within [l, u] in context id. (Analogously for the attitude assertion.)

    A problem with (PMA5') is that it can lead to unsatisfiability in case the derived proba-
bility p differs from a probability assigned explicitly in the ontology by the respective group
(remember that a group of agents is free to assign any truth value or probability to any
proposition, using any social choice procedure). A simple workaround is to use a new context
fusedInformation (respectively fusedAssertion) (PMA5). Another possibility would be to introduce
some kind of priority reasoning which gives priority to explicitly assigned probabilities.

(PMA5) (⋀_{s_i ∈ {s1,...,sn}} (Pr_{information_{s_i ⇝ addressees}} |= φ[p_i, p_i])) → (Pr_{fusedInformation_{s1,...,sn ⇝ addressees}} |= φ[p, p]) (rest as in
PMA5').

   As for poolpoolingT ype , there are several possibilities: In the most simple case of ”demo-
cratic” Bayesian aggregation given theP absence of any opinion leader or ”supra-Bayesian” [13],
                                                 p
we define poolavg ((p1 , ..., pn ), ∅) = n i , i.e., poolavg averages over heterogeneous opinions.
Using this aggregation operator, we could deduce the following: 0.8: assertion
                                                                          tina,tim Hero(columbus).
Social aggregation operators are traditionally studied in the fields of Bayesian belief aggregation (e.g., [13, 3]) and judgement aggregation. The most common fusion operator extends pool_avg with expert weights (e.g., trustability or social power degrees of the information sources): pool_LinOP((p_1, ..., p_n), (weight_1, ..., weight_n)) = Σ_i weight_i · p_i, with Σ_i weight_i = 1. Also quite often, a geometric mean is used: pool_LogOP((p_1, ..., p_n), (weight_1, ..., weight_n)) = κ ∏_{i=1}^n p_i^{weight_i} (κ for normalization).
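The three pooling operators can be sketched in a few lines of Python. The function names are ours; for pool_LogOP we normalize over the two outcomes ϕ and ¬ϕ of a single proposition, which is one common way to instantiate κ:

```python
from math import prod  # Python 3.8+

def pool_avg(ps):
    """'Democratic' average over the sources' probabilities (no weights, no prior)."""
    return sum(ps) / len(ps)

def pool_linop(ps, weights):
    """Linear opinion pool: weighted arithmetic mean; weights must sum to 1."""
    return sum(w * p for w, p in zip(weights, ps))

def pool_logop(ps, weights):
    """Logarithmic opinion pool: weighted geometric mean, normalized over
    the two outcomes phi / not-phi (this normalization plays the role of kappa)."""
    g = prod(p ** w for p, w in zip(ps, weights))
    g_neg = prod((1 - p) ** w for p, w in zip(ps, weights))
    return g / (g + g_neg)

# Tina asserts Hero(columbus) with 0.9, Tim with 0.7:
pool_avg([0.9, 0.7])                # ≈ 0.8
pool_linop([0.9, 0.7], [0.5, 0.5])  # ≈ 0.8
```

Note that pool_logop rewards agreement between the sources more strongly than the linear pool, which is one reason the two operators can yield different fused values for the same inputs.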
All these operators have known shortcomings (see [13] for a discussion), most prominently the so-called impossibility results, which state that no pooling method can satisfy all desired properties, such as systematicity.
It is also noteworthy that the operators given above do not deal with the problem of ignorance directly (i.e., taking into account the evidence the respective information sources have obtained, as researched in Dempster-Shafer theory). But such ignorance could be modeled using the weight_i of pool_LinOP and pool_LogOP, and possibly using probability intervals instead of single probabilities. In case opinions with probability intervals [pl_i, pu_i] shall be fused, the described fusion operators need to be applied to the interval boundaries accordingly.
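The boundary-wise application just described could look as follows in Python; the helper name is our own:

```python
def pool_avg_interval(bounds):
    """Fuse opinions given as probability intervals [l_i, u_i] by applying
    pool_avg separately to the lower and to the upper bounds."""
    lows, highs = zip(*bounds)
    return (sum(lows) / len(lows), sum(highs) / len(highs))

pool_avg_interval([(0.5, 0.8), (0.7, 0.9)])  # ≈ (0.6, 0.85)
```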
One application of such a rating in the form of aggregated or individual probabilities is to take the probabilities (resp. the mean values of the bounds of each interval) in order to impose an order on the axioms/facts of an ontology, so that inner statements can be directly ranked with regard to their degree of social acceptance, as in

0.8: information_{voters} innerStatement1 (highest social rating)
[0.5, 0.8]: information_{voters} innerStatement2
0.2: information_{voters} innerStatement3 (lowest social rating)
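Such a ranking amounts to sorting the inner statements by the midpoint of their probability bounds; a minimal sketch (the function name and tuple layout are assumptions of ours, with point probabilities represented as intervals [p, p]):

```python
def social_rank(rated_statements):
    """Order inner statements (name, lower, upper) by the midpoint of their
    probability interval, highest social rating first."""
    return sorted(rated_statements,
                  key=lambda item: (item[1] + item[2]) / 2,
                  reverse=True)

social_rank([("innerStatement2", 0.5, 0.8),
             ("innerStatement3", 0.2, 0.2),
             ("innerStatement1", 0.8, 0.8)])
# -> innerStatement1 (0.8), then innerStatement2 (0.65), then innerStatement3 (0.2)
```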

    Again, such a ranking can also easily be used to transform inconsistent ordinary ontologies into consistent ones by voting on the statements of the inconsistent ontology: In case there are inner statements which are mutually inconsistent, the ranking can be used to obtain a consistent ordinary (i.e., OWL DL) ontology by removing, from each smallest inconsistent subset of inner statements, the statements with the lowest rating until all remaining elements of each subset are mutually consistent.
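This removal procedure can be sketched as a greedy loop over the minimal inconsistent subsets. The subsets themselves are assumed to come from an external DL reasoner, and all names below are illustrative:

```python
def repair_by_rating(statements, rating, minimal_inconsistent_subsets):
    """Greedy repair sketch: from each smallest inconsistent subset of inner
    statements (as reported by an external DL reasoner, not modeled here),
    discard the lowest-rated statements until the subset is no longer fully
    contained in the remaining ontology."""
    kept = set(statements)
    for subset in minimal_inconsistent_subsets:
        while set(subset) <= kept:
            worst = min(set(subset) & kept, key=rating.get)
            kept.discard(worst)
    return kept

# Ratings as in the example above; suppose a reasoner reports that
# innerStatement1 and innerStatement3 cannot both hold:
rating = {"innerStatement1": 0.8, "innerStatement2": 0.65, "innerStatement3": 0.2}
repair_by_rating(rating, rating, [["innerStatement1", "innerStatement3"]])
# keeps innerStatement1 and innerStatement2
```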

5    Related works and conclusion
The goal of this work is to provide a formal social semantics for possibly contradictory assertions on the web, i.e., to state their amount of social support, their communicative emergence and dissemination, and the consensus or dissent they give rise to. In doing so, we settle on the "opinion level", where neither beliefs are visible (due to the mental opaqueness of the information sources) nor criteria for the selection of useful knowledge or of semantic mappings from/among heterogeneous information exist initially. This is in strong contrast to the traditional aim of information integration and evolution, namely the determination of some consistent, reliable "truth" obtained from the contributions of multiple sources, as in multiagent belief representation and revision (e.g., [18]) and approaches to ontology alignment, merging and mapping. Apart from the research field of knowledge and belief integration, the storage of heterogeneous information from multiple sources also has some tradition in the fields of data warehousing and federated databases, and view generation for distributed and enterprise database systems [8], although such approaches do not take a social or communication-oriented
perspective. Opinions are treated in the area of the (non-semantic) web (e.g., opinion mining in natural-language documents) and in (informal) knowledge management (e.g., KnowCat [12]). The assignment of provenance information is mostly based on tagging and punning techniques, or makes use of the very problematic reification facility found in RDF(S). Advanced approaches to provenance (e.g., [17]) are already very useful if it is required to specify who contributed some information artifact (which is also done with a similar intent on the basis of social networks), but they do not provide a logical model of the meaning of being an opinion source and of other communication aspects. More precisely, they allow one to specify that someone "asserts" some information, but they do not tell what asserting (requesting, denying, ...) actually means. [5] provides an approach to the grouping of RDF statements using contexts, but without taking the social (i.e., communicative) aspect into account, and only informally. A mature
approach, focusing on the aggregation of RDF(S) graphs using contexts, was presented in [2], and [4] provides a general formal account of contexts for OWL ontologies. Independently of web-related approaches, contexts have been widely used for the modeling of distributed knowledge and so-called federated databases, see e.g. [15, 16]. Further exploring and working out the new "social" perspective on uncertain web information modeled using contexts certainly constitutes a long-term scientific and practical endeavor of considerable complexity, with this work hopefully being a useful starting point.

Acknowledgements: This work was funded by the German Research Foundation DFG
(Br609/13-1, research project ”Open Ontologies and Open Knowledge Bases”) and by the
Spanish National Plan of R+D, project no. TSI2005-08225-C07-06.

References
 1. J. L. McCarthy. Notes on formalizing context. In IJCAI, pages 555-562, 1993.
 2. R. V. Guha, R. McCool, R. Fikes. Contexts for the Semantic Web. Procs. of the Third Interna-
    tional Semantic Web Conference (ISWC-04), 2004.
 3. M. Richardson, P. Domingos. Building Large Knowledge Bases by Mass Collaboration. Technical
    Report UW-TR-03-02-04, Dept. of CSE, University of Washington, 2003.
 4. P. Bouquet, F. Giunchiglia, F. van Harmelen, L. Serafini, and H. Stuckenschmidt. C-OWL:
    Contextualizing Ontologies, Second International Semantic Web Conference (ISWC-03), LNCS
    vol. 2870, Springer Verlag, 2003.
 5. G. Klyne. Contexts for RDF Information Modelling.
    http://www.ninebynine.org/RDFNotes/RDFContexts.html, 2000.
 6. P. Costa, K. B. Laskey, K. J. Laskey. PR-OWL: A Bayesian Framework for the Semantic Web.
    In Procs. First Workshop on Uncertainty Reasoning for the Semantic Web (URSW-05), 2005.
 7. T. Froehner, M. Nickles, G. Weiss. Towards Modeling the Social Layer of Emergent Knowledge
    Using Open Ontologies. In Proceedings of The ECAI-04 Workshop on Agent-Mediated Knowl-
    edge Management (AMKM-04), 2004.
 8. J. D. Ullman. Information Integration Using Logical Views. In Procs. of the 6th Int'l Conference on Database Theory (ICDT-97). Springer, 1997.
 9. R. Giugno, Th. Lukasiewicz. P-SHOQ(D): A Probabilistic Extension of SHOQ(D) for Probabilistic
    Ontologies in the Semantic Web. In JELIA '02: Procs. of the European Conference on Logics in
    Artificial Intelligence. Springer, 2002.
10. F. Fischer, M. Nickles. Computational Opinions. Procs. of the 17th European Conference on
    Artificial Intelligence (ECAI’06), 2006. To appear.
11. B. Gaudou, A. Herzig, D. Longin, M. Nickles. A New Semantics for the FIPA Agent Communi-
    cation Language based on Social Attitudes. Procs. of the 17th European Conference on Artificial
    Intelligence (ECAI’06), 2006. To appear.
12. R. Cobos. Mechanisms for the Crystallisation of Knowledge, a Proposal Using a Collaborative
    System. Ph.D. thesis. Universidad Autonoma de Madrid, 2003.
13. R. M. Cooke. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford
    University Press, 1991.
14. I. Horrocks, P. F. Patel-Schneider. Reducing OWL entailment to Description Logic Satisfiability.
    Journal of Web Semantics, Vol. 1(4), 2004.
15. M. Bonifacio, P. Bouquet, R. Cuel. Knowledge Nodes: The Building Blocks of a Distributed
    Approach to Knowledge Management. Journal for Universal Computer Science, Vol. 8/6, 2002.
16. A. Farquhar, A. Dappert, R. Fikes, W. Pratt. Integrating Information Sources using Context
    Logic. In Procs. of the AAAI Spring Symposium on Information Gathering from Distributed
    Heterogeneous Environments, 1995.
17. J. Carroll, Ch. Bizer, P. Hayes, P. Stickler. Named Graphs, Provenance and Trust. In Procs. of
    the 14th International World Wide Web Conference, 2005.
18. A. Dragoni, P. Giorgini. Revising Beliefs Received from Multiple Sources. In Frontiers in Belief
    Revision, H. Rott and M. Williams, Eds. Kluwer Academic Publishers, 431–444, 2001.