=Paper=
{{Paper
|id=None
|storemode=property
|title=Satisfiability in EL with Sets of Probabilistic ABoxes
|pdfUrl=https://ceur-ws.org/Vol-745/paper_14.pdf
|volume=Vol-745
|dblpUrl=https://dblp.org/rec/conf/dlog/FingerWC11
}}
==Satisfiability in EL with Sets of Probabilistic ABoxes==
Marcelo Finger, Renata Wassermann, and Fabio G. Cozman
University of São Paulo, São Paulo, Brazil
mfinger@ime.usp.br, renata@ime.usp.br, fgcozman@usp.br
Abstract. We present ELPA, a probabilistic extension of the lightweight
DL EL with a fixed TBox and a set of probabilistic ABoxes, and study
the satisfiability problem in this setting.
1 Introduction
This work studies an extension of Description Logic EL, called ELPA, that allows
for probabilistic assessments on ABoxes. We concentrate on the problem of
verifying the satisfiability of an ELPA-knowledge base, proposing algorithms for
this problem based on recent advances in probabilistic satisfiability (PSAT) [FB11].
Consider an example, adapted from [LS10].
Example 1.1 Two symptoms of Lyme disease are fever and fatigue. As these
symptoms are common and the disease is rare, the chance that they are indeed
caused by Lyme disease is small. Nevertheless, because the disease is difficult
to diagnose, patients get treated if there is a chance that they have it.
The TBox T0 contains the following axioms:
Fatigue ⊑ Symptom
Fever ⊑ Symptom
Lyme ⊑ Disease
Symptom ⊑ ∃hasCause.Disease
Patient ⊑ ∃suspectOf.Disease
Patient ⊑ ∃hasSymptom.Symptom
∃hasSymptom.(∃hasCause.Lyme) ⊑ ∃suspectOf.Lyme
And the following ABox A0:
Fever(s1)
hasSymptom(john, s1)
Fatigue(s2)
hasSymptom(john, s2)
Patient(john)
Consider also the following probabilistic statements originating from medical
experience on symptoms that are caused by Lyme disease.
A1 = ∃hasCause.Lyme(s1), P(A1) ≥ 0.1
A2 = ∃hasCause.Lyme(s2), P(A2) ≥ 0.2
Now we want to know whether we can consistently assert an upper bound pub
for the probability of John having Lyme disease:
A3 = ∃suspectOf.Lyme(john), P(A3) ≤ pub
The set of statements in the example above is a probabilistic knowledge base.
It contains four probability assignments: A0, which is the conjunction of 5 atomic
statements, is assigned probability 1; the other three atomic ABoxes A1, A2 and
A3 get probabilities smaller than 1.
⋆ Work supported by Fapesp Thematic Project 2008/03995-5 (LOGPROB). Authors
supported by CNPq grants PQ 302553/2010-0, 304043/2010-9, 305395/2010-6.
Our method is an alternative to existing combinations of DL with probabilities
that impose deterministic restrictions on probabilities. For example, [LS10]
assigns probabilities to concepts over a fixed interpretation, forcing the
probability of ABoxes to be either 0 or 1. No such deterministic “side effects”
occur in our method.
Our goal in this work is to formally define the notion of ELPA-knowledge
bases and their satisfiability problem, and to provide algorithms for deciding it.
Adding probabilities to logic sentences usually adds complexity; e.g. probabilistic
2SAT is NP-complete [GKP88]. We show that ELPA-satisfiability is in NP.
1.1 Related work
There have been several proposals to add probabilities to description logics in
recent years [LS08]; most of this work is based on various kinds of probabilistic
logic [GT07,Hal03].
Probabilistic description logics differ in several dimensions. Some approaches
associate probabilities with elements of TBoxes [Jae94,KLP97,CP09], while
others associate probabilities with ABoxes [DS05], and still others combine both
kinds of assessments [Luk08]. In this paper we focus on probabilities over ABoxes.
As an alternative classification scheme, some logics assign probabilities over
elements of the domain [DPP06,DS05,Hei94,Jae94,KLP97,Luk08], while others
assign probabilities over interpretations [CL06,DFL08,LS10], with some logics
in between [Seb94]. In this paper we focus on probabilities over interpretations.
Yet another classification is possible, as we have probabilistic description
logics that allow for assessments of stochastic independence, often organized
through graphs [CL06,DPP06,KLP97,CP09], and logics that do not allow for
assessments of stochastic independence [DS05,Hei94,Jae94,Luk08,LS10]. In this
paper we do not allow for stochastic independence.
In a sense, our work is a refinement of First Order Probabilistic Logic by
Jaumard et al. [JFSS06]; however, we use the decidable and tractable logic EL,
and we show that our probabilistic version remains in NP. Note that probability
assignments remain external to the logic EL; this has the advantage of making it
capable of dealing with conditional probabilities of ABox statements in a classical
manner, as P(A(a)|B(b)) = P(A(a) ∧ B(b))/P(B(b)). A complete treatment of
conditional probabilities remains outside the scope of this work. Another related
problem is probability inference; that is, determining the maximal and minimal
values for pub that leave the knowledge base satisfiable; this problem is also
outside the scope of this work.
1.2 Organization of the paper
The remainder of the paper is organized as follows. In the next section, we
introduce the logic ELPA. We first define the probabilistic assignments that are
allowed in the ABox, and then formalise the satisfiability problem for ELPA,
showing that it is in NP. We show that the probabilistic knowledge base in
ELPA can be translated into a normal form that is used in Section 3, where an
algorithm for testing the satisfiability of ELPA is presented.
2 The Probabilistic DL ELPA
We introduce ELPA, a probabilistic extension of the polynomial-time Description
Logic EL with a fixed TBox and a set of probabilistic ABoxes.
We first establish a regular EL vocabulary. Fix countably infinite sets NC , NR ,
and NI of concept names, role names, and individual names, respectively. The
set of EL-concepts is given by the following syntax rules:
C ::= A | C ⊓ D | ∃r.C
where A ranges over NC , C and D over EL-concepts and r over NR . No negation
or disjunction of concepts is expressible in this language.
A TBox is a finite set of concept inclusions (CIs) of the form C ⊑ D; TBoxes
usually represent an ontology. On the other hand, ABoxes represent instance
data and obey the following syntax rules
A ::= C(a) | r(a, b) | A ∧ A′
where C and r are as before, a, b ∈ NI and A, A′ range over ABoxes.
The standard EL semantics is used for TBoxes and ABoxes, based on
interpretations I = (∆I, ·I), where ∆I is a non-empty set called domain and ·I is an
interpretation function that maps each A ∈ NC to a subset AI ⊆ ∆I, each
r ∈ NR to a subset rI ⊆ ∆I × ∆I, and each a ∈ NI to an element aI ∈ ∆I; see
[BBL05]. We extend the interpretation I to all concepts in the usual way. So
(C ⊓ D)I = CI ∩ DI and (∃r.C)I = {x ∈ ∆I | ∃y ∈ ∆I : (x, y) ∈ rI ∧ y ∈ CI}.
Then a concept inclusion C ⊑ D is satisfied by an interpretation I, written
I |= C ⊑ D, iff CI ⊆ DI. Similarly, a TBox T is satisfied by an interpretation I,
I |= T, iff I |= C ⊑ D for every C ⊑ D ∈ T.
For ABoxes, we say that C(a) is satisfied by I, represented by I |= C(a) iff
aI ∈ C I ; similarly, I |= r(a, b) iff (aI , bI ) ∈ rI ; and I |= A ∧ A′ if both I |= A
and I |= A′ . If both a TBox T and an ABox A are true under I, we say that
the pair (T , A), called a (deterministic) knowledge base, is satisfied by I. The
problem of deciding if a deterministic knowledge base (T , A) is satisfiable in EL
can be solved in polynomial time [Bra04].
2.1 Probability Assignments to ABoxes
We now introduce probabilities in ABoxes. We deal with probability distributions
π over the set of interpretations, endowed with a suitable algebra.
Let T be a TBox. Given a probability distribution π over the set of all
interpretations I, the probability of an ABox A in the context of T is given by
the probability of all interpretations that satisfy all of T and A, that is, the
probability of {I|I |= A and I |= T }.
A probability assignment to an ABox is an expression of the form
P (A) ⊲⊳ p,
where A is an ABox, ⊲⊳∈ {≤, ≥, =} and p ∈ Q, 0 ≤ p ≤ 1. Note that probability
assignments are external to the logic EL, and are not statements in the logic.
Let PA be a set of k probability assignments
PA = {P (Ai ) ⊲⊳i pi |1 ≤ i ≤ k}.
Then the pair ⟨T, PA⟩ is a probabilistic knowledge base in the DL EL with sets
of probability assignments, ELPA. Clearly, all probability assignments in PA are
to be evaluated in the context T.
The main problem of this work is, given an ELPA probabilistic knowledge
base, to determine whether there exists a probability distribution π such that, in
the context of the TBox T, π satisfies all the assignments in PA; this is the
satisfiability problem for an ELPA-knowledge base. Before we formalize this problem,
we must “finitize” the set of all interpretations of a TBox T, as the set IT of all
interpretations of a TBox T is uncountably infinite.
For that, define an equivalence relation ≃PA ⊆ IT × IT, where PA = {P(Ai) =
pi | 1 ≤ i ≤ k} is a set of probabilistic assignments, such that
I ≃PA I′ iff for every i, 1 ≤ i ≤ k, I |= Ai if and only if I′ |= Ai;
that is, I and I′ satisfy the same ABoxes in PA in the context of TBox T.
Lemma 2.1 Let n be the number of atomic elements in PA. The relation ≃PA
is an equivalence relation on IT and the set of equivalence classes IT/≃PA
has at most 2^n elements.
Each equivalence class of IT/≃PA will be represented by any interpretation I
in it. We can now formalise the satisfiability problem for an ELPA-knowledge
base ⟨T, PA⟩. We will write I(A) = 1 for I |= A and I(A) = 0 for I ⊭ A.
2.2 The Satisfiability of Probabilistic Knowledge Bases
Let n be the number of atomic elements in PA, |PA| = k. Let I1, . . . , In′,
n′ ≤ 2^n, be all the ≃PA-distinct interpretations that satisfy T. Consider a k × n′
matrix A = [aij] such that aij = Ij(Ai). The probabilistic satisfiability problem
for an ELPA-knowledge base K = ⟨T, PA⟩ is to decide if there is a probability
vector π of dimension n′ that obeys the ELPA-restrictions:

Aπ ⊲⊳ p,  Σi πi = 1,  π ≥ 0. (1)
An ELPA-knowledge base K is satisfiable iff its associated ELPA-restrictions (1)
have a solution. If π is a solution to (1), we say that π satisfies K. The last two
conditions of (1) force π to be a probability distribution. It is convenient to
assume that the first two conditions of (1) are joined: A is a (k + 1) × n′ matrix with
1's in its first line, p1 = 1 in vector p(k+1)×1, and the ⊲⊳1-relation is “=”; we will keep
this convention in the rest of the paper.
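For a fixed matrix A, restrictions (1) are an ordinary linear feasibility problem, so they can be checked with any LP solver. The sketch below illustrates this with scipy; the function name, the relation encoding and the use of linprog are our own illustration, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def restrictions_feasible(A, rel, p):
    """Check restrictions (1): A*pi <rel> p, sum(pi) = 1, pi >= 0,
    where A[i][j] = I_j(A_i) and rel[i] is '<=', '>=' or '=='."""
    A = np.asarray(A, dtype=float)
    k, n = A.shape
    A_ub, b_ub = [], []                    # rows collected in <= form
    A_eq, b_eq = [np.ones(n)], [1.0]       # sum(pi) = 1
    for row, r, bound in zip(A, rel, p):
        if r == '<=':
            A_ub.append(row);  b_ub.append(bound)
        elif r == '>=':
            A_ub.append(-row); b_ub.append(-bound)   # flip to <= form
        else:                                        # '=='
            A_eq.append(row);  b_eq.append(bound)
    res = linprog(np.zeros(n),             # zero objective: feasibility only
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n)
    return res.success

# The four interpretations of Example 2.2 as columns over (A1, A2, A3):
A = [[0, 0, 1, 1],   # A1
     [0, 1, 0, 1],   # A2
     [0, 1, 1, 1]]   # A3
ok = restrictions_feasible(A, ['>=', '>=', '<='], [0.1, 0.2, 0.3])
```

Over these four columns, the solver should report the instance feasible for pub = 0.3 and infeasible for pub = 0.05.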
Example 2.2 Recall Example 1.1 with pub = 0.3, where the probability of
John having Lyme is at most 30%. We consider only the three ABoxes with
probabilistic assignments, and only interpretations of these atoms that are jointly
consistent with the fixed TBox and the fixed (probability 1) ABox formulas.
Consider the following probability distribution π and the probability it assigns
to the ABoxes in Example 1.1.

         π     A1    A2    A3
    I1   0.75  0     0     0
    I2   0.10  0     1     1
    I3   0.03  1     0     1
    I4   0.12  1     1     1
         1.00  0.15  0.22  0.25

It is easy to verify that the interpretations I1–I4 are all consistent with ⟨T0, A0⟩,
so all probability relations are verified by π, and the probabilistic knowledge base
for pub = 0.3 is satisfiable.
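The verification in this example is just a weighted sum over the rows of the table; a minimal check in Python (the list encoding of interpretations is ours):

```python
# Rows of the table in Example 2.2: the probability of each interpretation,
# followed by its {0,1}-values on A1, A2, A3.
pi   = [0.75, 0.10, 0.03, 0.12]
rows = [(0, 0, 0),   # I1
        (0, 1, 1),   # I2
        (1, 0, 1),   # I3
        (1, 1, 1)]   # I4

def prob(i):
    """P(A_i): total mass of the interpretations satisfying A_i."""
    return sum(p for p, row in zip(pi, rows) if row[i])

assert abs(sum(pi) - 1.0) < 1e-9   # pi is a probability distribution
assert prob(0) >= 0.1              # P(A1) = 0.15 >= 0.1
assert prob(1) >= 0.2              # P(A2) = 0.22 >= 0.2
assert prob(2) <= 0.3              # P(A3) = 0.25 <= pub = 0.3
```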
Some important questions remain: how to compute a probability distribution
when one exists; whether the knowledge base remains satisfiable when
pub = 0.05; and how to verify it. This paper presents algorithms for these tasks.
An important result of [GKP88] guarantees that a satisfiable knowledge base
has a “small” witness:
Fact 2.3 If ELPA-restrictions (1) for knowledge base K = hT , PAi with PA =
{P (Ai ) = pi |1 ≤ i ≤ k} have a solution, then there are k + 1 columns of matrix
A such that the system A(k+1)×(k+1) π = p(k+1)×1 has a solution π ≥ 0.
This result is a consequence of Carathéodory's Theorem [Eck93], which states
that if a k-dimensional point is a convex combination of m points, then it is a
convex combination of at most k + 1 points among them. Fact 2.3 gives an
NP-certificate for the satisfiability of an ELPA-knowledge base; hence:
Corollary 2.4 The ELPA-satisfiability problem is in NP.
Finding polynomial-sized certificates is the heart of the matter. We will now
study algorithms that solve the ELPA-satisfiability problem. We start by defining
a normal form for ELPA-knowledge bases. Note that Fact 2.3 is stated for equality
only, while we also allow inequalities; the normal form will be useful both for the
algorithms and for showing that all those cases can be reduced to equality only,
with all probabilistic assignments over atoms only.
2.3 A Normal Form for Probabilistic Knowledge Base
For the sake of providing a normal form, we add a few convenient definitions.
Let N0 be a set of 0-ary atomic propositions. A propositional rule is an expression
of the form q → A1 or A2 → q, where q ∈ N0 and A1, A2 are ABoxes, with the
obvious semantics: I |= q → A1 iff I ⊭ q or I |= A1; and I |= A2 → q iff
I |= q or I ⊭ A2. We extend the notion of ABox such that
A ::= C(a) | r(a, b) | q | A ∧ A′
such that q ∈ N0 and C(a), r(a, b), A, A′ are as before; we call q, C(a), r(a, b)
atomic ABoxes.
If R is a set of propositional rules and A an ABox, R ∪ A is a set of Horn
clauses, and thus has a polynomial-time computable minimal model; so the
I-satisfiability of R ∪ A reduces to the I-satisfiability of the atomic positive
formulas in its minimal model. Thus the satisfiability problem of EL with TBoxes,
ABoxes and sets of propositional rules can be decided in polynomial time.
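The minimal model of R ∪ A can be computed by plain forward chaining over definite Horn clauses; a sketch, where the string atoms and the rule for ∃hasCause.Lyme are our own stand-ins for the EL statements (the middle rule below abbreviates an inference that, in the paper, comes from the TBox):

```python
def minimal_model(facts, rules):
    """Minimal model of a set of definite Horn clauses: `facts` are unit
    clauses; each rule is (set_of_body_atoms, head_atom).  Runs in
    polynomial time: every pass either adds an atom or stops."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in model and body <= model:
                model.add(head)
                changed = True
    return model

# Atoms q1, q3 plus string stand-ins for the EL assertions:
rules = [({'q1'}, 'hasCause.Lyme(s1)'),
         ({'hasCause.Lyme(s1)'}, 'suspectOf.Lyme(john)'),
         ({'suspectOf.Lyme(john)'}, 'q3')]
model = minimal_model({'q1'}, rules)
```

Here asserting q1 forces q3 into the minimal model, mirroring how the rules of Example 2.6 propagate probability mass from q1 to q3.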
We also distinguish deterministic ABoxes, which are assigned probability 1,
from probabilistic ABoxes, which are assigned probabilities < 1.
We then extend previous definitions with the notion of a set of propositional
rules. For the rest of this paper, an extended ELPA-knowledge base is a
4-tuple Ke = ⟨T, R, A, PA⟩, in which the probabilistic assignment of ABoxes
PA is evaluated in a (deterministic) evaluation context consisting of a triple
C = ⟨T, R, A⟩ of TBox, propositional rules and deterministic ABox A; we also
represent Ke = ⟨C, PA⟩. Clearly, an ELPA-knowledge base K is the special case of
an extended ELPA-knowledge base Ke in which R = ∅ and A is part of PA.
Now we define the normal form. A knowledge base Ke = ⟨T, R, A, PA⟩ is in
(atomic) normal form if PA is of the form
PA = {P(yi) = pi | yi is an atom, 1 ≤ i ≤ k}, with 0 < pi < 1.
In this case, PA is an atomic probability assignment evaluated in context C =
⟨T, R, A⟩. Clearly, C is a small generalisation of the deterministic knowledge
bases of, for instance, [BBL05].
By adding a small number of propositional rules, any knowledge base can be
brought into atomic normal form.
Theorem 2.5 (Normal Form) For every extended ELPA-knowledge base Ke
there exists an atomic normal form knowledge base Ke_nf that is satisfiable iff Ke
is; the former can be obtained from the latter in polynomial time O(k × ℓ), where
k = |PA| and ℓ is the largest number of conjuncts in an ABox in PA.
Example 2.6 We transform the knowledge base(s) of Example 1.1 into the
normal form. The TBox T0 and deterministic ABox A0 remain the same. We
introduce atoms q1, q2, q3 for ABoxes A1, A2, A3 respectively, which generates
the following set of rules R0:
q1 → ∃hasCause.Lyme(s1), q2 → ∃hasCause.Lyme(s2), ∃suspectOf.Lyme(john) → q3
C0 = ⟨T0, R0, A0⟩ is the evaluation context; the atomic probability assignment is
PA0 = { P(q1) = 0.1, P(q2) = 0.2, P(q3) = pub }.
The normal form knowledge base is K0e = ⟨C0, PA0⟩. Note that the probability
distribution of Example 2.2 does not satisfy K0e when pub = 0.3, but by
Theorem 2.5 there must exist other interpretations involving q1, q2, q3 and rules R0
and another probability distribution π that satisfies K0e.
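The pattern of Example 2.6 generalises: each assignment gets a fresh atom qi and one or two rules, depending on the relation. The sketch below is our own reading of the construction behind Theorem 2.5, with our own tuple encoding:

```python
def to_normal_form(assignments):
    """Translate [(abox, rel, p), ...] with 0 < p < 1 into propositional
    rules plus an atomic probability assignment, as in Example 2.6:
        P(A) >= p  ~>  rule q -> A,             P(q) = p
        P(A) <= p  ~>  rule A -> q,             P(q) = p
        P(A) =  p  ~>  rules q -> A and A -> q, P(q) = p
    Rules are (antecedent, consequent) pairs."""
    rules, atomic_pa = [], []
    for i, (abox, rel, p) in enumerate(assignments, start=1):
        q = f"q{i}"                       # fresh 0-ary atom
        if rel in ('>=', '=='):
            rules.append((q, abox))       # q -> abox
        if rel in ('<=', '=='):
            rules.append((abox, q))       # abox -> q
        atomic_pa.append((q, p))          # P(q_i) = p_i
    return rules, atomic_pa

rules, pa = to_normal_form([
    ('∃hasCause.Lyme(s1)', '>=', 0.1),
    ('∃hasCause.Lyme(s2)', '>=', 0.2),
    ('∃suspectOf.Lyme(john)', '<=', 0.3),   # pub = 0.3
])
```

On the three assignments of Example 1.1 this reproduces the rules R0 and the atomic assignment PA0 of Example 2.6.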
The following result allows us to see a satisfiable normal form knowledge base
Ke_nf as an interaction between a solution to the assignments PA constrained by the
EL-decisions of context ⟨T, R, A⟩. An interpretation I is ⟨T, R, A⟩-consistent if
I jointly satisfies T, R and A. Recall that we represent the binary I-evaluation
of ABoxes such that I(A) = 1 iff I |= A. Lemma 2.7 is the basis for the ELPA-
satisfiability solving algorithm that we present in the next section.
Lemma 2.7 A normal form knowledge base Ke = ⟨T, R, A, PA⟩ is satisfiable
iff there is a binary (k + 1) × (k + 1)-matrix AKe such that all of its {0, 1}-
columns represent interpretations that are ⟨T, R, A⟩-consistent and AKe · π = p
has a solution π ≥ 0.
3 An Algorithm for ELPA Satisfiability
We present a logic-algebraic algorithm to verify the satisfiability of a normal
form knowledge base Ke = ⟨T, R, A, PA⟩ and, if the answer is positive, to present
a satisfying model in the form of a set of k + 1 EL-interpretations, where k = |PA|,
and a probability distribution over them.
We first establish some terminology. If A is a matrix, Aj is its j-th column
and Ai is its i-th line; A(s) is the state of matrix A at step s. If A is a matrix and
b a column of compatible dimension, A[j := b] is obtained by replacing A’s j-th
column with b. A square matrix that has an inverse is non-singular. A matrix A
that satisfies conditions (2) is a feasible solution for PA.
⎡ 1    ···  1      ⎤   ⎡ π1   ⎤   ⎡ 1  ⎤
⎢ a1,1 ···  a1,k+1 ⎥   ⎢ π2   ⎥   ⎢ p1 ⎥
⎢  ⋮    ⋱    ⋮     ⎥ · ⎢  ⋮   ⎥ = ⎢ ⋮  ⎥ ,    (2)
⎣ ak,1 ···  ak,k+1 ⎦   ⎣ πk+1 ⎦   ⎣ pk ⎦

ai,j ∈ {0, 1}, A is non-singular, πj ≥ 0
We always assume that the lines of A are ordered such that the input probabilities
p1, . . . , pk in (2) are in decreasing order. By Lemma 2.7, ⟨C, PA⟩ has a solution iff
there is a partial solution A satisfying (2) such that if πj > 0 then a1,j, . . . , ak,j
represent C-consistent interpretations, 1 ≤ j ≤ k + 1. We usually
abuse terminology, calling Aj a C-consistent column.
This method is based on the PSAT-solving method of [FB11], which is an
improvement on the methods of [KP90,HJ00]; it consists of an algebraic
optimisation problem in the form of a special linear program:

minimize ⟨|J|, f⟩
subject to Aπ = p, π ≥ 0, f = Σ_{j∈J} πj, and    (3)
J = {j | Aj is C-inconsistent, πj > 0}
which is solved iteratively by the simplex algorithm [BT97]. Matrix A is a
(k + 1) × (k + 1) {0,1}-matrix whose columns represent EL-interpretations
and whose lines represent the atoms occurring in PA. An iterative step s
receives a matrix A(s) and employs a column generation method that solves an
auxiliary problem; the latter is a logic-based satisfiability problem that employs
an EL-decision procedure, generates a column that replaces some column
in A(s), obtains A(s+1) and decreases the objective function ⟨|J|, f⟩, where
⟨|J1|, f1⟩ < ⟨|J2|, f2⟩ iff 0 ≤ |J1| < |J2|, or |J1| = |J2| and f1 < f2, until its
minimum is reached. The objective function is discussed in Section 3.1.
In the iterative method, some columns may not be C-consistent, and the process
is designed so that the number of C-consistent columns Aj associated to πj > 0
never decreases.
We now define A(0), the starting feasible solution. For that, consider an empty
context C = ∅, that is, a knowledge base ⟨∅, PA⟩. As the elements of p are in
decreasing order, consider the {0, 1}-matrix I∗ = [ai,j]1≤i,j≤k+1 where ai,j = 1
iff i ≤ j; that is, I∗ is all 1's on and above the diagonal, 0's elsewhere. As p is in
decreasing order, I∗ satisfies ⟨∅, PA⟩ and is called a relaxed solution for ⟨C, PA⟩.
Clearly, I∗ is a feasible solution for PA. Make A(0) = I∗.
Example 3.1 Consider the normal form knowledge base of Example 2.6 with pub =
0.3 (left) and pub = 0.05 (right). An initial feasible solution for it is A(0) · π(0) = p,
with atoms ordered in decreasing probability, namely

⎡ 1 1 1 1 ⎤   ⎡ 0.7 ⎤   ⎡ 1   ⎤          ⎡ 1 1 1 1 ⎤   ⎡ 0.8  ⎤   ⎡ 1    ⎤
⎢ 0 1 1 1 ⎥ · ⎢ 0.1 ⎥ = ⎢ 0.3 ⎥ q3       ⎢ 0 1 1 1 ⎥ · ⎢ 0.1  ⎥ = ⎢ 0.2  ⎥ q2
⎢ 0 0 1 1 ⎥   ⎢ 0.1 ⎥   ⎢ 0.2 ⎥ q2       ⎢ 0 0 1 1 ⎥   ⎢ 0.05 ⎥   ⎢ 0.1  ⎥ q1
⎣ 0 0 0 1 ⎦   ⎣ 0.1 ⎦   ⎣ 0.1 ⎦ q1       ⎣ 0 0 0 1 ⎦   ⎣ 0.05 ⎦   ⎣ 0.05 ⎦ q3

On the left, all columns are C-consistent, so the problem is satisfiable with
solution A(0) and π as above. On the right, the second and third columns of A(0)
are C-inconsistent, so the decision is not yet made.
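The relaxed solution can be computed directly: with p sorted decreasingly, back substitution on the triangular system I∗π = p gives πi = pi − pi+1 (and πk+1 = pk), which is non-negative precisely because p is decreasing. A numpy sketch reproducing the left-hand system of Example 3.1 (the function itself is our illustration):

```python
import numpy as np

def relaxed_solution(p):
    """A(0) = I*: {0,1}-matrix with 1's on and above the diagonal.
    p must start with 1 and be in decreasing order."""
    k1 = len(p)                             # k + 1
    A = np.triu(np.ones((k1, k1)))          # the matrix I*
    pi = np.linalg.solve(A, np.asarray(p, dtype=float))
    return A, pi

A0, pi0 = relaxed_solution([1, 0.3, 0.2, 0.1])   # the pub = 0.3 instance
# pi0 is (0.7, 0.1, 0.1, 0.1): the initial distribution of Example 3.1
```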
The relaxed solution is the initial feasible solution of our method. Further
feasible solutions are obtained by generating new {0, 1}-columns and substituting
them into a feasible solution, as follows. It is well known from pivoting in the
simplex algorithm that, given any {0, 1}-column of the form b = [1 b1 · · · bk]′,
some column j can be chosen so that A[j := b] is a feasible solution.
So we simply assume there is a function merge(A, b) that computes it. Our
method moves through feasible solutions, at each step generating a column b
that decreases the value of the objective function.
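A brute-force stand-in for merge(A, b) simply tries every column; the real method would use the simplex pivot with a min-ratio rule, but for small k the sketch below (our own function and tolerance choices) suffices:

```python
import numpy as np

def merge(A, p, b):
    """Return (A[j := b], pi) for some column j such that the result is
    non-singular and pi = A[j := b]^{-1} p is non-negative; None if no
    column works.  Brute-force stand-in for the simplex pivot."""
    p = np.asarray(p, dtype=float)
    for j in range(A.shape[1]):
        M = A.copy()
        M[:, j] = b
        if abs(np.linalg.det(M)) < 1e-12:
            continue                        # singular: not a basis
        pi = np.linalg.solve(M, p)
        if np.all(pi >= -1e-12):
            return M, pi                    # feasible solution found
    return None

A0 = np.triu(np.ones((4, 4)))               # relaxed solution I*
M, pi = merge(A0, [1, 0.3, 0.2, 0.1], np.array([1, 0, 1, 1.0]))
```

Here the new column [1 0 1 1]′ replaces the last column of I∗ and yields π = (0.6, 0.2, 0.1, 0.1), another feasible solution of the pub = 0.3 instance.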
3.1 The Objective Function
In a feasible solution A such that Aπ = p and π ≥ 0, some columns may not be
C-consistent. Let J = {j | Aj is C-inconsistent and πj > 0}; J is the set of column
indexes in A corresponding to C-inconsistent columns with non-null associated
probability; clearly |J| ≤ k + 1. If J = ∅, we call A a solution. As |J| = 0 when
a solution is found, it is one component of the objective function. However, it is
not guaranteed that, if a solution exists, we can find a sequence of iterations in
which |J| decreases at every step s.

Algorithm 3.1 ELPA-satisfiability solver
Input: A normal form ELPA knowledge base ⟨T, R, A, PA⟩.
Output: Solution A; or "No", if unsatisfiable.
1: p := sortDecrescent({1} ∪ {pi | P(yi) = pi ∈ PA});
2: A(0) := I∗; s := 0; compute ⟨|J(s)|, f(s)⟩;
3: while ⟨|J(s)|, f(s)⟩ ≠ ⟨0, 0⟩ do
4:   b(s) = GenerateColumn(A(s), p, C);
5:   return "No" if b1(s) < 0; /* instance is unsat */
6:   A(s+1) = merge(A(s), b(s));
7:   increment s; compute ⟨|J(s)|, f(s)⟩;
8: end while
9: return A(s); /* PSAT instance is satisfiable */
The second component of the objective function is the sum of probabilities
of C-inconsistent columns, f = Σ_{j∈J} πj. Note that f and |J| become 0 at the
same time, which occurs iff a positive decision is reached. The simplex algorithm
with appropriate column generation ensures that, if there is a solution, it can
be obtained in finitely many steps of non-increasing f-values. We thus use a
combined objective function ⟨|J|, f⟩ ordered lexicographically.
We first try to minimise the number of C-inconsistent columns; if this is not
possible, we minimise f, keeping J constant. So a knowledge base instance
⟨C, PA⟩ associated to program (3) is satisfiable iff the objective function is
minimal at ⟨0, 0⟩.
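Since Python tuples already compare lexicographically, the combined objective is a one-liner; in the sketch below, the consistency test is an abstract callback standing in for the context C (our own encoding):

```python
def objective(A, pi, consistent):
    """The pair <|J|, f> for a feasible solution (A, pi): J collects the
    columns with positive probability that the context rejects, f is
    their total probability.  Tuples compare lexicographically."""
    J = [j for j in range(len(pi))
         if pi[j] > 0 and not consistent(tuple(row[j] for row in A[1:]))]
    return (len(J), sum(pi[j] for j in J))

# Right-hand instance of Example 3.1: columns (1,0,0) and (1,1,0) are
# C-inconsistent there.
Astar = [[1, 1, 1, 1], [0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]]
bad = {(1, 0, 0), (1, 1, 0)}
obj = objective(Astar, [0.8, 0.1, 0.05, 0.05], lambda c: c not in bad)
```

For that instance the objective is ⟨2, 0.15⟩: two inconsistent columns carrying total probability 0.1 + 0.05.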
Assume there is a function GenerateColumn(A, p, C), presented in Section 3.2,
that generates a C-consistent column that decreases the objective function, if one
exists; otherwise it returns an illegal column of the form [−1 · · · ]. Algorithm 3.1
presents a method that decides a PSAT instance by solving problem (3).
Algorithm 3.1 starts with a relaxed solution for ⟨C, PA⟩ (line 2) and, via
column generation (line 4), generates another feasible solution (line 6), decreasing
the objective function, until either the search fails (line 5) or a solution is found;
the latter only occurs with the termination of the loop in lines 3–8, when the
objective function reaches ⟨0, 0⟩.
3.2 Column Generation for ELPA
Algorithm 3.1 is, unsurprisingly, almost the same as the PSAT-solving algorithm
in [FB11]; the only difference between the two rests in the column generation
method GenerateColumn(A, p, C).
It has been shown in [FB11] that, to eliminate a C-inconsistent column Aj
associated to πj > 0, a new C-consistent column b = [1 y1 . . . yk]′ to substitute
Aj must satisfy the set of linear inequalities:

(LRij)   ((A−1)j πi − (A−1)i πj) [1 y1 . . . yk]′ ≥ 0,   1 ≤ i ≤ k + 1    (4)
Such a column is here obtained by a combination of a SAT solver, which
guarantees that (4) is verified, with an EL-solver that guarantees that b is C-consistent.
This combination can be done in several ways.
(a) By coding the polynomial-time EL-decision in a SAT solver.
(b) By using EL-theories in an SMT (SAT Modulo Theories) engine.
(c) By coupling an EL-solver at the end of the SAT solver, rejecting C-inconsistent
answers, and proceeding with the SAT solver after the rejection.
The latter option is perhaps the most straightforward and is the one we
employ here.
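Option (c) has a simple shape: a loop that asks the SAT side for the next candidate column and lets the EL side veto it. In the sketch below both solvers are abstracted as callbacks, and plain enumeration stands in for the SAT solver (viable only for small k; the encoding is ours):

```python
from itertools import product

def generate_column(k, passes_inequalities, el_consistent):
    """Enumerate candidate columns b = [1 y1 ... yk]'; return the first
    one that satisfies the algebraic side conditions (standing in for
    inequalities (4), via `passes_inequalities`) and survives the EL
    consistency check.  Returns None in place of the illegal [-1 ...]."""
    for ys in product((0, 1), repeat=k):
        b = (1,) + ys                     # candidate column
        if passes_inequalities(b) and el_consistent(ys):
            return b
    return None                           # no admissible column exists
```

A candidate that passes the algebraic test but is rejected by the EL callback is simply skipped, and the search proceeds with the next candidate, exactly as in the rejection loop of option (c).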
Example 3.2 Recall the matrix A(0) in Example 3.1 on the right, whose second
and third columns were C-inconsistent. Applying Algorithm 3.1 with column
generation as above, all three columns generated by the SAT solver were rejected
by the EL-solver, so no column could be generated that brought the objective
function to ⟨0, 0⟩. Therefore the corresponding ELPA-knowledge base is
unsatisfiable.
Theorem 3.3 Algorithm 3.1 with column generation as above is correct and
always terminates.
4 Conclusions and Further Work
We have introduced the notion of ELPA-knowledge bases and their satisfiability
problem, and we have shown that the problem has a finite version that can
be tackled by algorithms that resemble PSAT solvers. We have also provided
complexity upper bounds for these algorithms.
Algorithm 3.1 can, in theory, take an exponential number of steps. It remains
an open problem to find an example in which such an exponential number of
steps occurs. It also remains an open problem whether a polynomial-time
algorithm exists for ELPA-satisfiability. Our plan for future work is to investigate
the practical behavior of our algorithms, and to explore logics that allow for
probabilities over TBoxes and for stochastic independence.
References
[BBL05] Franz Baader, Sebastian Brandt, and Carsten Lutz. Pushing the EL envelope.
In IJCAI05, 19th International Joint Conference on Artificial Intelligence,
pages 364–369, 2005.
[Bra04] Sebastian Brandt. Polynomial time reasoning in a description logic with
existential restrictions, GCI axioms, and - what else? In Proceedings of the
16th European Conference on Artificial Intelligence, ECAI'2004, pages 298–
302, 2004.
[BT97] Dimitris Bertsimas and John N. Tsitsiklis. Introduction to linear optimization.
Athena Scientific, 1997.
[CP09] F. G. Cozman and R. B. Polastro. Complexity analysis and variational inference
for interpretation-based probabilistic description logics. In Proceedings of
the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI),
2009.
[Eck93] J. Eckhoff. Helly, Radon, and Caratheodory type theorems. In P. M. Gru-
ber and J. M. Wills, editors, Handbook of Convex Geometry, pages 389–448.
Elsevier Science Publishers, 1993.
[FB11] Marcelo Finger and Glauber De Bona. Probabilistic satisfiability: Logic-based
algorithms and phase transition. In IJCAI 2011 (to appear), 2011.
[GKP88] G. Georgakopoulos, D. Kavvadias, and C. H. Papadimitriou. Probabilistic
satisfiability. J. of Complexity, 4(1):1–11, 1988.
[HJ00] P. Hansen and B. Jaumard. Probabilistic satisfiability. In Handbook of Defea-
sible Reasoning and Uncertainty Management Systems, vol.5. Springer, 2000.
[JFSS06] B. Jaumard, A. Fortin, I. Shahriar, and R. Sultana. First order probabilistic
logic. In NAFIPS 2006, pages 341–346, 2006.
[KP90] D. Kavvadias and C. H. Papadimitriou. A linear programming approach to
reasoning about probabilities. AMAI, 1:189–205, 1990.
[LS10] Carsten Lutz and Lutz Schröder. Probabilistic description logics for subjec-
tive uncertainty. In KR 2010, 12th International Conference of Knowledge
Representation and Reasoning. AAAI Press, 2010.
[Luk08] T. Lukasiewicz. Expressive probabilistic description logics. Artificial
Intelligence, 172(6-7):852–883, 2008.
[GM10] O. Gries, and R. Möller. Gibbs sampling in probabilistic description logics
with deterministic dependencies. First International Workshop on Uncer-
tainty in Description Logics (Uni-DL), 2010.
[CL06] P. C. G. Costa and K. B. Laskey. PR-OWL: A framework for probabilistic
ontologies. Conf. on Formal Ontology in Information Systems, 2006.
[DFL08] C. D’Amato, N. Fanizzi, and T. Lukasiewicz. Tractable reasoning with
Bayesian description logics. Int. Conf. on Scalable Uncertainty Management
(LNAI 5291), pages 146–159, 2008.
[DPP06] Z. Ding, Y. Peng, and R. Pan. BayesOWL: Uncertainty modeling in semantic
web ontologies. Soft Computing in Ontologies and Semantic Web, pages 3–29.
Springer, Berlin/Heidelberg, 2006.
[DS05] M. Dürig and T. Studer. Probabilistic ABox reasoning: preliminary results.
In Description Logics, pages 104–111, 2005.
[GT07] L. Getoor and B. Taskar. Introduction to Statistical Relational Learning. MIT
Press, 2007.
[Hal03] J. Y. Halpern. Reasoning about Uncertainty. MIT Press, Cambridge, Mas-
sachusetts, 2003.
[Hei94] J. Heinsohn. Probabilistic description logics. Conf. Uncertainty in AI, pages
311–318, 1994.
[Jae94] M. Jaeger. Probabilistic reasoning in terminological logics. Principles of
Knowledge Representation, pages 461–472, 1994.
[KLP97] D. Koller, A. Y. Levy, and A. Pfeffer. P-CLASSIC: A tractable probablistic
description logic. AAAI Conf. on AI, pages 390–397, 1997.
[LS08] T. Lukasiewicz and U. Straccia. Managing uncertainty and vagueness in de-
scription logics for the semantic web. Journal of Web Semantics, 6(4):291–
308, November 2008.
[Seb94] F. Sebastiani. A probabilistic terminological logic for modelling information
retrieval. 17th Conf. on Research and Development in Information Retrieval,
pages 122–130, Dublin, Ireland, 1994. Springer-Verlag.