=Paper=
{{Paper
|id=None
|storemode=property
|title=
On the Probabilistic Logical Modelling of Quantum and Geometrically-Inspired IR
|pdfUrl=https://ceur-ws.org/Vol-704/28.pdf
|volume=Vol-704
|dblpUrl=https://dblp.org/rec/conf/iir/SmeraldiMFR11
}}
==
On the Probabilistic Logical Modelling of Quantum and Geometrically-Inspired IR
==
On the Probabilistic Logical Modelling of
Quantum and Geometrically-Inspired IR
Fabrizio Smeraldi¹, Miguel Martinez-Alvarez¹, Ingo Frommholz², and Thomas Roelleke¹
¹ School of Electronic Engineering and Computer Science, Queen Mary University of London, UK
² School of Computing Science, University of Glasgow, UK
Abstract. Information Retrieval approaches can mostly be classed as probabilistic, geometric or logic-based. Recently, a new unifying framework for IR has emerged that integrates a probabilistic description within a geometric framework, namely vectors in Hilbert spaces. The geometric model leads naturally to a predicate logic over linear subspaces, also known as quantum logic. In this paper we show the relation between this model and classic concepts such as the Generalised Vector Space Model, highlighting similarities and differences. We also show how some fundamental components of quantum-based IR can be modelled in a descriptive way using a well-established tool, namely Probabilistic Datalog.
1 Introduction
The three main branches of IR are probabilistic, geometric and logic-based IR. [13] discusses the relationship between these branches, showing that geometric approaches can have a probabilistic or logic-based interpretation, as is known from quantum probabilities and quantum logics. Subsequent work discusses the prospect of such an interpretation for context-based or interactive IR [2] and specific retrieval tasks like diversity and novelty [7, 5]. On the other hand, logic-based approaches combine concepts from databases and IR, and offer advanced means to flexibly create sophisticated retrieval functions and to handle structured queries. Combining logic-based approaches with geometric ones is thus a straightforward step, which was started in [11].
In this paper, we contribute a first step towards extending a well-established logic-based framework, Probabilistic Datalog, with concepts from quantum mechanics. Probabilistic Datalog has been used for modelling several IR tasks such as Information Extraction, and has been reported to produce programs that are easy to understand, debug and modify [12]. After introducing the main geometrical concepts from quantum mechanics, we show how a well-known geometric retrieval approach, the generalised vector space model (GVSM), relates to quantum probabilities and the total probability theorem. We then explain how geometric concepts (e.g. Euclidean normalisation) can be realised in probabilistic Datalog. In particular, we address how traditional maximum-likelihood estimates (L1-normalisation) and Euclidean estimates (L2-normalisation) are expressed and related in PDatalog. The main technical contribution of this paper is the probabilistic logical modelling of the mathematical concepts of GIR, together with theorems and proofs showing the correctness of the PDatalog programs that model L1- and L2-based probabilities.
2 Geometric IR (GIR)
2.1 Background
Quantum logic, i.e. logic on Hilbert spaces, allows us to cast information retrieval
in a geometric context. A Hilbert space is a vector space endowed with an inner
product - in the finite-dimensional case, we can think of the Euclidean space.
In quantum logic, predicates are represented by linear subspaces. If V and W are two subspaces (predicates), conjunction is given by their intersection (also a subspace) and alternation by the span of their union [13]. Negation is represented by the orthogonal complement. With this in mind, projections and orthogonality become
important notions, and a special notation (the bra-ket notation) is introduced
to facilitate computation.
2.2 Notation and computation
Given a Hilbert space H, vectors are denoted by Greek letters in an angled bracket, e.g. $|\psi\rangle$. This is called a "ket". The corresponding element of the dual space is denoted by $\langle\psi|$, a "bra". The inner product of two vectors $|\phi\rangle$ and $|\psi\rangle$ is given by $\langle\psi|\phi\rangle$, the aptly-named "bracket" of the two vectors.
In Euclidean spaces the scalar product establishes a natural correspondence between the bra and ket spaces as follows:

$$\langle\psi_K|\cdot\rangle := K(\psi, \cdot) \quad (1)$$

where $K(\cdot, \cdot)$ is the scalar product and $\psi$ is a (ket) vector in H. This correspondence is invariant up to a rotation of the basis. We can therefore use the scalar product in the space to compute brackets, essentially thinking of $|\phi\rangle$ as a column vector, $\langle\psi|$ as a row vector, and $\langle\psi|\phi\rangle$ as the scalar product $\psi^T \phi$ (the observables H below would be symmetric matrices).
2.3 Representing probabilities
As we have seen, a subspace identifies a predicate. We can represent probabil-
ities by a state vector normalised to one, so that the square of the norm of its
projection onto the subspace represents the probability of the predicate being
true [13]. In the light of the above, it is natural to use a Euclidean normalisation $\langle\phi|\phi\rangle = 1$. Remembering the analogy with the dot product above, this is expressed in components as follows: let $\{|e_i\rangle\}$ be the basis vectors; then

$$\langle\phi|\phi\rangle = \sum_i \langle\phi|e_i\rangle \langle e_i|\phi\rangle = \sum_i |\langle e_i|\phi\rangle|^2 = \sum_i \phi_i^2 \quad (2)$$

where by $\phi_i$ we indicate the component of $\phi$ along $e_i$. The projection of a state $|\phi\rangle$ onto $|e_i\rangle$ is obtained by applying the projection operator $|e_i\rangle\langle e_i|$ to $|\phi\rangle$, which following the mechanics of the notation gives

$$(|e_i\rangle\langle e_i|)\,|\phi\rangle = (\langle e_i|\phi\rangle)\,|e_i\rangle = \phi_i\,|e_i\rangle. \quad (3)$$

If $|\phi\rangle$ is normalised, it is immediate that $\phi_i^2$ can be interpreted as a probability.
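As an illustration, the following Python/NumPy sketch (ours; the state vector is an arbitrary example, not taken from the paper) computes probabilities as the squared components of a normalised state, as in Equation 2:

import numpy as np

# Orthonormal basis vectors |e1>, |e2> of a 2-dimensional real Hilbert space.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# An arbitrary state, normalised so that <phi|phi> = 1.
phi = np.array([3.0, 4.0])
phi = phi / np.linalg.norm(phi)  # Euclidean (L2) normalisation

# Components phi_i = <e_i|phi>; the projection onto |e_i> is phi_i |e_i> (Eq. 3).
components = np.array([e1 @ phi, e2 @ phi])

# The squared components sum to 1, so they can be read as probabilities (Eq. 2).
probabilities = components ** 2
print(probabilities, probabilities.sum())  # [0.36 0.64] 1.0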
2.4 Representing retrieval
Documents are represented as vectors in a Hilbert space. In a two-dimensional space with basis vectors $|drive\rangle$ and $|school\rangle$, a document about driving schools might be seen as a normalised coherent mixture of the basis states, taken for instance with equal weight:

$$|\psi\rangle = \frac{\sqrt{2}}{2}|drive\rangle + \frac{\sqrt{2}}{2}|school\rangle \quad (4)$$

$|\psi\rangle$ is a so-called superposition of the two states $|drive\rangle$ and $|school\rangle$. This differs from the case in which we do not know whether the document is purely about driving or purely about schools. In quantum mechanics, such a condition is called a mixed state, which is represented by a density operator:

$$\rho = \frac{1}{2}|drive\rangle\langle drive| + \frac{1}{2}|school\rangle\langle school| \quad (5)$$

This is the analogue of the classical density matrices; see Equation 19.
These two alternative descriptions seem similar if we are interested, for instance, in the probability that the document is about driving:

$$|\langle drive|\psi\rangle|^2 = \left|\frac{\sqrt{2}}{2}\langle drive|drive\rangle + \frac{\sqrt{2}}{2}\langle drive|school\rangle\right|^2 = \frac{1}{2} \quad (6)$$
because $|drive\rangle$ and $|school\rangle$ are orthogonal. Similarly, abbreviating $|drive\rangle$ as $|drv\rangle$ and $|school\rangle$ as $|sch\rangle$:

$$\mathrm{Tr}(\rho\,|drv\rangle\langle drv|) = \langle drv|\left(\tfrac{1}{2}|sch\rangle\langle sch| + \tfrac{1}{2}|drv\rangle\langle drv|\right)(|drv\rangle\langle drv|)\,|drv\rangle + \langle sch|\left(\tfrac{1}{2}|sch\rangle\langle sch| + \tfrac{1}{2}|drv\rangle\langle drv|\right)(|drv\rangle\langle drv|)\,|sch\rangle = \tfrac{1}{2}\langle drv|sch\rangle\langle sch|drv\rangle + \tfrac{1}{2}\langle drv|drv\rangle\langle drv|drv\rangle = \tfrac{1}{2} \quad (7)$$
where Tr denotes the trace function, the sum of the diagonal elements of a matrix. However, the superposition of Equation 4 expresses the extent to which the document partakes of both concepts $|drive\rangle$ and $|school\rangle$. This is made evident by the following example. Suppose we are interested in finding documents about driving schools. We can then define the following observable:
$$H = |drive\rangle\langle school| + |school\rangle\langle drive| \quad (8)$$

In matrix notation it is represented by a symmetric matrix (one of the Pauli matrices):

$$H = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad (9)$$
It is easy to see that the average value of the observable H on $|\psi\rangle$ is as follows:

$$\langle\psi|H|\psi\rangle = \frac{\sqrt{2}}{2}(\langle drv| + \langle sch|)\,(|drv\rangle\langle sch| + |sch\rangle\langle drv|)\,\frac{\sqrt{2}}{2}(|drv\rangle + |sch\rangle) = \frac{1}{2}(\langle drv| + \langle sch|)(|sch\rangle + |drv\rangle) = 1 \quad (10)$$
Indeed, one can show that H has eigenvalues +1 and -1, and that the corresponding eigenvectors are respectively $|\psi\rangle$ as in Equation 4 and

$$|\phi\rangle = \frac{\sqrt{2}}{2}|drive\rangle - \frac{\sqrt{2}}{2}|school\rangle \quad (11)$$
Now, if we interpret the components $\psi_i$ of the vectors $\psi$ as TF-IDF weights, we obtain:

$$\psi_i = TF(t_i, \psi) \cdot IDF(t_i) \quad (12)$$

Hereby, $t_i$ is the term corresponding to dimension $i$, $TF(t_i, \psi)$ is a frequency component, and $IDF(t_i)$ is a measure reflecting the inverse document frequency (e.g. $IDF(t_i) = -\log(n_D(t_i)/(N_D - n_D(t_i)))$, where $n_D(t_i)$ is the number of documents in which $t_i$ occurs, and $N_D$ is the total number of documents). Note that the vector component is negative in the case $n_D(t_i) > N_D/2$.
Of course, any other frequency-based or probabilistically motivated measure can be chosen as a vector component.
We can see that $Span(|\psi\rangle)$ represents the subspace on which the frequencies of the terms are positively correlated, while $Span(|\phi\rangle)$ is the subspace on which they are negatively correlated.
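To make the contrast between the superposition and the mixed state concrete, here is a small NumPy sketch (our illustration of Equations 4-10, not code from the paper). Both descriptions assign probability 1/2 to $|drive\rangle$, but the observable H separates them: its average is 1 on the superposition and 0 on the mixed state.

import numpy as np

drive  = np.array([1.0, 0.0])   # |drive>
school = np.array([0.0, 1.0])   # |school>

# Superposition (Eq. 4) vs. mixed state (Eq. 5).
psi = (np.sqrt(2) / 2) * (drive + school)
rho = 0.5 * np.outer(drive, drive) + 0.5 * np.outer(school, school)

# Both assign probability 1/2 to "the document is about driving" (Eqs. 6-7).
p_drive_pure  = (drive @ psi) ** 2                      # |<drive|psi>|^2
p_drive_mixed = np.trace(rho @ np.outer(drive, drive))  # Tr(rho |drv><drv|)

# The observable H (Eqs. 8-9) distinguishes the two descriptions.
H = np.outer(drive, school) + np.outer(school, drive)   # [[0, 1], [1, 0]]
avg_pure  = psi @ H @ psi       # = 1: |psi> is the +1 eigenvector (Eq. 10)
avg_mixed = np.trace(rho @ H)   # = 0 for the mixed state

print(p_drive_pure, p_drive_mixed, avg_pure, avg_mixed)  # 0.5 0.5 1.0 0.0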
3 Relationships between GIR and Traditional Concepts
This section outlines the relationships between “Geometric IR” and traditional
concepts, namely the GVSM (section 3.1) and the total probability (section 3.2).
3.1 Generalised Vector-Space Model (GVSM)
The scalar product of two vectors can be re-written using the identity matrix as an intermediate between the vectors:

$$d^T \cdot q := d^T \cdot I \cdot q \quad (13)$$

where by $T$ we indicate transposition.
More generally, the GVSM is based on the idea of using a term-by-term matrix G between document and query:

$$d^T \cdot q := d^T \cdot G \cdot q \quad (14)$$

The matrix G may be used to associate semantically related terms. For example, setting $g_{12} = 1$ (and $g_{21} = 0$) leads to the following equation:

$$d^T \cdot G \cdot q = d_1 g_{11} q_1 + d_1 g_{12} q_2 + d_2 g_{22} q_2$$
Now, the first term (e.g. the term "dog") is related to the second term (e.g. the term "animal"), i.e. a query containing "animal" will retrieve documents that contain "dog". Here, $g_{21}$ may differ from $g_{12}$ if we wish to model a generalisation of terms. For synonyms, e.g. "classification" and "categorisation", we have $g_{ij} = g_{ji}$, i.e. the matrix G is symmetric for synonyms.
In a real-valued Hilbert space, a symmetric matrix G is a Hermitian operator and corresponds exactly to the observable H we introduced in Equation 8.
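A minimal NumPy sketch of Equations 13-14 (our illustration; the two-term vocabulary follows the "dog"/"animal" example above):

import numpy as np

# Term space: index 0 = "dog", index 1 = "animal".
d = np.array([1.0, 0.0])   # a document containing "dog"
q = np.array([0.0, 1.0])   # a query containing "animal"

# With G = I, Eq. 14 reduces to the plain scalar product of Eq. 13.
G = np.eye(2)
print(d @ G @ q)           # 0.0: no match without term associations

# Setting g12 = 1 (g21 stays 0: generalisation, not synonymy) associates
# "dog" with "animal", so the query now retrieves the document.
G[0, 1] = 1.0
print(d @ G @ q)           # 1.0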
3.2 Total Probability
The total probability theorem is as follows:

$$P(q|d) = \sum_{t \in T} P(q|t) \cdot P(t|d) \quad (15)$$

Here, q and d are events, and T is a set of disjoint events. In the context of IR, let q be a query, d be a document, and t be a term. Using Bayes' theorem for $P(t|d)$, the theorem can be rewritten:

$$P(q|d) = \frac{1}{P(d)} \cdot \sum_t P(q|t) \cdot P(d|t) \cdot P(t) \quad (16)$$
This form relates the total probability to the GVSM, as is demonstrated in the following:

$$d = (P(d|t_1), \ldots, P(d|t_n)) \quad (17)$$

$$q = (P(q|t_1), \ldots, P(q|t_n)) \quad (18)$$

$$G = \begin{pmatrix} P(t_1) & \cdots & 0 \\ 0 & \ddots & 0 \\ 0 & \cdots & P(t_n) \end{pmatrix} \quad (19)$$
The matrix G thus defined is a representation of a document vector d in terms of probabilities of disjoint events. In the quantum framework, disjoint events are orthogonal subspaces; thus G corresponds to the density matrix ρ introduced in Equation 5. We note that there is no classical analogue of the coherent quantum superposition introduced in Equation 4.
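The correspondence between Equation 16 and the GVSM product can be checked numerically; in the following Python sketch the probabilities are made-up illustrative values, not data from the paper:

import numpy as np

# Three disjoint events (terms); P(t) forms the diagonal of G (Eq. 19).
p_t = np.array([0.5, 0.3, 0.2])
G = np.diag(p_t)

# Document and query vectors as in Eqs. 17-18 (illustrative values).
d = np.array([0.4, 0.2, 0.1])    # d_i = P(d|t_i)
q = np.array([0.6, 0.3, 0.5])    # q_i = P(q|t_i)
p_d = (d * p_t).sum()            # P(d) = sum_t P(d|t) P(t)

# Eq. 16 written as a GVSM product with the diagonal matrix G.
p_q_given_d = (q @ G @ d) / p_d

# Eq. 15 directly: P(q|d) = sum_t P(q|t) P(t|d), with P(t|d) via Bayes.
p_t_given_d = d * p_t / p_d
assert abs((q * p_t_given_d).sum() - p_q_given_d) < 1e-12
print(p_q_given_d)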
4 Probabilistic Datalog: A Language for Probabilistic
Logical Modelling
Probabilistic Logical Modelling (PLM) is a modelling approach based on probability theory and logic. In principle, PLM is a theoretical framework composed of possible-world semantics (logic and probability theory). Probabilistic extensions of standard languages (e.g. Probabilistic Datalog [3], probabilistic SQL [10]) are instances of probabilistic modelling languages. We present a brief overview of Probabilistic Datalog and show its application for the probabilistic logical modelling of the GVSM, the total probability, and TF-IDF.
Probabilistic Datalog (PDatalog) combines Datalog (a query language used in deductive databases) and probability theory [3, 9]. It was extended to improve its expressiveness and scalability for modelling ranking models [10]. In addition, it is a flexible platform that has been used as an intermediate processing layer for semantic/terminological logics in different IR tasks such as ad-hoc retrieval [4, 6] or annotated document retrieval [1]. It allows Bayesian goals and subgoals (for the modelling of probability estimation and conditional probabilities) and assumptions like SUM, PROD, SQRT or ROOT for combining the probabilities of tuples. For example, given information about the grades obtained in a given degree, P(grade|person) can be inferred with the model shown in Figure 1.
4.1 Probabilistic Logical Modelling of the GVSM
Figure 2 shows the modelling of a scalar product and of the GVSM in PDatalog, using the relation "g" as the matrix representing the relationships between different terms.
4.2 Probabilistic Logical Modelling of the Total Probability
The modelling of the total probability is illustrated in Figure 3. P(term|doc) and P(query|term) are obtained using the Bayesian assumption of PDatalog.
4.3 Probabilistic Logical Modelling of TF-IDF
Figure 4 shows a PD program illustrating the main ingredients for the proba-
bilistic logical modelling of TF-IDF.
p_grade_degree SUM(Grade, Degree) :- grade(Student, Grade, Degree) | (Degree);

# Extensional evidence:
grade(john, "B", art); grade(anna, "A", art);
grade(mary, "B", maths); grade(peter, "B", maths); grade(paul, "C", maths);

?- p_grade_degree(Grade, Degree);
# 0.5 ("B", art); 0.5 ("A", art); 0.667 ("B", maths); 0.333 ("C", maths)

# For a person Mr. X that has joined both arts and maths, what is the
# probability of "A", i.e. P(grade|person)?
0.5 register(mr_x, maths); 0.5 register(mr_x, arts);

# P(degree|person)
p_degree_person(Degree, Person) :- register(Person, Degree) | (Person);

# P(grade|person): using the total probability
p_grade_person SUM(Grade, Person) :-
  p_grade_degree(Grade, Degree) & p_degree_person(Degree, Person);

Fig. 1. PDatalog example: Estimation of P(grade|degree)
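For readers less familiar with PDatalog, the following Python sketch (ours, not part of the paper) recomputes the example of Figure 1; note that Figure 1 registers Mr. X for "arts" while the grade facts use "art", a spelling we unify here:

from collections import defaultdict

# Extensional evidence from Figure 1.
grades = [("john", "B", "art"), ("anna", "A", "art"),
          ("mary", "B", "maths"), ("peter", "B", "maths"),
          ("paul", "C", "maths")]

# P(grade|degree): relative frequencies conditioned on the evidence key Degree.
counts, totals = defaultdict(float), defaultdict(float)
for _, grade, degree in grades:
    counts[(grade, degree)] += 1.0
    totals[degree] += 1.0
p_grade_degree = {k: n / totals[k[1]] for k, n in counts.items()}

# P(degree|person): Mr. X is registered in both degrees with weight 0.5.
p_degree_person = {("maths", "mr_x"): 0.5, ("art", "mr_x"): 0.5}

# Total probability: P(grade|person) = sum_degree P(grade|degree) * P(degree|person).
p_grade_person = defaultdict(float)
for (grade, degree), p_gd in p_grade_degree.items():
    p_grade_person[grade] += p_gd * p_degree_person.get((degree, "mr_x"), 0.0)
print(dict(p_grade_person))  # {'B': 0.583, 'A': 0.25, 'C': 0.167} (approx.)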
# vec(d) * vec(q) and vec(d) * G * vec(q)
scalar_product SUM(D, Q) :- vec_q(T, Q) & vec_d(T, D);
g_scalar_product SUM(D, Q) :- vec_q(T1, Q) & g(T1, T2) & vec_d(T2, D);

# Example: main diagonal of G:
g(sailing, sailing); g(boats, boats); ...

# Lower and upper triangles:
g(sailing, boats); # For a query with "sailing" retrieve docs containing "boats"
g(boats, sailing); # For a query with "boats" ...

Fig. 2. GVSM in PDatalog
# P(term|doc)
p_t_d(Term, DocId) :- term(Term, DocId) | (DocId);

# P(query|term)
p_q_t(Term, QueryId) :- qterm(Term, QueryId) & pidf(Term);

# P(query|doc)
p_q_d SUM(DocId, QueryId) :- p_q_t(Term, QueryId) & p_t_d(Term, DocId);

Fig. 3. Total Probability in PDatalog
# Within-document term probability:
# P(t|d):
p_t_d SUM(T, D) :- term(T, D) | DISJOINT(D);

# Collection-wide IDF-based term probability:
# Probabilistic IDF:
pidf(T) | MAX_IDF() :- term(T, D);

# Query term weighting:
w_qterm(T, Q) :- qterm(T, Q) & pidf(T);

# Normalisation of query term probabilities:
norm_w_qterm(T, Q) :- w_qterm(T, Q) | DISJOINT(Q);

# Retrieval:
retrieve SUM(D, Q) :- norm_w_qterm(T, Q) & p_t_d(T, D);

Fig. 4. Probabilistic Logical Modelling of TF-IDF
For "pidf(T)", the probability estimation is based on idf(t)/maxidf, and this value between 0 and 1 has a probabilistic semantics (see [8]): namely, the probability of a term occurring, P(t occurs), equals the probability of it being not informative in maxidf trials, where the probability of being informative is P(t informs) := idf(t)/maxidf.
The details of the meaning of TF- and IDF-based probabilities are beyond the focus of this paper; what is important is that the TF- and IDF-based probabilities described in the PDatalog program have a proper probabilistic semantics, and this leads to a probabilistic interpretation of the TF-IDF score.
The rule for "w_qterm(T,Q)" models IDF-based query term weighting. This is followed by a normalisation. The normalised tuple probabilities are then used for obtaining a probabilistic TF-IDF-based score in "retrieve(D,Q)".
In summary, this example of the probabilistic logical modelling of TF-IDF highlights that TF-based and IDF-based probabilities are combined to obtain a probabilistic TF-IDF-based score.
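As a sketch of how the pipeline of Figure 4 behaves end to end, the following Python code (ours; the toy collection and the idf variant $-\log(n_D(t)/N_D)$ are assumptions made for illustration) computes a TF-IDF-style score from TF-based and pidf-based probabilities:

import math
from collections import Counter

# Toy collection (illustrative only).
docs = {"doc1": ["sailing", "east"],
        "doc2": ["sailing", "sailing", "boats"],
        "doc3": ["east", "coast"]}
query = ["sailing", "boats"]

# P(t|d): within-document term probability (DISJOINT, i.e. L1 normalisation).
p_t_d = {d: {t: n / len(ts) for t, n in Counter(ts).items()}
         for d, ts in docs.items()}

# pidf(t) = idf(t) / max_idf, a value in [0, 1] with probabilistic semantics.
N = len(docs)
n_D = Counter(t for ts in docs.values() for t in set(ts))
idf = {t: -math.log(n / N) for t, n in n_D.items()}
max_idf = max(idf.values()) or 1.0
pidf = {t: v / max_idf for t, v in idf.items()}

# Query term weighting and normalisation (DISJOINT over the query).
w_qterm = {t: pidf.get(t, 0.0) for t in query}
norm = sum(w_qterm.values()) or 1.0
norm_w_qterm = {t: v / norm for t, v in w_qterm.items()}

# retrieve SUM(D, Q): aggregate the matched query/document probabilities.
scores = {d: sum(norm_w_qterm.get(t, 0.0) * p for t, p in tps.items())
          for d, tps in p_t_d.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # doc2 ranks first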
5 Probabilistic Logical Modelling of Geometric IR
Geometric IR can be viewed as a perspective for IR where the modelling of
documents and queries is based on vectors. Quantum-inspired IR may be viewed
as a modelling approach that combines geometric IR and probability theory.
Essentially, the vector components are probabilities, and the combination of
vectors (and/or matrices) yields probabilities.
The following sections present the modelling of GIR. Each section is related to
the respective GIR section in which the mathematical foundations were reviewed.
5.1 Modelling probabilities (GIR 2.3)
As pointed out above, a central property of Quantum-inspired IR is related to the Euclidean norm, also referred to as the L2 norm:

$$L_2(x) := \sqrt{\sum_i x_i^2} \quad (20)$$

The L1 norm is simply the sum over the vector components:

$$L_1(x) := \sum_i x_i \quad (21)$$

With respect to probabilistic logical modelling, the L1 norm corresponds to the assumption 'DISJOINT' (yielding the maximum-likelihood (ML) estimate), and the L2 norm is covered by the assumption 'EUCLIDEAN'.
Figure 5 shows the modelling of ML-based and Euclidean-based probabilities.
# Maximum-Likelihood P(t|d): based on the L1-norm.
# P_L1(t|d) = (sum_{t,d in term} P_term(t,d)) / (sum_{t' in d} P_term(t',d))
p_L1_t_d SUM(T, D) :- term(T, D) | DISJOINT(D);

# Example for doc2[sailing sailing boats]: docLen(doc2)=3
# L1(doc2) = 3 = 2 + 1
# 0.667 (sailing, doc2)  # 2/3: since 2 occurrences of sailing in doc2
# 0.333 (boats, doc2)    # 1/3: since 1 occurrence of boats in doc2

# Euclidean P(t|d): based on the L2-norm.
# P_L2(t|d) = P_L1(t|d) / sqrt( sum_{t' in d} square(P_L1(t'|d)) )
p_L2_t_d(T, D) :- p_L1_t_d(T, D) | EUCLIDEAN(D);

# Example:
# L2(doc2) = sqrt(5) = sqrt(2^2 + 1^2)
# 0.894 (sailing, doc2)  # 2 / sqrt(5)
# 0.447 (boats, doc2)    # 1 / sqrt(5)

Fig. 5. Maximum-Likelihood and Euclidean Probabilities
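The numbers in Figure 5 can be reproduced with a few lines of Python (our check, using the same doc2 example):

import math

# doc2 = [sailing, sailing, boats]: raw within-document counts.
counts = {"sailing": 2.0, "boats": 1.0}

# L1 (DISJOINT) normalisation: maximum-likelihood P_L1(t|d), cf. Eq. 21.
l1 = sum(counts.values())                        # = 3
p_l1 = {t: n / l1 for t, n in counts.items()}    # 0.667, 0.333

# L2 (EUCLIDEAN) normalisation applied to the L1 probabilities, cf. Eq. 20.
l2 = math.sqrt(sum(p * p for p in p_l1.values()))
p_l2 = {t: p / l2 for t, p in p_l1.items()}      # 0.894, 0.447
print(p_l1, p_l2)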
Theorem 1. The rule for "p_L1_t_d" is correct, i.e. the tuple probabilities in "p_L1_t_d" correspond to ML-probabilities of the form n/N, where n is the sum of the tuple probabilities in "term(t,d)", and N is the sum of the tuple probabilities in "term(·,d)", i.e. the sum over the document's tuples.

Proof. The L1-based probability $P_{L1}(t|d)$ is modelled in the rule for relation "p_L1_t_d". The rule body generates a probabilistic relation in which each tuple probability (from relation "term") is divided by the evidence probability, i.e. the sum of the tuple probabilities of the tuples that share the same evidence key. Here, "(D)" is the evidence key, i.e. the document id constitutes the evidence key. Therefore, the tuple probabilities generated by the rule body have the semantics

$$P_{term}((t,d)) / \sum_{t' \in d} P_{term}((t',d))$$

The aggregation assumption in the rule head, i.e. SUM in "p_L1_t_d SUM(T, D)", aggregates the tuple probabilities of non-distinct tuples.
For a non-probabilistic relation "term", "p_L1_t_d" contains the normalised within-document term frequency, i.e. $n_L(t,d) / \sum_{t' \in d} n_L(t',d)$, where $n_L(t,d)$ is the total number of occurrences of t in d (also denoted as $tf_d := n_L(t,d)$).
Theorem 2. The rule for "p_L2_t_d" is correct, i.e. the tuple probabilities in "p_L2_t_d" correspond to the probabilities as required for GIR.

Proof. Let $x_i$ be the vector component for the i-th dimension. The Euclidean normalisation is:

$$\frac{x_i}{\sqrt{\sum_j x_j^2}}$$

Let $P_{L1}(t) := x_t / \sum_{t'} x_{t'}$ be the L1-based probability; according to Theorem 1, we find this probability in relation "p_L1_t_d".
The assumption EUCLIDEAN in the subgoal of "p_L2_t_d" forms the sum of the squares of the probabilities that share the same evidence key. Then, for each tuple, the tuple probability is computed as follows:

$$P_{L2}(t|d) = \frac{P_{L1}(t|d)}{\sqrt{\sum_{t' \in d} (P_{L1}(t'|d))^2}}$$

Given the computation of $P_{L1}(t|d)$, the following equation holds:

$$\frac{P_{L1}(t|d)}{\sqrt{\sum_{t' \in d} (P_{L1}(t'|d))^2}} = \frac{x_t}{\sqrt{\sum_{t'} x_{t'}^2}} \quad (22)$$

Thus, the tuple probabilities in relation "p_L2_t_d" are correct in the sense that they are based on the Euclidean normalisation.
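Equation 22 can also be checked numerically: L2-normalising the L1-based probabilities yields the same values as L2-normalising the raw components directly (our sketch, with an arbitrary example vector):

import math

def l2_normalise(xs):
    norm = math.sqrt(sum(x * x for x in xs))
    return [x / norm for x in xs]

# Theorem 2 / Eq. 22: L2-normalising the L1 probabilities gives the same
# result as L2-normalising the raw components x_t directly.
x = [2.0, 1.0, 4.0]                  # raw vector components x_t
p_l1 = [v / sum(x) for v in x]       # L1-based probabilities P_L1(t|d)
assert all(abs(a - b) < 1e-12
           for a, b in zip(l2_normalise(p_l1), l2_normalise(x)))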
5.2 Modelling retrieval (GIR 2.4)
The PDatalog program in Figure 6 illustrates the modelling of the GIR-based approach, in which L2-norm-based probabilities are combined in the rule body.
The program shows a selection of rules illustrating the modelling of various retrieval models. The rules underline that the models share the same pattern: a query representation is joined (matched) with a document representation, and the evidence from this match is aggregated.
# Euclidean P(t|d) and P(t|q) as defined previously:
p_L2_t_d(T, D) :- p_L1_t_d(T, D) | EUCLIDEAN(D);
p_L2_t_q(T, Q) :- p_L1_t_q(T, Q) | EUCLIDEAN(Q);

# Geometric IR:
gir_retrieve SUM(D, Q) :- p_L2_t_q_idf(T, Q) & p_L2_t_d_idf(T, D);

# TF-IDF:
tf_idf_retrieve SUM(D, Q) :- tf_q(T, Q) & pidf(T) & tf_d(T, D);

# VSM:
vec_q(T, Q) :- tf_q(T, Q) & pidf(T);
vec_d(T, D) :- tf_d(T, D) & pidf(T);
vsm_retrieve SUM(D, Q) :- vec_q(T, Q) & vec_d(T, D);

# GVSM:
gvsm_retrieve SUM(D, Q) :- vec_q(T_q, Q) & g(T_q, T_d) & vec_d(T_d, D);

Fig. 6. GIR Retrieval in PDatalog
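The shared pattern of the retrieval rules in Figure 6 (join a query representation with a document representation on the term, then aggregate the evidence) can be sketched in Python as follows; the representations and weights below are hypothetical:

# Join query and document representations on the term, then aggregate (SUM).
def retrieve(query_rep, doc_rep):
    scores = {}
    for (t_q, q), w_q in query_rep.items():
        for (t_d, d), w_d in doc_rep.items():
            if t_q == t_d:
                scores[(d, q)] = scores.get((d, q), 0.0) + w_q * w_d
    return scores

# gir_retrieve, tf_idf_retrieve and vsm_retrieve all instantiate this pattern,
# differing only in the representations (L2-normalised probabilities, TF and
# pidf weights, or TF-IDF vectors, respectively).
q_rep = {("sailing", "q1"): 0.8, ("boats", "q1"): 0.6}            # e.g. p_L2_t_q
d_rep = {("sailing", "doc2"): 0.894, ("boats", "doc2"): 0.447}    # e.g. p_L2_t_d
print(retrieve(q_rep, d_rep))  # {('doc2', 'q1'): 0.9834}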
6 Conclusions
This paper reviewed the basic concepts of a geometric and probabilistic approach
to IR. In essence, vector and matrix components correspond to probabilities, and
multiplications in the underlying vector spaces generate probabilities.
This paper has three main contributions. Firstly, section 3 relates the terminology of "Geometric IR" to traditional concepts such as the generalised vector space model (GVSM) and the total probability theorem, whereby we also underline the relationship between the GVSM and the total probability.
Secondly, section 4 reviews Probabilistic Datalog and shows the probabilistic logical modelling of some standard models (TF-IDF, GVSM). Thirdly, section 5 adds the modelling of selected concepts of "Geometric IR" (Euclidean-based probability estimation, and the modelling of retrieval models to outline the dualities between IR models).
This paper advocates probabilistic logic (PDatalog and descriptive modelling in
general) as a potential platform to model IR. The overall finding of this paper is
that the expressiveness of PDatalog is sufficient to model traditional IR models
and concepts of “Geometric IR”.
Future work will include investigating the quality and scalability of the approach shown. Given that PDatalog scales for TF-IDF, VSM, PIN, and language modelling, the hypothesis is that Euclidean-based normalisations and other concepts of GIR can be transferred to a large scale. Having discussed Euclidean probability estimation in this work, we will add other components of quantum mechanics to Probabilistic Datalog, such as projection, state vector ensembles (mixed states) and compound systems expressed as tensor spaces, together with dynamics like state changes due to observation.
7 Acknowledgements
We acknowledge the support of the UK EPSRC Project EP/F015984/1 Renais-
sance.
References
1. I. Frommholz and N. Fuhr. Probabilistic, object-oriented logics for annotation-based retrieval in digital libraries. In Proceedings of the Joint Conference on Digital Libraries (JCDL'06), pages 55-64, 2006.
2. I. Frommholz, B. Larsen, B. Piwowarski, M. Lalmas, P. Ingwersen, and K. van Rijsbergen. Supporting Polyrepresentation in a Quantum-inspired Geometrical Retrieval Framework. In Proceedings of IIiX 2010, pages 115-124, New Brunswick, Aug. 2010. ACM.
3. N. Fuhr. Probabilistic Datalog - a logic for powerful retrieval methods. In Proceedings of the 18th ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'95), pages 282-290, 1995.
4. C. Meghini, F. Sebastiani, U. Straccia, and C. Thanos. A model of information retrieval based on a terminological logic. In ACM SIGIR Conference on Research and Development in Information Retrieval, pages 298-307, 1993.
5. M. Melucci. A basis for information retrieval in context. ACM Transactions on Information Systems (TOIS), 26(3), 2008.
6. H. Nottelmann. PIRE: An Extensible IR Engine Based on Probabilistic Datalog. In Proceedings of the European Conference on Information Retrieval (ECIR'05), pages 260-274, 2005.
7. B. Piwowarski, I. Frommholz, M. Lalmas, and K. van Rijsbergen. What can Quantum Theory Bring to Information Retrieval? In Proceedings of the 19th International Conference on Information and Knowledge Management, pages 59-68, Oct. 2010.
8. T. Roelleke. A frequency-based and a Poisson-based probability of being informative. In ACM SIGIR, pages 227-234, Toronto, Canada, 2003.
9. T. Roelleke and N. Fuhr. Information retrieval with probabilistic Datalog. In F. Crestani, M. Lalmas, and C. J. van Rijsbergen, editors, Uncertainty and Logics - Advanced Models for the Representation and Retrieval of Information. Kluwer Academic Publishers, 1998.
10. T. Roelleke, H. Wu, J. Wang, and H. Azzam. Modelling retrieval models in a probabilistic relational algebra with a new operator: The relational Bayes. VLDB Journal, 17(1):5-37, January 2008.
11. I. Schmitt. QQL: A DB&IR Query Language. The VLDB Journal, 17(1):39-56, 2008.
12. W. Shen, A. Doan, J. F. Naughton, and R. Ramakrishnan. Declarative information extraction using Datalog with embedded extraction predicates. In VLDB '07: International Conference on Very Large Data Bases, pages 1033-1044, 2007.
13. C. J. van Rijsbergen. The Geometry of Information Retrieval. Cambridge University Press, New York, NY, USA, 2004.