CEUR Workshop Proceedings, Vol-613 — https://ceur-ws.org/Vol-613/invited.pdf
    A Probabilistic Abduction Engine for Media Interpretation
                       based on Ontologies

      Oliver Gries, Ralf Möller, Anahita Nafissi, Maurice Rosenfeld, Kamil Sokolski, Michael Wessel

                                 Hamburg University of Technology, Germany


        Abstract. For multimedia interpretation, and in particular for the combined interpretation
        of information coming from different modalities, a semantically well-founded formalization is
        required in the context of an agent-based scenario. Low-level percepts, which are represented
        symbolically, define the observations of an agent, and interpretations of content are defined as
        explanations for the observations.
        We propose an abduction-based formalism that uses description logics for the ontology and Horn
        rules for defining the space of hypotheses for explanations (i.e., the space of possible interpre-
        tations of media content), and we use Markov logic to define the motivation for the agent to
        generate explanations on the one hand, and for ranking different explanations on the other.
        This work has been funded by the European Community with the project CASAM (Contract
        FP7-217061 CASAM) and by the German Science Foundation with the project PRESINT (DFG
        MO 801/1-1).




1      Introduction

For multimedia interpretation in the context of an agent-based scenario, and for the combined inter-
pretation of information coming from different modalities in particular, a semantically well-founded
formalization is required. Low-level percepts, which are represented symbolically, define the observa-
tions of an agent w.r.t. some content, and interpretations of the content are defined as explanations
for the observations.
    We propose an abduction-based formalism that uses description logics for the ontology and Horn
rules for defining the space of hypotheses for explanations (i.e., the space of possible interpretations
of media content). Additionally, we propose the use of Markov logic to define the motivation for the
agent to generate explanations on the one hand, and for ranking different explanations on the other.
    Based on a presentation of the most important preliminaries in Section 2, the abduction and inter-
pretation procedures are discussed in detail in Section 3. Optimization techniques for the probabilistic
abduction engine are pointed out. In Section 4, a complete example is given, showing the main approach
using intermediate steps. The final section summarizes this paper.


2      Preliminaries

In this section, the most important preliminaries are specified in order to make this document self-
contained.


2.1     Preliminaries on Description Logics

For specifying the ontology used to describe low-level analysis results as well as high-level interpretation
results, a less expressive description logic is applied to facilitate fast computations. We decided to
represent the domain knowledge with the DL ALHf−(D) (restricted attributive concept language
with role hierarchies, functional roles, and concrete domains). We briefly describe our nomenclature
in order to make this paper self-contained. For details, see [Baader et al., 2003].
    In logic-based approaches, atomic representation units have to be specified. The atomic represen-
tation units are fixed using a so-called signature. A DL signature is a tuple S = (CN, RN, IN),
where CN = {A1 , ..., An } is the set of concept names (denoting sets of domain objects) and RN =
{R1 , ..., Rm } is the set of role names (denoting relations between domain objects). The signature also
contains a component IN indicating a set of individuals (names for domain objects).
    In order to relate concept names and role names to each other (terminological knowledge) and
to talk about specific individuals (assertional knowledge), a knowledge base has to be specified. An
ALHf− knowledge base ΣS = (T, A), defined with respect to a signature S, is comprised of a
terminological component T (called Tbox) and an assertional component A (called Abox). In the following
we just write Σ if the signature is clear from context. A Tbox is a set of so-called axioms, which are
restricted to the following form in ALHf−:

             (I)   Subsumption                               A1 ⊑ A2, R1 ⊑ R2
             (II)  Disjointness                              A1 ⊑ ¬A2
             (III) Domain and range restrictions for roles   ∃R.⊤ ⊑ A, ⊤ ⊑ ∀R.A
             (IV)  Functional restriction on roles           ⊤ ⊑ (≤ 1 R)
             (V)   Local range restrictions for roles        A1 ⊑ ∀R.A2
             (VI)  Definitions with value restrictions       A ≡ A0 ⊓ ∀R1.A1 ⊓ ... ⊓ ∀Rn.An

With axioms of form (I), concept (role) names can be declared to be subconcepts (subroles) of each
other. Axioms of form (II) denote disjointness between concepts. Axioms of type (III) introduce domain
and range restrictions for roles. Axioms of the form (IV) introduce so-called functional restrictions on
roles, and axioms of type (V) specify local range restrictions (using value restrictions, see below). With
axioms of kind (VI) so-called definitions (with necessary and sufficient conditions) can be specified
for concept names found on the left-hand side of the ≡ sign. In the axioms, so-called concepts are
used. Concepts are concept names or expressions of the form ⊤ (anything), ⊥ (nothing), ¬A (atomic
negation), (≤ 1 R) (role functionality), ∃R.⊤ (limited existential restriction), ∀R.A (value restriction),
and (C1 ⊓ ... ⊓ Cn) (concept conjunction).
    Knowledge about individuals is represented in the Abox part of Σ. An Abox A is a set of expressions
of the form A(a) or R(a, b) (concept assertions and role assertions, respectively) where A stands for
a concept name, R stands for a role name, and a, b stand for individuals. Aboxes can also contain
equality (a = b) and inequality assertions (a ≠ b). We say that the unique name assumption (UNA) is
applied if a ≠ b is added for all pairs of distinct individuals a and b.
    In order to understand the notion of logical entailment, we introduce the semantics of ALHf−. In
DLs such as ALHf−, the semantics is defined with interpretations I = (Δ^I, ·^I), where Δ^I is a non-
empty set of domain objects (called the domain of I) and ·^I is an interpretation function which maps
individuals to objects of the domain (a^I ∈ Δ^I), atomic concepts to subsets of the domain (A^I ⊆ Δ^I),
and roles to subsets of the cartesian product of the domain (R^I ⊆ Δ^I × Δ^I). The interpretation of
arbitrary ALHf− concepts is then defined by extending ·^I to all ALHf− concept constructors as
follows:

            ⊤^I               = Δ^I
            ⊥^I               = ∅
            (¬A)^I            = Δ^I \ A^I
            (≤ 1 R)^I         = {u ∈ Δ^I | (∀v1, v2) [((u, v1) ∈ R^I ∧ (u, v2) ∈ R^I) → v1 = v2]}
            (∃R.⊤)^I          = {u ∈ Δ^I | (∃v) [(u, v) ∈ R^I]}
            (∀R.C)^I          = {u ∈ Δ^I | (∀v) [(u, v) ∈ R^I → v ∈ C^I]}
            (C1 ⊓ ... ⊓ Cn)^I = C1^I ∩ ... ∩ Cn^I


    In the following, the satisfiability conditions for axioms and assertions of an ALHf− knowledge
base Σ in an interpretation I are defined. A concept inclusion C ⊑ D (concept definition C ≡ D)
is satisfied in I if C^I ⊆ D^I (resp. C^I = D^I), and a role inclusion R ⊑ S (role definition R ≡ S) if
R^I ⊆ S^I (resp. R^I = S^I). Similarly, assertions C(a) and R(a, b) are satisfied in I if a^I ∈ C^I resp.
(a^I, b^I) ∈ R^I. If an interpretation I satisfies all axioms of T resp. A, it is called a model of T resp. A.
If it satisfies both T and A, it is called a model of Σ. Finally, if there is a model of Σ (i.e., a model for
T and A), then Σ is called satisfiable.
     We are now able to define the entailment relation |=. A DL knowledge base Σ logically entails an
assertion α (symbolically Σ |= α) if α is satisfied in all models of Σ. For an Abox A, we say Σ |= A if
Σ |= α for all α ∈ A.
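The constructor semantics above can be evaluated directly over a small finite interpretation. The following sketch mirrors the definitions; the two-element domain and all names are illustrative:

```python
# Sketch: evaluating ALHf- concept constructors in a finite interpretation
# I = (domain, concept extensions, role extensions). All data is illustrative.

def ext_not(domain, a_ext):                      # (¬A)^I = Δ^I \ A^I
    return domain - a_ext

def ext_exists(domain, r_ext):                   # (∃R.⊤)^I
    return {u for u in domain if any((u, v) in r_ext for v in domain)}

def ext_forall(domain, r_ext, c_ext):            # (∀R.C)^I
    return {u for u in domain
            if all(v in c_ext for v in domain if (u, v) in r_ext)}

def ext_at_most_one(domain, r_ext):              # (≤ 1 R)^I
    return {u for u in domain
            if len({v for v in domain if (u, v) in r_ext}) <= 1}

# Toy interpretation
domain = {"ind1", "ind2"}
Person = {"ind2"}
hasParticipant = {("ind1", "ind2")}

print(ext_not(domain, Person))                   # {'ind1'}
print(ext_exists(domain, hasParticipant))        # {'ind1'}
print(ext_forall(domain, hasParticipant, Person) == domain)   # True (ind2 vacuously)
print(ext_at_most_one(domain, hasParticipant) == domain)      # True
```

Since ind2 has no role successors, it satisfies the value restriction vacuously, which matches the set-theoretic definition above.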


2.2   Substitutions, Queries, and Rules

Sequences, Variable Substitutions and Transformations A variable is a name of the form
?String, where String is a string of characters from {a...z, A...Z}. In the following definitions, we denote
places where variables can appear with uppercase letters.
    Let V be a set of variables, and let X, Y1, ..., Yn be sequences ⟨...⟩ of variables from V. z denotes
a sequence of individuals. We consider sequences of length 1 or 2 only, if not indicated otherwise, and
assume that (⟨X⟩) is to be read as (X) and (⟨X, Y⟩) is to be read as (X, Y) etc. Furthermore, we
assume that sequences are automatically flattened. A function as_set turns a sequence into a set in
the obvious way.
    A variable substitution σ = [X ← i, Y ← j, . . .] is a mapping from variables to individuals. The
application of a variable substitution σ to a sequence of variables hXi or hX, Y i is defined as hσ(X)i
or hσ(X), σ(Y )i, respectively, with σ(X) = i and σ(Y ) = j. In this case, a sequence of individuals is
defined. If a substitution is applied to a variable X for which there exists no mapping X ← k in σ,
then the result is undefined. A variable substitution for which all required mappings are defined is called
admissible (w.r.t. the context).

Grounded Conjunctive Queries Let X, Y1 , . . . , Yn be sequences of variables, and let Q1 , . . . , Qn
denote concept or role names.
   A query is defined by the following syntax.

                                          {(X) | Q1 (Y1 ), . . . , Qn (Yn )}

The sequence X may be of arbitrary length, but all variables mentioned in X must also appear in at
least one of the Y1, ..., Yn: as_set(X) ⊆ as_set(Y1) ∪ ... ∪ as_set(Yn).
    Informally speaking, Q1(Y1), ..., Qn(Yn) defines a conjunction of so-called query atoms Qi(Yi). The
list of variables to the left of the sign | is called the head, and the atoms to the right of it are called the
query body. The variables in the head are called distinguished variables. They define the query result.
The variables that appear only in the body are called non-distinguished variables and are existentially
quantified.
    Answering a query with respect to a knowledge base Σ means finding admissible variable substi-
tutions σ such that Σ |= {σ(Q1 (Y1 )), . . . , σ(Qn (Yn ))}. We say that a variable substitution σ = [X ←
i, Y ← j, . . .] introduces bindings i, j, . . . for variables X, Y, . . .. Given all possible variable substitutions
σ, the result of a query is defined as {(σ(X))}. Note that the variable substitution σ is applied before
checking whether Σ |= {Q1 (σ(Y1 )), . . . , Qn (σ(Yn ))}, i.e., the query is grounded first.
    For a query {(?y) | Person(?x), hasParticipant(?y, ?x)} and the Abox Γ1 = {HighJump(ind1),
Person(ind2), hasParticipant(ind1, ind2)}, the substitution [?x ← ind2, ?y ← ind1] allows for
answering the query, and defines bindings for ?y and ?x.
    A boolean query is a query with X being of length zero. If for a boolean query there exists a variable
substitution σ such that Σ |= {σ(Q1 (Y1 )), . . . , σ(Qn (Yn ))} holds, we say that the query is answered
with true, otherwise the answer is false.
    Later on, we will have to convert query atoms into Abox assertions. This is done with the function
transform. The function transform applied to a set of query atoms {γ1, ..., γn} is defined as
{transform(γ1, σ), ..., transform(γn, σ)}, where
transform(P(X), σ) := P(σ(X)).
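Grounded conjunctive query answering can be sketched by enumerating variable substitutions, with entailment approximated by simple Abox lookup (no Tbox reasoning); this simplification and the data are for illustration only:

```python
from itertools import product

# Sketch: answering a grounded conjunctive query over an Abox by enumerating
# variable substitutions. Entailment Σ |= σ(Qi(Yi)) is approximated by plain
# Abox membership (no Tbox reasoning), purely for illustration.

def answer(query_head, query_body, abox, individuals):
    """query_body: list of (predicate, args); args are variables ('?x') or individuals."""
    variables = sorted({a for _, args in query_body for a in args if a.startswith("?")})
    results = set()
    for binding in product(individuals, repeat=len(variables)):
        sigma = dict(zip(variables, binding))            # candidate substitution σ
        ground = lambda a: sigma.get(a, a)               # apply σ (identity on individuals)
        if all((p, tuple(ground(a) for a in args)) in abox for p, args in query_body):
            results.add(tuple(sigma[v] for v in query_head))
    return results

# Abox Γ1 from the running example
abox = {("HighJump", ("ind1",)),
        ("Person", ("ind2",)),
        ("hasParticipant", ("ind1", "ind2"))}

# {(?y) | Person(?x), hasParticipant(?y, ?x)}
print(answer(["?y"],
             [("Person", ("?x",)), ("hasParticipant", ("?y", "?x"))],
             abox, ["ind1", "ind2"]))   # {('ind1',)}
```

The substitution [?x ← ind2, ?y ← ind1] is the only one that grounds both body atoms to assertions in Γ1, so ind1 is the only binding returned for the distinguished variable ?y.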

Rules A rule r has the following form: P(X) ← Q1(Y1), ..., Qn(Yn), where P, Q1, ..., Qn denote
concept or role names, with the additional restriction (safety condition) that as_set(X) ⊆ as_set(Y1) ∪
... ∪ as_set(Yn).
Rules are used to derive new Abox assertions, and we say that a rule r is applied to an Abox A. The
function call apply(Σ, P(X) ← Q1(Y1), ..., Qn(Yn), A) returns a set of Abox assertions {σ(P(X))} if
there exists an admissible variable substitution σ such that the answer to the query

                                      {() | Q1(σ(Y1)), ..., Qn(σ(Yn))}

is true with respect to Σ ∪ A.¹ If no such σ can be found, the result of the call to apply(Σ, r, A) is the
empty set. The application of a set of rules R = {r1, ..., rn} to an Abox is defined as follows.

                                 apply(Σ, R, A) = ⋃_{r∈R} apply(Σ, r, A)

The result of forward_chain(Σ, R, A) is defined to be ∅ if apply(Σ, R, A) ∪ A = A holds. Otherwise
the result of forward_chain is determined by the recursive call
apply(Σ, R, A) ∪ forward_chain(Σ, R, A ∪ apply(Σ, R, A)).
    For some set of rules R we extend the entailment relation by specifying that (T, A) |=R A′ iff
(T, A ∪ forward_chain((T, ∅), R, A)) |= A′.
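The functions apply and forward_chain can be sketched under the same Abox-lookup simplification (no Tbox reasoning); the rule and the concept name Athlete are hypothetical:

```python
from itertools import product

# Sketch of apply/forward_chain: rules fire until a fixpoint is reached.
# Rule bodies are answered by plain Abox membership, for illustration only.

def apply_rules(rules, abox, individuals):
    derived = set()
    for head, body in rules:
        variables = sorted({a for _, args in body for a in args if a.startswith("?")})
        for binding in product(individuals, repeat=len(variables)):
            sigma = dict(zip(variables, binding))
            if all((p, tuple(sigma.get(a, a) for a in args)) in abox for p, args in body):
                hp, hargs = head
                derived.add((hp, tuple(sigma.get(a, a) for a in hargs)))
    return derived

def forward_chain(rules, abox, individuals):
    new = apply_rules(rules, abox, individuals)
    if new | abox == abox:                       # fixpoint: nothing new derivable
        return set()
    return new | forward_chain(rules, abox | new, individuals)

# Hypothetical rule: Athlete(?x) <- Person(?x), hasParticipant(?y, ?x)
rules = [(("Athlete", ("?x",)),
          [("Person", ("?x",)), ("hasParticipant", ("?y", "?x"))])]
abox = {("Person", ("ind2",)), ("hasParticipant", ("ind1", "ind2"))}
print(forward_chain(rules, abox, ["ind1", "ind2"]))  # {('Athlete', ('ind2',))}
```

As in the definition above, the function returns only the newly derived assertions, not the whole extended Abox.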

2.3    Probabilistic Knowledge Representation
The basic notion of probabilistic knowledge representation formalisms is the so-called random experi-
ment. A random variable X is a function assigning a value to the result of a random experiment. The
random experiment itself is not represented, so random variables are functions without arguments,
which return different values at different points of time. Possible values of a random variable comprise
the so-called domain of the random variable. In the sequel, we will use boolean random variables, whose
values can be either 1 or 0 (true or f alse, respectively).
    Let ~X = {X1, ..., Xn} be the ordered set of all random variables of a random experiment. An event
(denoted ~X = ~x) is an assignment X1 = x1, ..., Xn = xn to all random variables. In case n = 1 we call
the event simple, otherwise the event is called complex. A certain vector of values ~x is referred to as
a possible world. A possible world can be associated with a probability value, or probability for short.
Hence, the notion of a possible world can be used as a synonym for an event, and depending on the
context we use the former or the latter name. In case of an event with a boolean random variable X,
we write x as an abbreviation for X = true and ¬x as an abbreviation for X = false.
    Mappings of events to probabilities (or assignments of probabilities to events) are specified with
so-called probability assertions of the following syntax: P(~X = ~x) = p, where ~X is a vector of random
variables, and p is a real value between 0 and 1 (it is assumed that the reader is familiar with Kol-
mogorov's axioms of probability). In the special case of a simple event (single random variable, n = 1)
we write P(X = x) = p. The probability value p of an event is denoted as P(~X = ~x) (or P(X = x)
in the simple case). In its raw form, a set of probability assertions is called a probabilistic knowledge
base (with signature ~X).
    A mapping from the domain of a random variable X to probability values [0, 1] is called a dis-
tribution. For distributions we use the notation P(X). Distributions can be defined for (ordered)
sets of random variables as well. In this case we use P(X1 , . . . , Xn ) as a denotation for a map-
ping to the n-dimensional cross product of [0, 1]. For specifying a distribution, probability asser-
tions for all domain values must be specified, and the values p must sum up to 1. In case all ran-
dom variables of a random experiment are involved, we speak of a (full) joint probability distribu-
tion (JPD), otherwise the expression is said to denote a marginal distribution (projection of the n-
dimensional space of probability values to a lower-dimensional space with m dimensions). The expres-
sion P(X1, ..., Xm, Xm+1 = xm+1, ..., Xl = xl) denotes an m-dimensional distribution with known
values xm+1, ..., xl. In slight misuse of notation, we sometimes write ~e for these known values (e stands
for evidence). The fragment ~e need not necessarily be written at the end of the parameter list of P.

¹ We slightly misuse notation in assuming (T, A) ∪ ∆ = (T, A ∪ ∆). If Σ ∪ A is inconsistent, the result is
  well-defined but useless. It will not be used afterwards.
    A conditional probability for a set of random variables X1, ..., Xm is denoted with P(X1 = x1, ...,
Xm = xm | ~e) or, in distribution form, we write P(X1, ..., Xm | ~e) (conditional probability distribution).
This distribution can also be written as P(~X, ~e)/P(~e).
    For a probabilistic knowledge base, formal inference problems are defined. We restrict our attention
to the two most important probabilistic inference problems. A conditional probability query is the
computation of the joint distribution of a set of m random variables conditioned on ~e and is denoted
with

                                        P_X(x1 ∧ ... ∧ xm | ~e) = ?

where vars(x1, ..., xm) ∩ vars(~e) = ∅ and vars(x1, ..., xm) ∪ vars(~e) ⊆ X, with vars specified in the
obvious way. Note that xi indicates Xi = xi. The distribution form of the above query is:

                                        P_X(X1, ..., Xm | ~e) = ?

If the set of random variables X is known from the context, the subscript X is often omitted.
    The Maximum A Posteriori (MAP) inference returns the most-likely state of query atoms given the
evidence. Based on the MAP inference, the “most probable world” given the evidence is determined
as a set of events. The MAP inference problem given a distribution for a set of random variables X is
formalized as follows:
                                  MAP_X(~e) := ~e ∪ argmax_~x P(~x | ~e)                           (1)
where vars(~x) ∩ vars(~e) = ∅ and vars(~x) ∪ vars(~e) = X.
    For both inference problems, conditional probability queries as well as the MAP problem, different
kinds of algorithms exist, which possibly exploit additional assertions (such as, e.g., conditional inde-
pendence assumptions in so-called Bayesian networks, or factored probability distribution specifications
as in so-called Markov networks). In the next subsection, we focus on the latter formalism.
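Both inference problems can be made concrete by brute-force enumeration over an explicitly specified JPD; the two boolean variables and the probability values below are made up for illustration:

```python
# Sketch: conditional-probability and MAP queries over an explicitly given
# joint probability distribution (JPD) of boolean random variables, answered
# by brute-force enumeration. The probability values are illustrative.

VARS = ("X", "Y")
jpd = {(True, True): 0.3, (True, False): 0.2,    # world (x, y) -> probability
       (False, True): 0.1, (False, False): 0.4}

def matches(world, cond):
    """Check whether a world agrees with an assignment {var: value}."""
    return all(world[VARS.index(v)] == b for v, b in cond.items())

def conditional(query, evidence):
    """P(query | evidence) = P(query, evidence) / P(evidence)."""
    num = sum(p for w, p in jpd.items() if matches(w, {**query, **evidence}))
    den = sum(p for w, p in jpd.items() if matches(w, evidence))
    return num / den

def map_world(evidence):
    """Most probable world consistent with the evidence (MAP)."""
    return max((w for w in jpd if matches(w, evidence)), key=lambda w: jpd[w])

print(conditional({"X": True}, {"Y": True}))   # 0.3 / 0.4 ≈ 0.75
print(map_world({"Y": False}))                 # (False, False)
```

Enumeration is exponential in the number of variables; Bayesian and Markov networks exist precisely to avoid storing and scanning the full JPD like this.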


2.4   Markov Logic

The formalism of Markov logic [Domingos and Richardson, 2007] combines the expressivity of
first-order logic with the formalism of Markov networks [Pearl, 1988]. The Markov logic formalism
uses first-order logic to define "templates" for constructing Markov networks. The basic notion for
this is called a Markov logic network.
    A Markov logic network MLN = (F_MLN, W_MLN) consists of an ordered multiset of first-order
formulas F_MLN = {F1, ..., Fm} and an ordered multiset of real number weights W_MLN = {w1, ..., wm}.
The association of a formula with its weight is by position in the ordered multisets. For a formula F ∈ F_MLN
with associated weight w we also write wF (weighted formula). Thus, a Markov logic network can
also be defined as a set of weighted formulas. Both views can be used interchangeably. As a notational
convenience, for ordered sets we nevertheless sometimes write ~X, ~Y instead of ~X ∪ ~Y.
    In contrast to standard first-order logics such as predicate logic, relational structures not satisfying
a formula Fi are not ruled out as models. If a relational structure does not satisfy a formula associated
with a large weight, it is just considered to be quite unlikely the "right" one.
    Let C = {c1 , ..., cm } be the set of all constants mentioned in FM LN . A grounding of a formula
Fi ∈ FM LN is a substitution of all variables in the matrix of Fi with constants from C. From all
groundings, the (finite) set of grounded atomic formulas (also referred to as ground atoms) can be
obtained. Grounding corresponds to a domain closure assumption. The motivation is to get rid of the
quantifiers and reduce inference problems to the propositional case.
    Since a ground atom can either be true or false in an interpretation (or world), it can be considered
as a boolean random variable X. Consequently, for each MLN with associated random variables ~X,
there is a set of possible worlds ~x. In this view, sets of ground atoms are sometimes used to denote
worlds. In this context, negated ground atoms correspond to false and non-negated ones to true. We
denote worlds using a sequence of (possibly negated) atoms.
    When a world ~x violates a weighted formula (does not satisfy the formula) the idea is to ensure
that this world is less probable rather than impossible as in predicate logic. Note that weights do not
directly correspond to probabilities (see [Domingos and Richardson, 2007] for details).
    For each possible world of a Markov logic network MLN = (F_MLN, W_MLN) there is a proba-
bility for its occurrence. Probabilistic knowledge is required to obtain this value. As usual, proba-
bilistic knowledge is specified using a probability distribution. In the formalism of Markov networks,
the full joint probability distribution of a Markov logic network MLN is specified in symbolic form
as P_MLN(~X) = (P(~X = ~x1), ..., P(~X = ~xn)), for every possible ~xi ∈ {true, false}^n, n = |~X|,
and P(~X = ~x) := log_lin_MLN(~x) (for a motivation of the log-linear form, see, e.g., [Domingos and
Richardson, 2007]), with log_lin being defined as

                                                    1        |F_MLN|
                                 log_lin_MLN(~x) = --- exp (   Σ      wi ni(~x) )
                                                    Z         i=1

According to this definition, the probability of a possible world ~x is determined by the exponential
of the weighted sum of the numbers of true groundings ni(~x) of the formulas Fi ∈ F_MLN in ~x, with
corresponding weights wi ∈ W_MLN, and finally normalized with

                                                 |F_MLN|
                                      Z =  Σ exp ( Σ     wi ni(~x) ),                            (2)
                                          ~x∈~X    i=1

the sum of the unnormalized weights of all possible worlds. Thus, rather than specifying the full joint
distribution directly in symbolic form as we have discussed before, in the Markov logic formalism the
probabilistic knowledge is specified implicitly by the weights associated with formulas. Determining
these formulas and their weights in a practical context is anything but obvious, such that machine
learning techniques are usually employed for knowledge acquisition.
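The log-linear distribution can be computed explicitly for a tiny ground Markov logic network; the single weighted formula (a variant of the classic smokers example) and its weight are illustrative:

```python
import math
from itertools import product

# Sketch: the log-linear distribution of a tiny ground Markov logic network.
# Each formula is given as a function counting its true groundings n_i in a
# world; atoms, formula, and weight are made up for illustration.

atoms = ("Smokes_A", "Cancer_A")
weighted_formulas = [
    # Smokes(A) -> Cancer(A), weight 1.5
    (1.5, lambda w: 1 if (not w["Smokes_A"]) or w["Cancer_A"] else 0),
]

def score(world):                         # sum_i w_i * n_i(world)
    return sum(w * n(world) for w, n in weighted_formulas)

worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]
Z = sum(math.exp(score(w)) for w in worlds)             # normalization constant (2)
prob = {tuple(w.values()): math.exp(score(w)) / Z for w in worlds}

# The world violating the implication (Smokes_A, ¬Cancer_A) is less probable
# than the others, but not impossible:
print(prob[(True, False)] < prob[(True, True)])   # True
print(abs(sum(prob.values()) - 1.0) < 1e-9)       # True
```

This shows the key property stated above: a violated weighted formula lowers a world's probability instead of ruling the world out.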
    A conditional probability query for a Markov logic network MLN is the computation of the joint
distribution of a set of m events involving random variables conditioned on ~e and is denoted with

                                         P_MLN(x1 ∧ ... ∧ xm | ~e)

The semantics of this query is given as:

                  P_{rand_vars(MLN)}(x1 ∧ ... ∧ xm | ~e)  w.r.t.  P_MLN(rand_vars(MLN))

where vars(x1, ..., xm) ∩ vars(~e) = ∅ and vars(x1, ..., xm) ⊆ rand_vars(MLN), and the function
rand_vars is defined as follows: rand_vars((F, W)) := {P(C) | P(C) is mentioned in some
grounded formula F ∈ F}. Grounding is accomplished w.r.t. all constants that appear in F. An
algorithm for answering queries of the above form is investigated in [Gries and Möller, 2010].
    In the case of Markov logic, the definition of the MAP problem given in (1) can be rewritten as
follows. The conditional probability term P(~x | ~e) is replaced with the Markovian formula:

                                                            1         (                  )
                             MAP_MLN(~e) := ~e ∪ argmax_~x ---- exp (  Σ wi ni(~x, ~e)  )                (3)
                                                            Z_e        i

Thus, for describing the most-probable world, MAP returns a set of events, one for each random
variable used in the Markov network derived from MLN. In the above equation, ~x denotes the hidden
variables, and Z_e denotes the normalization constant, which indicates that the normalization process
is performed over possible worlds consistent with the evidence ~e. In the next equation, Z_e is removed
since it is constant and does not affect the argmax operation. Similarly, in order to optimize the
MAP computation, the exp function is left out since it is monotonic and only its argument
has to be maximized:
                          MAP_MLN(~e) := ~e ∪ argmax_~x Σ wi ni(~x, ~e)                     (4)
                                                        i

The above equation shows that the MAP problem in the Markov logic formalism is reduced to a new
problem: maximizing the sum of the weights of satisfied clauses.
    Since MAP determination in Markov networks is an NP-hard problem [Domingos and Richard-
son, 2007], it is tackled by exact and approximate solvers. The most commonly used approximate
solver is the MaxWalkSAT algorithm, a weighted variant of the WalkSAT local-search satisfiability
solver. The MaxWalkSAT algorithm attempts to satisfy clauses with positive weights and to keep
clauses with negative weights unsatisfied.
    It has to be mentioned that there might be several worlds with the same maximal probability. In
this case, only one of them is chosen non-deterministically.
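The weight-sum maximization in (4) and the MaxWalkSAT idea can be sketched as a small local search over a toy set of weighted clauses; this is a simplified stand-in for the real algorithm, and the clauses, weights, and parameters are made up:

```python
import random

# Sketch of a MaxWalkSAT-style local search for the MAP problem (4): flip
# atoms to increase the sum of weights of satisfied weighted clauses, with
# occasional random flips to escape local optima. Simplified for illustration.

def weight_sum(world, weighted_clauses):
    return sum(w for w, clause in weighted_clauses if clause(world))

def max_walk_sat(atoms, weighted_clauses, evidence, steps=200, p_random=0.2, seed=0):
    rng = random.Random(seed)
    hidden = [a for a in atoms if a not in evidence]      # ~x: the hidden variables
    world = {**evidence, **{a: rng.random() < 0.5 for a in hidden}}
    best = dict(world)
    for _ in range(steps):
        if rng.random() < p_random:                       # random walk move
            a = rng.choice(hidden)
        else:                                             # greedy move: best flip
            a = max(hidden, key=lambda a: weight_sum(
                {**world, a: not world[a]}, weighted_clauses))
        world[a] = not world[a]
        if weight_sum(world, weighted_clauses) > weight_sum(best, weighted_clauses):
            best = dict(world)
    return best

atoms = ["x1", "x2"]
clauses = [(2.0, lambda w: w["x1"] or w["x2"]),           # x1 ∨ x2
           (1.0, lambda w: not w["x1"])]                  # ¬x1
best = max_walk_sat(atoms, clauses, evidence={})
print(best)   # {'x1': False, 'x2': True}, weight sum 3.0
```

Setting x1 to false and x2 to true satisfies both clauses and maximizes the weight sum; since the search is randomized, the real algorithm gives no optimality guarantee, which is why it is called approximate above.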


2.5     Combining Markov Logic and Description Logics

Since ALHf− is a fragment of first-order logic, its extension to the Markovian style of formalisms is
specified in a similar way as for predicate logic in the previous section. The formulas in Markov logic
correspond to Tbox axioms and Abox assertions. Weights in Markov description logics are associated
with axioms and assertions.
     Groundings of Tbox axioms are defined analogously to the previous case.² Abox assertions do not
contain variables and are already grounded. Note that since an ALHf − Abox represents a relational
structure of domain objects, it can be directly seen as a possible world itself if assertions not contained
in the Abox are assumed to be false.
     For appropriately representing domain knowledge in CASAM, weights are possibly used only for
a subset of the axioms of the domain ontology. The remaining axioms can be assumed to be strict,
i.e., assumed to be true in any case. A consequence of specifying strict axioms is that lots of possible
worlds ~x can be ruled out (i.e., will have probability 0 by definition).
     A Markov DL knowledge base ΣM is a tuple (T , A), where T is comprised of a set Ts of strict
axioms and a set Tw of weighted axioms and A is comprised of a set As of strict assertions and a set
Aw of weighted assertions. Referring to axioms, a proposal for CASAM is to consider strictness for the
domain ontology patterns (I)–(IV):

                        (I)   subsumption                    A1 ⊑ A2, R1 ⊑ R2
                        (II)  disjointness                   A1 ⊑ ¬A2
                        (III) domain and range restrictions  ∃R.⊤ ⊑ A, ⊤ ⊑ ∀R.A
                        (IV)  functional roles               ⊤ ⊑ (≤ 1 R)

    The main justification for treating these axioms as strict is that subsumption axioms, disjointness
axioms, domain and range restrictions, as well as functional role axioms (in combination with the UNA),
are intended to be true in any case, such that there is no need to assign large weights to them.
    In [Gries and Möller, 2010] we show that Gibbs sampling with deterministic dependencies spec-
ified in an appropriate fragment remains correct, i.e., probability estimates approximate the correct
probabilities. We have investigated a Gibbs sampling method incorporating deterministic dependencies
and conclude that this incorporation can speed up Gibbs sampling significantly. For details see [Gries
and Möller, 2010]. The advantage of this probabilistic approach is that the initial ontology engineering
is done as usual, with standard reasoning support, and with the possibility to add weighted axioms
and weighted assertions on top of the strict foundation. Since many possible worlds need not be
considered because their probability is known to be 0, probabilistic reasoning will be significantly
faster.
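How strict axioms prune the space of possible worlds before normalization can be sketched as follows; the atoms, the disjointness constraint, and the weight are illustrative, not the actual CASAM ontology:

```python
import math
from itertools import product

# Sketch: in a Markov DL knowledge base, strict axioms rule out worlds
# entirely (probability 0), and the log-linear distribution is normalized
# only over the remaining worlds. All names and weights are made up.

atoms = ("Car_a", "Boat_a")
strict = [lambda w: not (w["Car_a"] and w["Boat_a"])]    # disjointness: Car ⊑ ¬Boat
weighted = [(1.2, lambda w: 1 if w["Car_a"] else 0)]     # one weighted formula

worlds = [dict(zip(atoms, v)) for v in product([True, False], repeat=len(atoms))]
admissible = [w for w in worlds if all(c(w) for c in strict)]   # prune violations

def score(world):
    return sum(wt * f(world) for wt, f in weighted)

Z = sum(math.exp(score(w)) for w in admissible)          # normalize over survivors
prob = {tuple(w.values()): math.exp(score(w)) / Z for w in admissible}

print(len(admissible))         # 3 of 4 worlds survive the disjointness axiom
print((True, True) in prob)    # False: the ruled-out world has probability 0
```

With more atoms, the savings grow quickly: every strict axiom can eliminate an exponential share of the worlds the sampler would otherwise have to visit, which is the speed-up argued for above.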
² For this purpose, the variable-free syntax of axioms can first be translated to predicate logic.
3     Probabilistic Interpretation Engine

In this section, the abduction procedure is defined by the abduction algorithm CAE. Additionally, a
media interpretation agent is described by defining a probabilistic interpretation algorithm Interpret.


3.1    Computing Explanations

In general, abduction is formalized as Σ ∪ ∆ |=R Γ where background knowledge (Σ), rules (R), and
observations (Γ ) are given, and explanations (∆) are to be computed. In terms of DLs, ∆ and Γ are
Aboxes and Σ is a pair of Tbox and Abox.
    Abox abduction is implemented as a non-standard retrieval inference service in DLs. In contrast
to standard retrieval inference services, where answers are found by exploiting the ontology, Abox
abduction has the task of acquiring what should be added to the knowledge base in order to answer
a query. Therefore, the result of Abox abduction is a set of hypothesized Abox assertions. To achieve
this, the space of abducibles has to be defined beforehand, and we do this in terms of rules.
    We assume that a set of rules R as defined above (see Section 2.2) is specified, and define a
non-deterministic function compute_explanation as follows.

 – compute_explanation(Σ, R, A, P(z)) = transform(Φ, σ) if there exists a rule

                                      r = P(X) ← Q1(Y1), ..., Qn(Yn) ∈ R

   that is applied to an Abox A such that a minimal set of query atoms Φ and an admissible variable
   substitution σ with σ(X) = z can be found, and the query Q := {() | expand(P(z), r, R, σ) \ Φ} is
   answered with true.
 – If no such rule r exists in R, it holds that compute_explanation(Σ, R, A, P(z)) = ∅.

The goal of the function compute_explanation is to determine what must be added (Φ) such that
the entailment Σ ∪ A ∪ Φ |=R P(z) holds. Hence, for compute_explanation, abductive reasoning is
used. The set of query atoms Φ defines what must be hypothesized in order to answer the query
Q with true, such that Φ ⊆ expand(P(X), r, R, σ) holds. The definition of compute_explanation is
non-deterministic due to several possible choices for Φ.
    The function application expand(P(z), P(X) ← Q1(Y1), ..., Qn(Yn), R) is also defined in a non-
deterministic way as

                       expand′(Q1(σ′(Y1)), R, σ) ∪ ... ∪ expand′(Qn(σ′(Yn)), R, σ)

with expand′(P(z), R, σ) being expand(P(σ′(z)), r, R, σ′) if there exists a rule r = P(X) ← ... ∈ R,
and ⟨P(z)⟩ otherwise. The variable substitution σ′ is an extension of σ such that:

                                         σ′ = [X1 ← z1, X2 ← z2, ...]                              (5)

Equation (5) extends the mapping to free variables where it is not already defined, i.e., the free
variables in the body of each rule are mapped to individuals with unique IDs.

We say the set of rules is backward-chained, and since there might be multiple rules in R, backward-
chaining is non-deterministic as well. Thus, multiple explanations are generated.3
3
    In the expansion process, variables have to be renamed. We neglect these issues here.
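The abductive step can be illustrated with a small Python sketch. The representation (atoms as (predicate, argument-tuple) pairs, variables marked with a leading `?`) and all function names are our own; the sketch only approximates compute explanation for a single rule application and simplifies entailment to Abox membership:

```python
import itertools

# Fresh-individual IDs, mirroring the Ind42, Ind43, ... naming of the examples.
_fresh = itertools.count(42)

def substitute(atom, sigma):
    """Apply a variable substitution sigma to an atom (pred, args)."""
    pred, args = atom
    return (pred, tuple(sigma.get(a, a) for a in args))

def compute_explanation(rules, abox, observation):
    """One abductive step: unify the observation P(z) with a rule head P(X),
    extend sigma with fresh individuals for the remaining free body variables,
    and return the body atoms Phi that are not yet in the Abox.
    Returns the empty set if no rule head matches the observation."""
    pred, args = observation
    for head, body in rules:
        h_pred, h_args = head
        if h_pred != pred or len(h_args) != len(args):
            continue
        sigma = dict(zip(h_args, args))            # sigma(X) = z
        for _, b_args in body:                     # fresh IDs for free variables
            for v in b_args:
                if v.startswith('?') and v not in sigma:
                    sigma[v] = 'Ind%d' % next(_fresh)
        grounded = {substitute(a, sigma) for a in body}
        return grounded - abox                     # Phi: what must be hypothesized
    return set()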
3.2     The Abduction Procedure

In the following, we devise an abstract computational engine for “explaining” Abox assertions in terms
of a given set of rules. Explanation of Abox assertions w.r.t. a set of rules is meant in the sense that
using the rules some high-level explanations are constructed such that the Abox assertions are entailed.
The explanation of an Abox is again an Abox; the output Abox represents the results of the
content interpretation process. The presentation is slightly extended compared to the one in [Castano
et al., 2008]. Let the agenda A be a set of Aboxes, and let Γ be an Abox of observations whose
assertions are to be explained. The goal of the explanation process is to use a set of rules R to derive
“explanations” for elements in Γ . The explanation algorithm implemented in the CASAM abduction
engine works on a set of Aboxes I.
The complete explanation process is implemented by the CAE function:

      Function CAE(Ω, Ξ, Σ, R, S, A):
      Input: a strategy function Ω, a termination function Ξ, the background knowledge Σ, a set of
      rules R, a scoring function S, and an agenda A
      Output: a set of interpretation Aboxes I′
      I′ := assign level(l, A);
      repeat
          I := I′ ;
          (A, α) := Ω(I) // A ∈ I, α ∈ A s.th. requires fiat(α^l) holds;
          l := l + 1;
          I′ := (I \ {A}) ∪ assign level(l, explanation step(Σ, R, S, A, α));
      until Ξ(I) or no A and α can be selected such that I′ ≠ I ;
      return I′
where assign level(l, A) applied to an agenda A is defined by a lambda calculus term as follows:

                          assign level(l, A) = map(λ(A) • assign level(l, A), A)                         (6)

assign level(l, A) takes as input a superscript l and an agenda A.

Applied to a single Abox A, assign level(l, A) superscripts each assertion α of the Abox A
with l if the assertion α does not already have a superscript:

                           assign level(l, A) = {α^l | α ∈ A, α ≠ β^i, i ∈ ℕ}                            (7)

Note that l is a global variable; its starting value is zero, and it is incremented in the CAE function.
The map4 function is defined as follows:
                                                       [
                                        map(f, X) =        {f (x)}                                  (8)
                                                       x∈X

It takes as parameters a function f and a set X and returns a set consisting of the values of f applied
to every element x of X.
The CAE function applies the strategy function Ω in order to decide which assertion to explain next,
uses the termination function Ξ in order to check whether to terminate (e.g., due to resource
constraints), and uses the scoring function S to evaluate explanations.
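The control loop can be sketched in Python as follows. This is a simplification under our own representation: interpretation Aboxes are frozensets of (assertion, level) pairs, so that the level tagging plays the role of assign level, and Ω, Ξ and explanation step are passed in as plain callables:

```python
def cae(omega, xi, explanation_step, agenda):
    """Sketch of the CAE control loop. Interpretation Aboxes are modelled as
    frozensets of (assertion, level) pairs; assertions added at step l carry
    superscript l, which emulates assign_level."""
    level = 0
    interps = {frozenset((a, level) for a in abox) for abox in agenda}
    while True:
        prev = interps
        choice = omega(interps)                 # strategy: (Abox, fiat assertion)
        if choice is None:                      # nothing left to explain
            break
        abox, alpha = choice
        level += 1
        deltas = explanation_step(abox, alpha)  # sets of hypothesized assertions
        interps = (interps - {abox}) | {
            abox | frozenset((a, level) for a in delta) for delta in deltas}
        if xi(interps) or interps == prev:      # termination oracle / fixpoint
            break
    return interps
```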
    The function Ω for the explanation strategy and Ξ for the termination condition are used as an
oracle and must be defined in an application-specific way. In our multimedia interpretation scenario
we assume that the function requires fiat(α^l) is defined as follows:
4
    Please note that in this report, the expression map is used in two different contexts. The first one, MAP,
    denotes the Maximum A Posteriori inference task, whereas the second one, map, is a function used in the
    assign level(l, A) function.
                               requires fiat(α^l) = { true   if l = 0
                                                      false  if l ≠ 0
      The function explanation step is defined as follows.
explanation step(Σ, R, S, A, α) :=
              ⋃                                    consistent completed explanations(Σ, R, A, ∆).
   ∆ ∈ compute all explanations(Σ,R,S,A,α)

We need two additional auxiliary functions.
consistent completed explanations(Σ, R, A, ∆) :=
                   {∆′ | ∆′ = ∆ ∪ A ∪ forward chain(Σ, R, ∆ ∪ A), consistent_Σ(∆′)}

compute all explanations(Σ, R, S, A, α) :=
          maximize(Σ, R, A, {∆ | ∆ = compute explanation(Σ, R, A, α), consistent_{Σ∪A}(∆)}, S).

The function maximize(Σ, R, A, ∆s, S) selects those explanations ∆ ∈ ∆s for which the score S(Σ, R, A, ∆)
is maximal, i.e., for which there exists no other ∆′ ∈ ∆s such that S(Σ, R, A, ∆′) > S(Σ, R, A, ∆). The function
consistent_{(T,A)}(A′) determines whether the Abox A ∪ A′ has a model which is also a model of the Tbox T .
    Note the call to the nondeterministic function compute explanation. It may return different values,
all of which are collected.
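The collect-filter-maximize pipeline can be sketched as follows; the candidate explanations, the consistency check and the scoring function are passed in as plain values and callables, so the signatures only approximate the ones used above:

```python
def maximize(deltas, score):
    """Keep only the explanations with maximal score (all ties are kept)."""
    if not deltas:
        return []
    best = max(score(d) for d in deltas)
    return [d for d in deltas if score(d) == best]

def compute_all_explanations(candidates, consistent, score):
    """The values of the nondeterministic compute_explanation are collected in
    `candidates`; inconsistent ones are dropped and the rest are maximized."""
    return maximize([d for d in candidates if consistent(d)], score)
```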
    In the next section we explain how probabilistic knowledge is used to (i) formalize the effect of an
“explanation”, and (ii) formalize the scoring function S used in the CAE algorithm explained above.
In addition, it is shown how the termination condition (represented by the parameter Ξ in the above
procedure) can be defined based on probabilistic criteria.


3.3     The Interpretation Procedure

This section discusses the interpretation procedure by stating the interpretation problem and
presenting a solution to it. The solution takes the form of a probabilistic interpretation algorithm
which calls the CAE function described in the previous section. For this algorithm, a termination
function and a scoring function are defined. The termination function determines when the
interpretation process can be stopped, namely when continuing the process no longer produces
significant changes in the results. The scoring function defined in this section assigns scores to
interpretation Aboxes.

Problem The objective of the interpretation component is the generation of interpretations for the
observations. An interpretation is an Abox which contains high-level concept assertions. Since in
artificial intelligence problems are commonly formalized from the perspective of an agent, the
interpretation problem is restated in these terms as follows:
Consider an intelligent agent and some percepts in an environment, where the percepts are the analysis
results of KDMA and HCI. The objective of this agent is to find explanations for the existence of the
percepts. The questions are how the interpretation Aboxes are determined and how long the interpretation
process must be performed by the agent. The functionality of this Media Interpretation Agent is
presented in the MI Agent algorithm in Section 3.4.

Solution In the following, a probabilistic interpretation algorithm is presented which solves the
problem stated above and, at the same time, illustrates a new perspective on the interpretation
process and the reason why it is performed. Assume that the media interpretation component
receives a weighted Abox A from KDMA and HCI which contains observations. The operation
P(A, A′, R, WR, T ) applied in the algorithm is defined as follows:
the function P(A, A′, R, WR, T ) determines the probability of the Abox A with respect to the Abox
A′, a set of rules R, a set of weighted rules WR, and the Tbox T , where A ⊆ A′. Note that R is a
set of forward and backward chaining rules. The probability is computed using the
Markov logic formalism as follows:
                      P(A, A′, R, WR, T ) = P_{MLN(A,A′,R,WR,T)}(Q(A) | e(A′))                       (9)

Q(A) denotes an event composed of all assertions which appear in the Abox A. Assume that Abox A
contains n assertions α1, . . . , αn. Then the event for the Abox A is defined as follows:

                                Q(A) = ⟨α1 = true ∧ . . . ∧ αn = true⟩                              (10)

where the assertions α1, . . . , αn are treated as the random variables (ground atoms) of the MLN.
Assume that an Abox A contains m assertions α1, . . . , αm. Then, the evidence vector e(A) is defined
as follows:

                                  e(A) = ⟨α1 = true, . . . , αm = true⟩                             (11)
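For intuition, the conditional probability of equation (9) can be sketched by brute-force enumeration of worlds. This simplification (ours, for illustration) treats every weighted assertion as a unit formula of the MLN, whereas the full construction also grounds rules and Tbox axioms; strict knowledge is encoded as hard constraints with weight ∞:

```python
import itertools
import math

def world_weight(world, weighted_atoms):
    """exp(sum of weights of the satisfied unit formulas); a violated hard
    constraint (weight = inf) gives the world weight zero."""
    s = 0.0
    for atom, w in weighted_atoms:
        if world[atom]:
            if not math.isinf(w):
                s += w
        elif math.isinf(w):
            return 0.0
    return math.exp(s)

def prob(query, evidence, weighted_atoms):
    """P(query | evidence) by enumerating all worlds over the ground atoms."""
    atoms = sorted({a for a, _ in weighted_atoms} | set(query) | set(evidence))
    num = den = 0.0
    for bits in itertools.product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if not all(world[a] for a in evidence):
            continue                       # condition on the evidence e(A')
        w = world_weight(world, weighted_atoms)
        den += w
        if all(world[a] for a in query):   # the event Q(A)
            num += w
    return num / den
```

Conditioning on more evidence changes the probability of a query atom exactly as in equation (9); exact enumeration is of course exponential and only feasible for tiny examples.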

In order to answer the query P_{MLN(A,A′,R,WR,T)}(Q(A) | e(A′)), the function MLN(A, A′, R, WR, T )
is called. This function builds the Markov logic network MLN based on the Aboxes A and A′, the
rules R, the weighted rules WR and the Tbox T , which is a time-consuming process. Note that in
theory the above function is called not only once but several times; in a practical system, this might
be handled more efficiently. The function returns a Markov logic network (F_MLN, W_MLN), which we
define here as the union of the following sets of weighted formulas:

                    {(α, w)}   if ⟨w, α⟩ ∈ A          (weighted assertions of A)
                    {(α, ∞)}   if α ∈ A               (strict assertions of A)
                    {(α, w)}   if ⟨w, α⟩ ∈ A′         (weighted assertions of A′)
                    {(α, ∞)}   if α ∈ A′              (strict assertions of A′)
                    {(α, ∞)}   if α ∈ R               (rules become hard formulas)
                    {(α, w)}   if ⟨w, α⟩ ∈ WR         (weighted rules)
                    {(α, ∞)}   if α ∈ T               (Tbox axioms become hard formulas)
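A sketch of this construction in Python, under the assumption (ours, for illustration) that a weighted assertion or rule is represented as a (w, α) pair and a strict assertion, rule or axiom as a bare string:

```python
import math

def mln(A, A_prime, R, WR, T):
    """Sketch of MLN(A, A', R, WR, T): weighted assertions <w, alpha> keep
    their weight w, while strict assertions, rules and Tbox axioms become
    hard constraints with weight infinity."""
    pairs = set()
    for abox in (A, A_prime):
        for entry in abox:
            if isinstance(entry, tuple):          # weighted: <w, alpha>
                w, alpha = entry
                pairs.add((alpha, w))
            else:                                 # strict assertion
                pairs.add((entry, math.inf))
    for formula in set(R) | set(T):               # rules and Tbox axioms: hard
        pairs.add((formula, math.inf))
    for w, formula in WR:                         # weighted rules keep weights
        pairs.add((formula, w))
    return pairs
```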

In the following, the interpretation algorithm Interpret is presented:

   Function Interpret(A, CurrentI, Γ , T , FR, BR, WR, ε)
   Input: an agenda A, a current interpretation Abox CurrentI, an Abox of observations Γ , a
   Tbox T , a set of forward chaining rules FR, a set of backward chaining rules BR, a set of
   weighted rules WR, and the desired precision of the results ε
   Output: an agenda A′, a new interpretation Abox NewI, and Abox differences for additions
   ∆1 and omissions ∆2
   i := 0 ;
   p0 := P(Γ , Γ , R, WR, T ) ;
   Ξ := λ(A) • i := i + 1; p_i := max_{A∈A} P(Γ , A ∪ A0, R, WR, T ); return |p_i − p_{i−1}| < ε_i ;
   Σ := (T , ∅);
   R := FR ∪ BR;
   S := λ((T , A0), R, A, ∆) • P(Γ , A ∪ A0 ∪ ∆, R, WR, T );
   A′ := CAE(Ω, Ξ, Σ, R, S, A);
   NewI := argmax_{A∈A′} P(Γ , A, R, WR, T );
   ∆1 := AboxDiff(NewI, CurrentI); // additions
   ∆2 := AboxDiff(CurrentI, NewI); // omissions
   return (A′, NewI, ∆1 , ∆2 );
In the above algorithm, the termination function Ξ and the scoring function S are defined by lambda
calculus terms. The termination condition Ξ of the algorithm is that no significant change can be
seen between the successive probabilities p_{i−1} and p_i (scores) of the interpretation Aboxes
generated at two successive levels i − 1 and i. In this case, the current interpretation Abox CurrentI is
preferred to the new interpretation Abox NewI. In the next step, the CAE function is called, which
returns the agenda A′. Afterwards, the interpretation Abox NewI with the maximum score among the
Aboxes A of A′ is selected. Additionally, the Abox differences ∆1 and ∆2, respectively for additions
and omissions, between the interpretation Aboxes CurrentI and NewI are calculated. In the following,
the strategy function Ω is defined, which is one of the parameters of the CAE function:

      Function Ω(I)
      Input: a set of interpretation Aboxes I
      Output: an Abox A and a fiat assertion α
      As := {A ∈ I | ¬∃A′ ∈ I, A′ ≠ A : ∃α′^{l′} ∈ A′ : ∀α^l ∈ A : l′ < l};
      A := random select(As);
      min αs := {α^l ∈ A | ¬∃α′^{l′} ∈ A : α′^{l′} ≠ α^l ∧ l′ < l};
      return (A, random select(min αs));

In the above strategy function Ω, first the set of those Aboxes of I whose assertion superscripts are
minimal is determined. In the next step, an Abox A is randomly selected from this set. Afterwards,
the set min αs is determined, which contains the assertions α of A whose superscripts are minimal.
These are the assertions which require explanation. The strategy function returns as output an Abox
A and an assertion α which requires explanation.
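Representing interpretation Aboxes as frozensets of (assertion, level) pairs (an assumption of this sketch), the strategy Ω can be approximated as:

```python
import random

def omega(interps):
    """Sketch of the strategy Omega: among the Aboxes whose minimal assertion
    level is lowest, one is picked at random; from it, one of its
    minimal-level assertions is returned as the next fiat assertion."""
    def min_level(abox):
        return min(l for _, l in abox)
    lowest = min(min_level(a) for a in interps)
    candidates = [a for a in interps if min_level(a) == lowest]
    abox = random.choice(candidates)
    min_alphas = [(alpha, l) for alpha, l in abox if l == lowest]
    return abox, random.choice(min_alphas)
```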

3.4     The Media Interpretation Agent
In the following, the M I Agent function is presented which calls the Interpret function:
      Function MI Agent(Q, Partners, Die, (T , A0), FR, BR, WR, ε)
      Input: a queue of percept results Q, a set of partners Partners, a function Die for terminating
      the process, the background knowledge (T , A0), a set of forward chaining rules FR, a
      set of backward chaining rules BR, a set of weighted rules WR, and the desired precision of the
      results ε
      Output: –
      CurrentI := ∅;
      A′′ := {∅};
      repeat
          Γ := extractObservations(Q);
          W := MAP(Γ , WR, T ) ;
          Γ′ := Select(W, Γ );
          A′ := filter (λ(A) • consistent_Σ(A), map(λ(A) • Γ′ ∪ A ∪ A0 ∪ forward chain(Σ, FR, Γ′ ∪ A ∪ A0),
                {select(MAP(Γ′ ∪ A ∪ A0, WR, T ), Γ′ ∪ A ∪ A0) | A ∈ A′′}));
          (A′′, NewI, ∆1 , ∆2 ) := Interpret(A′, CurrentI, Γ′, T , FR, BR, WR ∪ Γ , ε);
          CurrentI := NewI;
          Communicate(∆1 , ∆2 , Partners);
          A′′ := manage agenda(A′′);
      until Die() ;
where the filter function is defined as follows:

                                                 ⋃    { {x}   if f(x) = true
                                  filter (f, X) =     {                                            (12)
                                                 x∈X  { ∅     else

The filter function takes as parameters a function f and a set X and returns the subset of X
consisting of those elements x for which f(x) is true.
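Both set operators can be sketched directly in Python; the names map_fn and filter_fn are ours, chosen to avoid shadowing the built-ins:

```python
def map_fn(f, xs):
    """map(f, X) of equation (8): the image of the set X under f."""
    return {f(x) for x in xs}

def filter_fn(f, xs):
    """filter(f, X) of equation (12): the subset of X on which f is true."""
    return {x for x in xs if f(x)}
```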

In the MI Agent function, the current interpretation CurrentI and the agenda A′′ are initialized
to the empty set and to {∅}, respectively. Since the agent operates incrementally, its behaviour is
defined by a repeat–until loop. The percept results Γ are sent by KDMA and HCI to the queue Q.
In order to take the observations Γ from the queue Q, the MI Agent calls the extractObservations
function.
The MAP(Γ , WR, T ) function determines the most probable world of the observations Γ with re-
spect to a set of weighted rules WR and the Tbox T . This function actually performs the MAP
process mentioned in Chapter 2. It returns a vector W which consists of zeros and ones assigned
to the ground atoms of the considered world. The assertions with assigned zeros and ones are called
negative and positive assertions, respectively.
The Select(W, Γ ) function selects the positive assertions according to the bit vector W from the input
Abox Γ . The selected positive assertions which require explanations are also known as fiat assertions.
This operation returns as output an Abox Γ′ with the characteristic Γ′ ⊆ Γ .
In the next step, the set of forward chaining rules FR is applied to all the Aboxes of A′′. The
assertions generated in this process are added to the respective Abox A. Afterwards, only the
consistent Aboxes are selected; the inconsistent Aboxes are not considered in the following steps.

In the next step, the Interpret function is called to determine the new agenda A′′, the new inter-
pretation NewI and the Abox differences ∆1 and ∆2 for additions and omissions between CurrentI
and NewI. Afterwards, CurrentI is set to NewI, and the MI Agent function communicates the
Abox differences ∆1 and ∆2 to the partners. Additionally, the Tbox T , the set of forward chaining
rules FR, the set of backward chaining rules BR, and the set of weighted rules WR can be learnt by a
Learn function. The MI Agent function terminates when the Die() function returns true.

Note that the MI Agent waits at the function call extractObservations(Q) if Q = ∅.

After presenting the above algorithms, the questions raised earlier can be discussed. A reason
for performing the interpretation process and explaining the fiat assertions is that the probability
P(A, A′, R, WR, T ) increases through the interpretation process. In other words, by explaining
the observations, the agent's belief in the percepts increases. This is a new perspective on why the
interpretation process is performed.

The question whether there is a measure for stopping the interpretation process can also be answered
positively. The measure is the termination condition Ξ of the algorithm, |p_i − p_{i−1}| < ε_i. The
reason for selecting ε_i and not ε as the upper limit of the termination condition is to suppress
oscillating behaviour of the results: the precision interval is tightened step by step during the
interpretation process. The function manage agenda is explained in Section 6. Before, we discuss an
example for interpreting a single video shot in Section 4, and a scene in Section 5.


4   Video Shot Interpretation

One of the main innovations introduced in the previous section, namely the probabilistic
preference measure used to control the space of possible interpretations, is exemplified here using
examples inspired by the environmental domain used in the project CASAM.
    Note that this example is not constructed to show the possible branchings of the interpretation
process. Its purpose is to show how the probabilities of the most probable world of observations,
P(A0, A, R, WR, T ), behave during the interpretation process.
    At the beginning of this example, the signature of the knowledge base is presented. The set of all
concept names CN is divided into two disjoint sets Events and PhysicalThings such that
    CN = Events ∪ PhysicalThings where these two sets are defined as follows:
    Events = {CarEntry, EnvConference, EnvP rot, HealthP rot}
    PhysicalThings = {Car, DoorSlam, Building, Environment, Agency}
EnvConference, EnvP rot and HealthP rot denote respectively environmental conference, environmen-
tal protection and health protection.
    The set of role names RN is defined as follows:
   RN = {Causes, OccursAt, HasAgency, HasT opic, HasSubject, HasObject, HasEffect,
         HasSubEvent, HasLocation}
In the following, the set of individual names IN is given:
    IN = {C1 , DS1 , ES1 , Ind42 , Ind43 , Ind44 , Ind45 , Ind46 , Ind47 , Ind48 }
    Note that the notation in this example follows Alchemy conventions, i.e., instance, concept
and role names begin with capital letters. In the following, the set of forward chaining rules FR is
defined:
    FR = {∀x CarEntry(x) → ∃y Building(y), OccursAt(x, y),
            ∀x EnvConference(x) → ∃y Environment(y), HasT opic(x, y),
            ∀x EnvP rot(x) → ∃y Agency(y), HasAgency(x, y)}
Similarly, the set of backward chaining rules BR is given as follows:
   BR = {Causes(x, y) ← CarEntry(z), HasObject(z, x), HasEffect(z, y), Car(x), DoorSlam(y),
OccursAt(x, y) ← EnvConference(z), HasSubEvent(z, x), HasLocation(z, y), CarEntry(x), Building(y),
HasT opic(x, y) ← EnvP rot(z), HasSubEvent(z, x), HasObject(z, y), EnvConference(x), Environment(y),
HasAgency(x, y) ← HealthP rot(z), HasObject(z, x), HasSubject(z, y), EnvP rot(x), Agency(y)}
   In the following, a set of weighted rules WR is defined where all rules have the same high weight
equal to 5:
    WR = {5 ∀x, y, z CarEntry(z)∧HasObject(z, x)∧HasEffect(z, y) → Car(x)∧DoorSlam(y)∧Causes(x, y),
5 ∀x, y, z EnvConference(z)∧HasSubEvent(z, x)∧HasLocation(z, y) → CarEntry(x)∧Building(y)∧OccursAt(x, y),
5 ∀x, y, z EnvP rot(z) ∧ HasSubEvent(z, x) ∧ HasObject(z, y) → EnvConference(x) ∧ Environment(y) ∧
HasT opic(x, y),
5 ∀x, y, z HealthP rot(z)∧HasObject(z, x)∧HasSubject(z, y) → EnvP rot(x)∧Agency(y)∧HasAgency(x, y)}
   Note that the weighted rules WR and their weights can be learnt by the machine learning com-
ponent. The initial value selected for ε in this example is 0.05. In the following, ∆1 and ∆2 denote
respectively the set of assertions generated by a forward chaining rule and the set of assertions
hypothesized by a backward chaining rule at each interpretation level.

Let us assume that the media interpretation agent receives the following weighted Abox A:
    A = {1.3 Car(C1 ), 1.2 DoorSlam(DS1 ), −0.3 EngineSound(ES1 ), Causes(C1 , DS1 )}
The first applied operation to A is the M AP function which returns the bit vector W = h1, 1, 0, 1i.
This vector is composed of positive and negative events (bits). By applying the Select function to
W and the input Abox A, the assertions from the input Abox A are selected that correspond to the
positive events in W . Additionally, the assigned weights to the positive assertions are also taken from
the input Abox A. In the following, Abox A0 is depicted which contains the positive assertions:
    A0 = {1.3 Car(C1 ), 1.2 DoorSlam(DS1 ), Causes(C1 , DS1 )}
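The MAP-and-select step on this Abox can be sketched as follows; the list-based Abox representation and the helper name select are our own, and we assume the Abox is ordered consistently with the bit vector W:

```python
def select(W, abox):
    """Sketch of Select(W, Gamma): keep exactly those assertions whose bit in
    the MAP world W is 1 (the positive assertions), with their weights."""
    return [entry for bit, entry in zip(W, abox) if bit == 1]

# The running example: weighted assertions as (weight, assertion) pairs,
# strict assertions carrying weight None.
A = [(1.3, 'Car(C1)'), (1.2, 'DoorSlam(DS1)'),
     (-0.3, 'EngineSound(ES1)'), (None, 'Causes(C1,DS1)')]
W = [1, 1, 0, 1]
A0 = select(W, A)   # EngineSound(ES1) is dropped as a negative assertion
```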
At this step, p0 = P(A0, A0, R, WR, T ) = 0.755. Since no forward chaining rule from FR is
applicable to Abox A0, ∆1 = ∅ and as a result A0 = A0 ∪ ∅. The next step is the application of the
backward chain function, where the following backward chaining rule from BR can be applied to Abox A0:
    Causes(x, y) ← CarEntry(z), HasObject(z, x), HasEffect(z, y), Car(x), DoorSlam(y)
    Consequently, by applying the above rule the next set of assertions is hypothesized:
    ∆2 = {CarEntry(Ind42 ), HasObject(Ind42 , C1 ), HasEffect(Ind42 , DS1 )}
    which are considered as strict assertions. Consequently, A1 is defined as follows: A1 = A0 ∪ ∆2 .
In the above Abox, p1 = P(A0, A1, R, WR, T ) = 0.993. As can be seen, p1 > p0, i.e.,
P(A0, Ai, R, WR, T ) increases by adding the newly hypothesized assertions. This shows that the new
assertions are considered as additional support. The termination condition of the algorithm is not
fulfilled; therefore the algorithm continues processing. At this level, it is still not known whether Abox
A1 can be considered the final interpretation Abox. Thus, the process is continued with another
level. Consider the next forward chaining rule:
    ∀x CarEntry(x) → ∃y Building(y), OccursAt(x, y)
    By applying the above rule, the next set of assertions is generated namely:
    ∆1 = {Building(Ind43 ), OccursAt(Ind42 , Ind43 )}
    The new generated assertions are also considered as strict assertions. In the following, the expanded
Abox A1 is defined as follows: A1 = A1 ∪ ∆1 .
    Let us assume the next backward chaining rule from BR:
   OccursAt(x, y) ← EnvConference(z), HasSubEvent(z, x), HasLocation(z, y), CarEntry(x), Building(y)
   Consequently, by applying the above abduction rule the next set of assertions is hypothesized:
   ∆2 = {EnvConference(Ind44 ), HasSubEvent(Ind44 , Ind42 ), HasLocation(Ind44 , Ind43 )}
   which are considered as strict assertions. Consequently, A2 = A1 ∪ ∆2 .
    In the above Abox, p2 = P(A0, A2, R, WR, T ) = 0.988. As can be seen, p2 < p1, i.e.,
P(A0, Ai, R, WR, T ) decreases slightly by adding the newly hypothesized assertions. Since the termi-
nation condition of the algorithm is fulfilled, Abox A1 can be considered the final interpretation
Abox. To examine the further behaviour of the probabilities, the process is nevertheless continued.
Consider the next forward chaining rule from FR:
   ∀x EnvConference(x) → ∃y Environment(y), HasT opic(x, y)
   By applying the above rule, new assertions are generated.
   ∆1 = {Environment(Ind45 ), HasT opic(Ind44 , Ind45 )}
   In the following, the expanded Abox A2 is defined: A2 = A2 ∪ ∆1 .
Consider the next backward chaining rule from BR:
   HasT opic(x, y) ← EnvP rot(z), HasSubEvent(z, x), HasObject(z, y), EnvConference(x), Environment(y)
By applying the above abduction rule, the following set of assertions is hypothesized:
    ∆2 = {EnvP rot(Ind46 ), HasSubEvent(Ind46 , Ind44 ), HasObject(Ind46 , Ind45 )}
    which are considered as strict assertions. In the following, A3 is defined as follows A3 = A2 ∪ ∆2 .
In the above Abox A3, p3 = P(A0, A3, R, WR, T ) = 0.99. As can be seen, p3 > p2, i.e.,
    P(A0, Ai, R, WR, T ) increases slightly by adding the newly hypothesized assertions.
    Consider the next forward chaining rule:
    ∀x EnvP rot(x) → ∃y Agency(y), HasAgency(x, y)
    By applying the above rule, the next assertions are generated:
    ∆1 = {Agency(Ind47 ), HasAgency(Ind46 , Ind47 )}
    As a result, the expanded Abox A3 is presented as follows: A3 = A3 ∪ ∆1 .
    Let us consider the next backward chaining rule from BR:
   HasAgency(x, y) ← HealthP rot(z), HasObject(z, x), HasSubject(z, y), EnvP rot(x), Agency(y)
   Consequently, new assertions are hypothesized by applying the above abduction rule, namely:
   ∆2 = {HealthP rot(Ind48 ), HasObject(Ind48 , Ind46 ), HasSubject(Ind48 , Ind47 )}
   which are considered as strict assertions. Consequently, A4 is defined as follows: A4 = A3 ∪ ∆2 .
    In the above Abox, p4 = P(A0, A4, R, WR, T ) = 0.985. As can be seen, p4 < p3, i.e.,
    P(A0, Ai, R, WR, T ) decreases slightly by adding the newly hypothesized assertions.
Evaluation of the Results:
   The determined probability values P (A0 , Ai , R, WR, T ) of this example are summarized in the next
table which shows clearly the behaviour of the probabilities stepwise after performing the interpretation
process:
                                  i Abox Ai pi = P (A0 , Ai , R, WR, T )
                                  0   A0            p0 = 0.755
                                  1   A1            p1 = 0.993
                                  2   A2            p2 = 0.988
                                  3   A3            p3 = 0.99
                                  4   A4            p4 = 0.985
                               Table 1. Summary of the probability values



    In the above table, variable i denotes the successive levels of the interpretation process. In this
example, the interpretation process is performed four times consecutively. As can be seen, at the
first interpretation level the probability p1 increases strongly in comparison to p0. At the second,
third and fourth interpretation levels, the probability values decrease slightly in comparison
to p1. This means no significant changes can be seen in the results. In other words, the determination
of A3 and A4 was not required at all, but the determination of A2 was required to detect the slight
difference |p2 − p1| < ε2. Consequently, Abox A1 is considered the final interpretation Abox.


5    Preference-based Scene Interpretation

In this example, we discuss how the video shot interpretation process can be performed by consid-
ering the results of two consecutive video shots. For the interpretation of each video shot we require
information about the previous video shots; otherwise the interpretation process does not work cor-
rectly. The question is which assertions have to be considered from the previous video shots. As
discussed above, we would like to consider those assertions from the previous video shots which
increase P(A0, Ai, R, WR, T ). At the beginning of this example, the signature of the knowledge
base is presented. The set of concept names CN is divided into two disjoint sets Events and
PhysicalThings, which are defined as follows:
    Events = {CarEntry, CarExit, CarRide}
    PhysicalThings = {Car, DoorSlam}
    Additionally, the set of the role names RN and the set of the individual names IN are represented
as follows:
    RN = {Causes, HasObject, HasEffect, Before, HasStartEvent, HasEndEvent}
    IN = {C1 , C2 , DS1 , DS2 , Ind41 , Ind42 , Ind44 }
    The Tbox T contains the axiom CarEntry v ¬CarExit. In the following, the set of forward
chaining rules FR is given:
    FR = {
    ∀x, xl, y, yl, w, z AudioSeg(x), HasSegLoc(x, xl), V ideoSeg(y), HasSegLoc(y, yl), IsSmaller(xl, yl),
                        Depicts(x, w), Depicts(y, z), CarEntry(w), CarEntry(z) → Before(z, w),
    ∀x, xl, y, yl, w, z AudioSeg(x), HasSegLoc(x, xl), V ideoSeg(y), HasSegLoc(y, yl), IsSmaller(xl, yl),
                        Depicts(x, w), Depicts(y, z), CarEntry(w), CarExit(z) → Before(z, w),
    ∀x, xl, y, yl, w, z AudioSeg(x), HasSegLoc(x, xl), V ideoSeg(y), HasSegLoc(y, yl), IsSmaller(xl, yl),
                        Depicts(x, w), Depicts(y, z), CarExit(w), CarEntry(z) → Before(z, w),
    ∀x, xl, y, yl, w, z AudioSeg(x), HasSegLoc(x, xl), V ideoSeg(y), HasSegLoc(y, yl), IsSmaller(xl, yl),
                        Depicts(x, w), Depicts(y, z), CarExit(w), CarExit(z) → Before(z, w)}
where AudioSeg, HasSegLoc and V ideoSeg denote AudioSegment, HasSegmentLocator and V ideoSegment,
respectively. Note that the concepts and roles in FR which are not given in CN and RN appear only
in the multimedia content ontology. The multimedia content ontology determines the structure of the
multimedia document. Additionally, it determines whether the concepts originate from video, audio
or text. The above rules state that a CarEntry or CarExit assertion from the first video shot
appears chronologically before a CarEntry or CarExit assertion from the second video shot.
The set of backward chaining rules BR is presented as follows:
   BR = {Causes(x, y) ← CarEntry(z), HasObject(z, x), HasEffect(z, y), Car(x), DoorSlam(y),
         Causes(x, y) ← CarExit(z), HasObject(z, x), HasEffect(z, y), Car(x), DoorSlam(y),
         Before(x, y) ← CarRide(z), HasStartEvent(z, x), HasEndEvent(z, y), CarEntry(x), CarExit(y)}
Additionally, the set of weighted rules is defined as follows:
   WR = {5 ∀x, y, z CarEntry(z)∧HasObject(z, x)∧HasEffect(z, y) ⇒ Car(x)∧DoorSlam(y)∧Causes(x, y),
         5 ∀x, y, z CarExit(z)∧HasObject(z, x)∧HasEffect(z, y) ⇒ Car(x)∧DoorSlam(y)∧Causes(x, y),
         5 ∀x, y, z, k, m CarRide(z) ∧ HasStartEvent(z, x) ∧ HasEndEvent(z, y) ∧ HasObject(x, k) ∧
                          HasObject(y, m) ⇒ CarEntry(x) ∧ CarExit(y) ∧ Car(k) ∧ Car(m) ∧ k = m}
Consider the next figure as the first video shot of a video. Let us assume that the analysis results
of the first video shot, represented in the Abox A1, are sent to the queue Q:
    A1 = {1.3 Car(C1 ), 1.2 DoorSlam(DS1 ), Causes(C1 , DS1 )}
For the interpretation of the first video shot, we call the function
MI Agent(Q, Partners, Die, (T , A0), FR, BR, WR, ε). At the beginning of this function, some
variables are initialized, namely CurrentI = ∅ and A′′ = {∅}. Afterwards, extracting the observations
from the queue Q is performed, which leads to Γ = A1. The most probable world W = ⟨1, 1, 1⟩
is determined in the next step, and selecting the positive assertions and their related weights
determines Γ′ = Γ . At this step, A = ∅ since A′′ = {∅}; additionally, A0 = ∅. Consequently,
MAP(Γ′, WR, T ) = W and Select(W, Γ′) = Γ′.
forward_chain(Σ, FR, Γ′) = ∅ since there is no forward chaining rule applicable to Γ′, hence A′ = Γ′.
The function Interpret(A′, CurrentI, Γ′, T, FR, BR, WR ∪ Γ, ε) is called in the next step, which
determines p0 = P(Γ′, Γ′, R, WR, T) = 0.733. The Interpret function calls the CAE function, which
returns A′ = {Γ′ ∪ ∆1, Γ′ ∪ ∆2}, where the two possible explanations ∆1 and ∆2 are defined as follows:
    ∆1 = {CarEntry(Ind41), HasObject(Ind41, C1), HasEffect(Ind41, DS1)}
    ∆2 = {CarExit(Ind41), HasObject(Ind41, C1), HasEffect(Ind41, DS1)}
    Each of the above interpretation Aboxes has a scoring value:
    p1 = P(Γ′, Γ′ ∪ ∆1, R, WR, T) = 0.941 and p2 = P(Γ′, Γ′ ∪ ∆2, R, WR, T) = 0.935. NewI =
Γ′ ∪ ∆1 since this is the interpretation Abox with the maximum scoring value. The termination
condition is not fulfilled since p1 − p0 = 0.208 > 0.05. The Abox difference for additions is defined as
follows: ∆add = NewI − CurrentI = NewI − ∅ = NewI. Similarly, ∆omi = ∅ is the Abox difference
for omissions. The CAE function returns NewI, A′ and the Abox differences ∆add and ∆omi to the
Interpret function. The next figure depicts the second video shot.
Assume that the analysis results of the second video shot, given in the next Abox, are sent to the
queue Q:
    A2 = {1.3 Car(C2), 1.2 DoorSlam(DS2), Causes(C2, DS2)}
Similarly, for the interpretation of the second video shot we call the function
MI_Agent(Q, Partners, Die, (T, A0), FR, BR, WR, ε). The observation extraction process from Q
leads to Γ = A2. Afterwards, the most probable world W = ⟨1, 1, 1⟩ is determined, and applying the
Select function to W gives Γ′ = A2.
    Consider A ∈ A′′ where A′′ = {A1 ∪ ∆1, A1 ∪ ∆2}. For the two choices of A, we obtain
Γ′ ∪ A ∈ {A2 ∪ A1 ∪ ∆1, A2 ∪ A1 ∪ ∆2}. Applying MAP(Γ′ ∪ A, WR, T) gives W = ⟨1, . . . , 1⟩,
and applying the Select(W, Γ′ ∪ A) function gives {A2 ∪ A1 ∪ ∆1, A2 ∪ A1 ∪ ∆2}. Since no forward
chaining rule is applicable to the above set and the set contains only consistent Aboxes,
A′ = {A2 ∪ A1 ∪ ∆1, A2 ∪ A1 ∪ ∆2}. In the next step, the function
Interpret(A′, CurrentI, Γ′, T, FR, BR, WR ∪ Γ, ε) is called, which determines P(Γ′, Γ′, R, WR, T) =
0.733. Afterwards, the CAE function is called, which determines the next explanations:
    ∆3 = {CarEntry(Ind42), HasObject(Ind42, C2), HasEffect(Ind42, DS2)}
    ∆4 = {CarExit(Ind42), HasObject(Ind42, C2), HasEffect(Ind42, DS2)}
    The CAE function generates the following agenda which contains all possible interpretation Aboxes:
    {I1 , I2 , I3 , I4 }
    where:
                I1 = A2 ∪ A1 ∪ ∆1 ∪ ∆3                I2 = A2 ∪ A1 ∪ ∆1 ∪ ∆4
                I3 = A2 ∪ A1 ∪ ∆2 ∪ ∆3                I4 = A2 ∪ A1 ∪ ∆2 ∪ ∆4
    Afterwards, the forward chaining rules are applied to the above agenda. A new assertion Before(Ind41, Ind42)
is generated and added to the four interpretation Aboxes. In the following, the four possible interpre-
tation Aboxes are given:
                I1 = A2 ∪ A1 ∪ ∆1 ∪ ∆3 ∪ {Before(Ind41 , Ind42 )}
                I2 = A2 ∪ A1 ∪ ∆1 ∪ ∆4 ∪ {Before(Ind41 , Ind42 )}
                I3 = A2 ∪ A1 ∪ ∆2 ∪ ∆3 ∪ {Before(Ind41 , Ind42 )}
                I4 = A2 ∪ A1 ∪ ∆2 ∪ ∆4 ∪ {Before(Ind41 , Ind42 )}
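The forward chaining step that adds Before(Ind41, Ind42) can be sketched as follows; the rule fires whenever one event individual stems from an earlier video shot than another. The shot-number bookkeeping and the tuple encoding are simplifying assumptions of this sketch, standing in for the forward rules FR:

```python
# One temporal forward rule: Before(x, y) holds whenever event x originates
# from an earlier video shot than event y.
def forward_chain(events):
    """events: iterable of (individual, shot_number) pairs."""
    return {("Before", x, y)
            for x, sx in events
            for y, sy in events
            if sx < sy}

# Ind41 was abduced for the first shot, Ind42 for the second:
new_assertions = forward_chain([("Ind41", 1), ("Ind42", 2)])
```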
Afterwards, the backward chaining rule is applied, which generates the following set for the inter-
pretation Abox I2 only:
    ∆ = {CarRide(Ind44), HasStartEvent(Ind44, Ind41), HasEndEvent(Ind44, Ind42)}
Consequently, I2 = I2 ∪ ∆. The interpretation Aboxes have the following scoring values:
                P (A1 ∪ A2 , I1 , R, WR, T ) = 0.964
                P (A1 ∪ A2 , I2 , R, WR, T ) = 0.978
                P (A1 ∪ A2 , I3 , R, WR, T ) = 0.952
                P (A1 ∪ A2 , I4 , R, WR, T ) = 0.959
    The above values show that the interpretation Abox I2 has a higher scoring value than the other
interpretation Aboxes. Therefore, the final interpretation Abox is NewI = I2. The Abox differences
for additions and omissions are defined as follows:
    ∆add = A2 ∪ ∆4 ∪ ∆ ∪ {Before(Ind41, Ind42)}         ∆omi = ∅
    For the next interpretation steps, the agenda can continue with I2 and eliminate the other inter-
pretation Aboxes, since this Abox has the highest scoring value.
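The control strategy of the walkthrough above — accept the best-scoring candidate interpretation as long as it improves the score by more than the threshold (0.05 in the example) — can be sketched as follows. The function stubs, string labels, and dictionary-based scoring are purely illustrative; the real agent computes the scores via MAP inference over WR:

```python
# Score-driven interpretation loop with an epsilon termination condition.
def interpret(score_of, candidates_of, start, epsilon=0.05):
    current = start
    p_current = score_of(current)
    while True:
        candidates = candidates_of(current)
        if not candidates:
            return current
        best = max(candidates, key=score_of)
        p_best = score_of(best)
        if p_best - p_current <= epsilon:  # termination condition
            return current
        current, p_current = best, p_best

# Replaying the first video shot with the scoring values from the example:
scores = {"Gamma": 0.733, "Gamma+D1": 0.941, "Gamma+D2": 0.935}
successors = {"Gamma": ["Gamma+D1", "Gamma+D2"], "Gamma+D1": [], "Gamma+D2": []}
result = interpret(scores.__getitem__, successors.__getitem__, "Gamma")
# 0.941 - 0.733 = 0.208 > 0.05, so the best explanation is accepted
```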


6   Manage Agenda

In this section, we introduce some techniques for managing the agenda which improve the performance
of the interpretation process.

 – Elimination of the interpretation Aboxes: This technique is applied if there are multiple interpre-
   tation Aboxes with different scoring values and one of them has a clearly higher scoring value than
   the others. In this case, we select this Abox, eliminate the remaining interpretation Aboxes, and
   continue the interpretation process with the selected Abox.
 – Combining the interpretation Aboxes: Consider the interpretation Aboxes I1 , . . . , In . In order to
   determine the final interpretation Abox, the MAP process can be applied to the union of all
   interpretation Aboxes I1 ∪ . . . ∪ In .
 – Shrinking the interpretation Aboxes: By applying this technique, we decide which assertions
   from the previous video shots have to be considered for the interpretation of the following
   video shots, since considering all assertions of the previous video shots would slow down the inter-
   pretation process. We believe that only the high-level concept assertions from the previous video
   shots play an important role, not the low-level concept assertions.
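The elimination technique, applied to the four interpretation Aboxes of the worked example, can be sketched as follows (the scoring values are taken from the example above; representing the agenda as a dictionary is illustrative):

```python
# Agenda of interpretation Aboxes with their scoring values (from the example).
agenda = {"I1": 0.964, "I2": 0.978, "I3": 0.952, "I4": 0.959}

best = max(agenda, key=agenda.get)  # the Abox with the highest scoring value
agenda = {best: agenda[best]}       # eliminate the remaining interpretation Aboxes
```

The next interpretation step then continues from the single retained Abox, which bounds the growth of the agenda across video shots.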


7   Summary

For multimedia interpretation, a semantically well-founded formalization is required. In accordance
with previous work, a well-founded abduction-based approach is pursued in CASAM. Extending pre-
vious work, abduction is controlled by probabilistic knowledge, and it is carried out in terms of first-
order logic. Rather than merely using abduction for computing explanations by which the observations
are entailed, the approach presented in this paper uses a probabilistic logic to motivate the explanation
endeavor by increasing the belief in the observations. Hence, there exists a certain utility for an agent
for the computational resources it spends on generating explanations. Thus, we have presented a first
attempt to model a media interpretation agent more appropriately.