<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Probabilistic Abduction Engine for Media Interpretation based on Ontologies</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Oliver</forename><surname>Gries</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Hamburg University of Technology</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ralf</forename><surname>Möller</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Hamburg University of Technology</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Anahita</forename><surname>Nafissi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Hamburg University of Technology</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Maurice</forename><surname>Rosenfeld</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Hamburg University of Technology</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kamil</forename><surname>Sokolski</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Hamburg University of Technology</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><surname>Wessel</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Hamburg University of Technology</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Probabilistic Abduction Engine for Media Interpretation based on Ontologies</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">4F874812C6B4EBF829CDCABDC03DD4AF</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T01:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>For multimedia interpretation, and in particular for the combined interpretation of information coming from different modalities, a semantically well-founded formalization is required in the context of an agent-based scenario. Low-level percepts, which are represented symbolically, define the observations of an agent, and interpretations of content are defined as explanations for the observations. We propose an abduction-based formalism that uses description logics for the ontology and Horn rules for defining the space of hypotheses for explanations (i.e., the space of possible interpretations of media content), and we use Markov logic to define the motivation for the agent to generate explanations on the one hand, and for ranking different explanations on the other.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>For multimedia interpretation in the context of an agent-based scenario, and for the combined interpretation of information coming from different modalities in particular, a semantically well-founded formalization is required. Low-level percepts, which are represented symbolically, define the observations of an agent w.r.t. some content, and interpretations of the content are defined as explanations for the observations.</p><p>We propose an abduction-based formalism that uses description logics for the ontology and Horn rules for defining the space of hypotheses for explanations (i.e., the space of possible interpretations of media content). Additionally, we propose the use of Markov logic to define the motivation for the agent to generate explanations on the one hand, and for ranking different explanations on the other.</p><p>Based on a presentation of the most important preliminaries in Section 2, the abduction and interpretation procedures are discussed in detail in Section 3. Optimization techniques for the probabilistic abduction engine are pointed out. In Section 4, a complete example is given, showing the main approach using intermediate steps. Section 7 summarizes this paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Preliminaries</head><p>In this chapter, the most important preliminaries are specified in order to make this document self-contained.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Preliminaries on Description Logics</head><p>For specifying the ontology used to describe low-level analysis results as well as high-level interpretation results, a less expressive description logic is applied in order to facilitate fast computations. We decided to represent the domain knowledge with the DL ALHf−(D) (restricted attributive concept language with role hierarchies, functional roles and concrete domains). We briefly describe our nomenclature in order to make this paper self-contained. For details see <ref type="bibr" target="#b0">[Baader et al., 2003]</ref>.</p><p>In logic-based approaches, atomic representation units have to be specified. The atomic representation units are fixed using a so-called signature. A DL signature is a tuple S = (CN, RN, IN), where CN = {A1, ..., An} is the set of concept names (denoting sets of domain objects) and RN = {R1, ..., Rm} is the set of role names (denoting relations between domain objects). The signature also contains a component IN indicating a set of individuals (names for domain objects).</p><p>In order to relate concept names and role names to each other (terminological knowledge) and to talk about specific individuals (assertional knowledge), a knowledge base has to be specified. An ALHf− knowledge base Σ_S = (T, A), defined with respect to a signature S, is comprised of a terminological component T (called Tbox) and an assertional component A (called Abox). In the following, we just write Σ if the signature is clear from context.
A Tbox is a set of so-called axioms, which are restricted to the following form in ALHf−:</p><formula xml:id="formula_0">(I) Subsumption: A1 ⊑ A2, R1 ⊑ R2 (II) Disjointness: A1 ⊑ ¬A2 (III) Domain and range restrictions for roles: ∃R.⊤ ⊑ A, ⊤ ⊑ ∀R.A (IV) Functional restriction on roles: (≤ 1 R) (V) Local range restrictions for roles: A1 ⊑ ∀R.A2 (VI) Definitions with value restrictions:</formula><formula xml:id="formula_1">A ≡ A0 ⊓ ∀R1.A1 ⊓ ... ⊓ ∀Rn.An</formula><p>With axioms of form (I), concept (role) names can be declared to be subconcepts (subroles) of each other. Axioms of form (II) denote disjointness between concepts. Axioms of type (III) introduce domain and range restrictions for roles. Axioms of form (IV) introduce so-called functional restrictions on roles, and axioms of type (V) specify local range restrictions (using value restrictions, see below). With axioms of kind (VI), so-called definitions (with necessary and sufficient conditions) can be specified for the concept names found on the left-hand side of the ≡ sign. In the axioms, so-called concepts are used. Concepts are concept names or expressions of the form ⊤ (anything), ⊥ (nothing), ¬A (atomic negation), (≤ 1 R) (role functionality), ∃R.⊤ (limited existential restriction), ∀R.A (value restriction) and C1 ⊓ ... ⊓ Cn (concept conjunction). Knowledge about individuals is represented in the Abox part of Σ. An Abox A is a set of expressions of the form A(a) or R(a, b) (concept assertions and role assertions, respectively), where A stands for a concept name, R stands for a role name, and a, b stand for individuals. Aboxes can also contain equality assertions (a = b) and inequality assertions (a ≠ b). We say that the unique name assumption (UNA) is applied if a ≠ b is added for all pairs of distinct individuals a and b.</p><p>In order to understand the notion of logical entailment, we introduce the semantics of ALHf−.
In DLs such as ALHf−, the semantics is defined with interpretations I = (Δ^I, ·^I), where Δ^I is a nonempty set of domain objects (called the domain of I) and ·^I is an interpretation function which maps individuals to objects of the domain (a^I ∈ Δ^I), atomic concepts to subsets of the domain (A^I ⊆ Δ^I) and roles to subsets of the cartesian product of the domain (R^I ⊆ Δ^I × Δ^I). The interpretation of arbitrary ALHf− concepts is then defined by extending ·^I to all ALHf− concept constructors as follows:</p><formula xml:id="formula_2">⊤^I = Δ^I ⊥^I = ∅ (¬A)^I = Δ^I \ A^I (≤ 1 R)^I = {u ∈ Δ^I | ∀v1, v2 : ((u, v1) ∈ R^I ∧ (u, v2) ∈ R^I) → v1 = v2} (∃R.⊤)^I = {u ∈ Δ^I | ∃v : (u, v) ∈ R^I} (∀R.C)^I = {u ∈ Δ^I | ∀v : (u, v) ∈ R^I → v ∈ C^I} (C1 ⊓ ... ⊓ Cn)^I = C1^I ∩ ... ∩ Cn^I</formula><p>An interpretation I satisfies an axiom or assertion if the corresponding set-theoretic condition holds (e.g., I satisfies A1 ⊑ A2 if A1^I ⊆ A2^I, and A(a) if a^I ∈ A^I); it satisfies a Tbox T (an Abox A) if it satisfies all axioms of T (all assertions of A). If it satisfies both T and A, it is called a model of Σ. Finally, if there is a model of Σ (i.e., a model for T and A), then Σ is called satisfiable.</p><p>We are now able to define the entailment relation |=. A DL knowledge base Σ logically entails an assertion α (symbolically Σ |= α) if α is satisfied in all models of Σ. For an Abox A, we say Σ |= A if Σ |= α for all α ∈ A.</p></div>
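The set-theoretic constructor semantics above can be illustrated with a small executable sketch (ours, purely illustrative and not part of the engine) that evaluates three ALHf− constructors over a finite interpretation, with the domain given as a set and the role extension as a set of pairs:

```python
def interp_forall(domain, role_ext, concept_ext):
    """(forall R.C)^I: elements all of whose R-successors are in C^I."""
    return {u for u in domain
            if all(v in concept_ext for (x, v) in role_ext if x == u)}

def interp_exists_top(domain, role_ext):
    """(exists R.Top)^I: elements having at least one R-successor."""
    return {u for u in domain if any(x == u for (x, _) in role_ext)}

def interp_at_most_one(domain, role_ext):
    """(<= 1 R)^I: elements with at most one R-successor."""
    return {u for u in domain
            if len({v for (x, v) in role_ext if x == u}) <= 1}
```

For instance, with domain {a, b, c}, R^I = {(a, b), (a, c)} and C^I = {b, c}, the element a has an R-successor but is excluded from (≤ 1 R)^I because it has two.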
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Substitutions, Queries, and Rules</head><p>Sequences, Variable Substitutions and Transformations A variable is a name of the form ?String where String is a string of characters from {A…Z}. In the following definitions, we denote places where variables can appear with uppercase letters.</p><p>Let V be a set of variables, and let X̄, Ȳ1, . . . , Ȳn be sequences of variables from V. z̄ denotes a sequence of individuals. We consider sequences of length 1 or 2 only, if not indicated otherwise, and assume that (X̄) is to be read as (X) and (X̄, Ȳ) is to be read as (X, Y) etc. Furthermore, we assume that sequences are automatically flattened. A function as_set turns a sequence into a set in the obvious way.</p><p>A variable substitution σ = [X ← i, Y ← j, . . .] is a mapping from variables to individuals. The application of a variable substitution σ to a sequence of variables (X) or (X, Y) is defined as (σ(X)) or (σ(X), σ(Y)), respectively, with σ(X) = i and σ(Y) = j. In this case, a sequence of individuals is defined. If a substitution is applied to a variable X for which there exists no mapping X ← k in σ, then the result is undefined. A variable substitution for which all required mappings are defined is called admissible (w.r.t. the context).</p><p>Grounded Conjunctive Queries Let X̄, Ȳ1, . . . , Ȳn be sequences of variables, and let Q1, . . . , Qn denote concept or role names.</p><p>A query is defined by the following syntax.</p><formula xml:id="formula_3">{(X̄) | Q1(Ȳ1), . . . , Qn(Ȳn)}</formula><p>The sequence X̄ may be of arbitrary length, but all variables mentioned in X̄ must also appear in at least one of the</p><formula xml:id="formula_4">Ȳ1, · · · , Ȳn: as_set(X̄) ⊆ as_set(Ȳ1) ∪ · · · ∪ as_set(Ȳn). Informally speaking, Q1(Ȳ1), . . .
, Qn(Ȳn) defines a conjunction of so-called query atoms Qi(Ȳi).</formula><p>The list of variables to the left of the sign | is called the head, and the atoms to the right of | are called the query body. The variables in the head are called distinguished variables. They define the query result. The variables that appear only in the body are called non-distinguished variables and are existentially quantified.</p><p>Answering a query with respect to a knowledge base Σ means finding admissible variable substitutions σ such that</p><formula xml:id="formula_5">Σ |= {σ(Q1(Ȳ1)), . . . , σ(Qn(Ȳn))}.</formula><p>We say that a variable substitution σ = [X ← i, Y ← j, . . .] introduces bindings i, j, . . . for the variables X, Y, . . .. Given all possible variable substitutions σ, the result of a query is defined as {(σ(X̄))}. Note that the variable substitution σ is applied before checking whether</p><formula xml:id="formula_6">Σ |= {Q1(σ(Ȳ1)), . . . , Qn(σ(Ȳn))},</formula><p>i.e., the query is grounded first.</p><p>For the query {(?y) | Person(?x), hasParticipant(?y, ?x)} and the Abox Γ1 = {HighJump(ind1), Person(ind2), hasParticipant(ind1, ind2)}, the substitution [?x ← ind2, ?y ← ind1] allows for answering the query, and defines bindings for ?y and ?x.</p><p>A boolean query is a query with X̄ being of length zero. If for a boolean query there exists a variable substitution σ such that Σ |= {σ(Q1(Ȳ1)), . . . , σ(Qn(Ȳn))} holds, we say that the query is answered with true, otherwise the answer is false.</p><p>Later on, we will have to convert query atoms into Abox assertions. This is done with the function transform. The function transform applied to a set of query atoms {γ1, . . . , γn} is defined as {transform(γ1, σ), . . . , transform(γn, σ)} where transform(P(X̄), σ) := P(σ(X̄)).</p><p>Rules A rule r has the following form P(X̄) ← Q1(Ȳ1), . . . , Qn(Ȳn) where P, Q1, . . .
, Qn denote concept or role names, with the additional restriction (safety condition) that as_set(X̄) ⊆ as_set(Ȳ1) ∪</p><formula xml:id="formula_7">· · · ∪ as_set(Ȳn).</formula><p>Rules are used to derive new Abox assertions, and we say that a rule r is applied to an Abox A. The function call apply(Σ, P(X̄) ← Q1(Ȳ1), . . . , Qn(Ȳn), A) returns a set of Abox assertions {σ(P(X̄))} if there exists an admissible variable substitution σ such that the answer to the query</p><formula xml:id="formula_8">{() | Q1(σ(Ȳ1)), . . . , Qn(σ(Ȳn))}</formula><p>is true with respect to Σ ∪ A.<ref type="foot" target="#foot_0">1</ref> If no such σ can be found, the result of the call to apply(Σ, r, A) is the empty set. The application of a set of rules R = {r1, . . . , rn} to an Abox is defined as follows.</p><formula xml:id="formula_9">apply(Σ, R, A) = ⋃r∈R apply(Σ, r, A)</formula><p>The result of forward_chain(Σ, R, A) is defined to be ∅ if apply(Σ, R, A) ∪ A = A holds. Otherwise the result of forward_chain is determined by the recursive call apply(Σ, R, A) ∪ forward_chain(Σ, R, A ∪ apply(Σ, R, A)).</p><p>For some set of rules R, we extend the entailment relation by specifying that (T, A) |=R A0 iff (T, A ∪ forward_chain((T, ∅), R, A)) |= A0.</p></div>
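The apply/forward_chain fixpoint can be sketched in a few lines of Python if entailment w.r.t. Σ ∪ A is approximated by plain Abox membership (i.e., without any Tbox reasoning) and assertions are encoded as (predicate, arguments) tuples. All names are ours, and the enumeration of substitutions is deliberately naive:

```python
from itertools import product

def matches(body, abox):
    """Yield substitutions sigma grounding every body atom in the Abox."""
    constants = sorted({c for (_, args) in abox for c in args})
    variables = sorted({a for (_, args) in body for a in args if a.startswith("?")})
    for combo in product(constants, repeat=len(variables)):
        sigma = dict(zip(variables, combo))
        grounded = {(p, tuple(sigma.get(a, a) for a in args)) for (p, args) in body}
        if grounded <= abox:
            yield sigma

def apply_rule(rule, abox):
    """apply(Sigma, r, A), with entailment approximated by membership."""
    (head_pred, head_args), body = rule
    return {(head_pred, tuple(s.get(a, a) for a in head_args))
            for s in matches(body, abox)}

def forward_chain(rules, abox):
    """Least fixpoint of rule application; returns only the derived assertions."""
    derived = set(abox)
    while True:
        new = set().union(*(apply_rule(r, derived) for r in rules)) - derived
        if not new:
            return derived - set(abox)
        derived |= new
```

For example, a rule deriving hasAncestor from hasParent, plus a transitivity rule for hasAncestor, derives hasAncestor(a, c) from hasParent(a, b) and hasParent(b, c).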
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Probabilistic Knowledge Representation</head><p>The basic notion of probabilistic knowledge representation formalisms is the so-called random experiment. A random variable X is a function assigning a value to the result of a random experiment. The random experiment itself is not represented, so random variables are functions without arguments, which return different values at different points of time. The possible values of a random variable comprise the so-called domain of the random variable. In the sequel, we will use boolean random variables, whose values can be either 1 or 0 (true or false, respectively).</p><p>Let X̄ = {X1, ..., Xn} be the ordered set of all random variables of a random experiment. An event (denoted X̄ = x̄) is an assignment X1 = x1, ..., Xn = xn to all random variables. In case n = 1 we call the event simple, otherwise the event is called complex. A certain vector of values x̄ is referred to as a possible world. A possible world can be associated with a probability value, or probability for short. Hence, the notion of a possible world can be used as a synonym for an event, and depending on the context we use the former or the latter name. In case of an event with a boolean random variable X, we write x as an abbreviation for X = true and ¬x as an abbreviation for X = false.</p><p>Mappings of events to probabilities (or assignments of probabilities to events) are specified with so-called probability assertions of the following syntax: P(X̄ = x̄) = p, where X̄ is a vector of random variables, and p is a real value between 0 and 1 (it is assumed that the reader is familiar with Kolmogorov's axioms of probability). In the special case of a simple event (single random variable, n = 1) we write P(X = x) = p. The probability value p of an event is denoted as P(X̄ = x̄) (or P(X = x) in the simple case).
In its raw form, a set of probability assertions is called a probabilistic knowledge base (with signature X̄).</p><p>A mapping from the domain of a random variable X to probability values [0, 1] is called a distribution. For distributions we use the notation P(X). Distributions can be defined for (ordered) sets of random variables as well. In this case we use P(X1, . . . , Xn) as a denotation for a mapping to the n-dimensional cross product of [0, 1]. For specifying a distribution, probability assertions for all domain values must be specified, and the values p must sum up to 1. In case all random variables of a random experiment are involved, we speak of a (full) joint probability distribution (JPD); otherwise the expression is said to denote a marginal distribution (a projection of the n-dimensional space of probability values to a lower-dimensional space with m dimensions). The expression P(X1, . . . , Xm, Xm+1 = xm+1, . . . , Xl = xl) denotes an m-dimensional distribution with known values xm+1, . . . , xl. In slight misuse of notation, we sometimes write ē for these known values (ē stands for evidence). The fragment ē need not necessarily be written at the end of the parameter list of P.</p><p>A conditional probability for a set of random variables X1, ..., Xm is denoted with P(X1 = x1, ..., Xm = xm | ē) or, in distribution form, we write P(X1, ..., Xm | ē) (conditional probability distribution). This distribution can also be written as P(X̄, ē) / P(ē). For a probabilistic knowledge base, formal inference problems are defined. We restrict our attention to the two most convenient probabilistic inference problems: A conditional probability query is the computation of the joint distribution of a set of m random variables conditioned on ē and is denoted with</p><formula xml:id="formula_10">P_X̄(x1 ∧ ... ∧ xm | ē) = ?</formula><p>where vars(x1, . . . , xm) ∩ vars(ē) = ∅ and vars(x1, . . . , xm) ∪ vars(ē) ⊆ X̄, with vars specified in the obvious way. Note that xi indicates Xi = xi. In the following we give the distribution form of the above query:</p><formula xml:id="formula_11">P_X̄(X1, ..., Xm | ē) = ?</formula><p>If the set of random variables X̄ is known from the context, the subscript X̄ is often omitted. The Maximum A Posteriori (MAP) inference returns the most likely state of the query atoms given the evidence. Based on the MAP inference, the "most probable world" given the evidence is determined as a set of events. The MAP inference problem, given a distribution for a set of random variables X̄, is formalized as follows:</p><formula>MAP_X̄(ē) := ē ∪ argmax_x̄ P(x̄ | ē)<label>(1)</label></formula><p>where vars(x̄) ∩ vars(ē) = ∅ and vars(x̄) ∪ vars(ē) = X̄.</p><p>For both inference problems, conditional probability queries as well as the MAP problem, different kinds of algorithms exist, which possibly exploit additional assertions (such as, e.g., conditional independence assumptions in so-called Bayesian networks, or factored probability distribution specifications as in so-called Markov networks). In the next subsection, we focus on the latter formalism.</p></div>
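Both inference problems can be made concrete by brute-force enumeration over an explicitly represented JPD, here a list of (world, probability) pairs with worlds as dicts from variable names to booleans. The sketch is ours and purely illustrative; it is infeasible beyond a handful of random variables:

```python
def _agrees(world, partial):
    """True iff the world extends the partial assignment."""
    return all(world[v] == value for v, value in partial.items())

def conditional_query(jpd, query, evidence):
    """P(query | evidence); query and evidence are partial assignments."""
    p_e = sum(p for world, p in jpd if _agrees(world, evidence))
    p_qe = sum(p for world, p in jpd
               if _agrees(world, evidence) and _agrees(world, query))
    return p_qe / p_e

def most_probable_world(jpd, evidence):
    """MAP: the most probable world consistent with the evidence."""
    candidates = [(world, p) for world, p in jpd if _agrees(world, evidence)]
    return max(candidates, key=lambda wp: wp[1])[0]
```

For a two-variable JPD with P(x, y) = 0.4 and P(x, ¬y) = 0.2, the conditional query P(y | x) yields 0.4 / 0.6.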
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Markov Logic</head><p>The formalism of Markov logic <ref type="bibr" target="#b0">[Domingos and Richardson, 2007]</ref> provides a means to combine the expressivity of first-order logic with the formalism of Markov networks <ref type="bibr" target="#b2">[Pearl, 1988]</ref>. The Markov logic formalism uses first-order logic to define "templates" for constructing Markov networks. The basic notion for this is called a Markov logic network.</p><p>A Markov logic network MLN = (F_MLN, W_MLN) consists of an ordered multiset of first-order formulas F_MLN = {F1, ..., Fm} and an ordered multiset of real-number weights W_MLN = {w1, ..., wm}. The association of a formula with its weight is by position in the ordered sets. For a formula F ∈ F_MLN with associated weight w we also write wF (weighted formula). Thus, a Markov logic network can also be defined as a set of weighted formulas. Both views can be used interchangeably. As a notational convenience, for ordered sets we nevertheless sometimes write X, Y instead of X ∪ Y.</p><p>In contrast to standard first-order logics such as predicate logic, relational structures not satisfying a formula Fi are not ruled out as models. If a relational structure does not satisfy a formula associated with a large weight, it is just considered to be quite unlikely the "right" one.</p><p>Let C = {c1, ..., cm} be the set of all constants mentioned in F_MLN. A grounding of a formula Fi ∈ F_MLN is a substitution of all variables in the matrix of Fi with constants from C. From all groundings, the (finite) set of grounded atomic formulas (also referred to as ground atoms) can be obtained. Grounding corresponds to a domain closure assumption.
The motivation is to get rid of the quantifiers and to reduce inference problems to the propositional case.</p><p>Since a ground atom can either be true or false in an interpretation (or world), it can be considered as a boolean random variable X. Consequently, for each MLN with associated random variables X̄, there is a set of possible worlds x̄. In this view, sets of ground atoms are sometimes used to denote worlds. In this context, negated ground atoms correspond to false and non-negated ones to true. We denote worlds using a sequence of (possibly negated) atoms.</p><p>When a world x̄ violates a weighted formula (does not satisfy the formula), the idea is to ensure that this world is less probable rather than impossible, as in predicate logic. Note that weights do not directly correspond to probabilities (see <ref type="bibr" target="#b0">[Domingos and Richardson, 2007]</ref> for details).</p><p>For each possible world of a Markov logic network MLN = (F_MLN, W_MLN) there is a probability for its occurrence. Probabilistic knowledge is required to obtain this value. As usual, probabilistic knowledge is specified using a probability distribution. In the formalism of Markov networks, the full joint probability distribution of a Markov logic network MLN is specified in symbolic form as P_MLN(X̄) = (P(X̄ = x̄1), . . . , P(X̄ = x̄n)), for every possible x̄i ∈ {true, false}^n, n = |X̄|, and P(X̄ = x̄) := log_lin_MLN(x̄) (for a motivation of the log-linear form, see, e.g., <ref type="bibr" target="#b0">[Domingos and Richardson, 2007]</ref>), with log_lin being defined as</p><formula xml:id="formula_13">log_lin_MLN(x̄) = (1/Z) exp(∑_{i=1}^{|F_MLN|} w_i n_i(x̄))</formula><p>According to this definition, the probability of a possible world x̄ is determined by the exponential of the sum of the numbers of true groundings (n_i) of the formulas F_i ∈ F_MLN in x̄, multiplied with their corresponding weights w_i ∈ W_MLN, and finally normalized with</p><formula xml:id="formula_14">Z = ∑_{x̄ ∈ X̄} exp(∑_{i=1}^{|F_MLN|} w_i n_i(x̄)),<label>(2)</label></formula><p>the sum of the unnormalized weights of all possible worlds. Thus, rather than specifying the full joint distribution directly in symbolic form as we have discussed before, in the Markov logic formalism the probabilistic knowledge is specified implicitly by the weights associated with formulas. Determining these formulas and their weights in a practical context is anything but obvious, such that machine learning techniques are usually employed for knowledge acquisition.</p><p>A conditional probability query for a Markov logic network MLN is the computation of the joint distribution of a set of m events involving random variables conditioned on ē and is denoted with</p><formula xml:id="formula_15">P_MLN(x1 ∧ . . . ∧ xm | ē)</formula><p>The semantics of this query is given as:</p><formula xml:id="formula_16">P_rand_vars(MLN)(x1 ∧ . . . ∧ xm | ē) w.r.t. P_MLN(rand_vars(MLN))</formula><p>where vars(x1, . . . , xm) ∩ vars(ē) = ∅ and vars(x1, . . . , xm) ⊆ rand_vars(MLN), and the function rand_vars is defined as follows: rand_vars((F, W)) := {P(C̄) | P(C̄) is mentioned in some grounded formula F ∈ F}. Grounding is accomplished w.r.t. all constants that appear in F.
An algorithm for answering queries of the above form is investigated in <ref type="bibr" target="#b1">[Gries and Möller, 2010]</ref>.</p><p>In the case of Markov logic, the definition of the MAP problem given in (1) can be rewritten as follows. The conditional probability term P(x̄ | ē) is replaced with the Markovian formula:</p><formula xml:id="formula_17">MAP_MLN(ē) := ē ∪ argmax_x̄ (1/Z_ē) exp(∑_i w_i n_i(x̄, ē))<label>(3)</label></formula><p>Thus, for describing the most probable world, MAP returns a set of events, one for each random variable used in the Markov network derived from MLN. In the above equation, x̄ denotes the hidden variables, and Z_ē denotes the normalization constant, which indicates that normalization is performed over the possible worlds consistent with the evidence ē. In the next equation, Z_ē is removed since it is constant and does not affect the argmax operation. Similarly, in order to optimize the MAP computation, the exp function is left out since it is monotonic and only its argument has to be maximized:</p><formula xml:id="formula_18">MAP_MLN(ē) := ē ∪ argmax_x̄ ∑_i w_i n_i(x̄, ē)<label>(4)</label></formula><p>The above equation shows that the MAP problem in the Markov logic formalism is reduced to the problem of maximizing the sum of the weights of the satisfied clauses.</p><p>Since MAP determination in Markov networks is an NP-hard problem <ref type="bibr" target="#b0">[Domingos and Richardson, 2007]</ref>, it is performed by exact and approximate solvers. The most commonly used approximate solver is the MaxWalkSAT algorithm, a weighted variant of the WalkSAT local-search satisfiability solver. The MaxWalkSAT algorithm attempts to satisfy clauses with positive weights and to keep clauses with negative weights unsatisfied.</p><p>It has to be mentioned that there might be several worlds with the same maximal probability. At this step, only one of them is chosen non-deterministically.</p></div>
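The log-linear distribution and the simplified MAP computation of Equation (4) can be illustrated for a toy network in which each formula has a single grounding (so each n_i is 0 or 1) and formulas are encoded as Python predicates over a world. This is our own brute-force enumeration for illustration, not MaxWalkSAT:

```python
import math
from itertools import product

def worlds(atoms):
    """Enumerate all truth assignments over the given ground atoms."""
    for bits in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, bits))

def score(world, weighted_formulas):
    """Sum of the weights of the formulas satisfied in the world (cf. Eq. (4))."""
    return sum(w for w, holds in weighted_formulas if holds(world))

def probability(world, atoms, weighted_formulas):
    """Log-linear probability exp(score)/Z (cf. Eq. (2), one grounding each)."""
    z = sum(math.exp(score(w, weighted_formulas)) for w in worlds(atoms))
    return math.exp(score(world, weighted_formulas)) / z

def map_world(atoms, weighted_formulas, evidence):
    """argmax over the worlds consistent with the evidence (cf. Eq. (4))."""
    candidates = (w for w in worlds(atoms)
                  if all(w[a] == v for a, v in evidence.items()))
    return max(candidates, key=lambda w: score(w, weighted_formulas))
```

With a single weighted implication smokes → cancer (weight 2.0) and the evidence smokes = true, the MAP world sets cancer to true, since that world satisfies the formula and collects its weight.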
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5">Combining Markov Logic and Description Logics</head><p>Since ALHf− is a fragment of first-order logic, its extension to the Markovian style of formalisms is specified in a similar way as for predicate logic in the section before. The formulas in Markov logic correspond to Tbox axioms and Abox assertions. Weights in Markov description logics are associated with axioms and assertions.</p><p>Groundings of Tbox axioms are defined analogously to the previous case.<ref type="foot" target="#foot_1">2</ref> Abox assertions do not contain variables and are already grounded. Note that since an ALHf− Abox represents a relational structure of domain objects, it can be directly seen as a possible world itself if assertions not contained in the Abox are assumed to be false.</p><p>For appropriately representing domain knowledge in CASAM, weights are possibly used only for a subset of the axioms of the domain ontology. The remaining axioms can be assumed to be strict, i.e., assumed to be true in any case. A consequence of specifying strict axioms is that lots of possible worlds x̄ can be ruled out (i.e., they will have probability 0 by definition).</p><p>A Markov DL knowledge base Σ_M is a tuple (T, A), where T is comprised of a set T_s of strict axioms and a set T_w of weighted axioms, and A is comprised of a set A_s of strict assertions and a set A_w of weighted assertions. Referring to axioms, a proposal for CASAM is to consider strictness for the domain ontology patterns (I)-(IV):</p><formula xml:id="formula_19">(I) subsumption: A1 ⊑ A2, R1 ⊑ R2 (II) disjointness: A1 ⊑ ¬A2 (III) domain and range restrictions: ∃R.⊤ ⊑ A, ⊤ ⊑ ∀R.A (IV) functional roles: (≤ 1 R)</formula><p>The main justification for treating these axioms as strict is that subsumption axioms, disjointness axioms, domain and range restrictions, as well as functional role axioms (in combination with the UNA), are intended to be true in any case, such that there is no need to assign large weights to them.</p><p>In <ref type="bibr" target="#b1">[Gries and Möller, 2010]</ref> we show that Gibbs sampling with deterministic dependencies specified in an appropriate fragment remains correct, i.e., probability estimates approximate the correct probabilities. We have investigated a Gibbs sampling method incorporating deterministic dependencies and conclude that this incorporation can speed up Gibbs sampling significantly. For details see <ref type="bibr" target="#b1">[Gries and Möller, 2010]</ref>. The advantage of this probabilistic approach is that initial ontology engineering is done as usual, with standard reasoning support, and with the possibility to add weighted axioms and weighted assertions on top of the strict foundation. Since lots of possible worlds do not have to be considered because their probability is known to be 0, probabilistic reasoning will be significantly faster.</p></div>
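The pruning effect of strict axioms can be made concrete with a toy enumeration in which each grounded strict axiom is encoded as a predicate over a world; worlds violating any strict axiom receive probability 0 and are discarded before any weighted reasoning takes place. The encoding and names below are ours, not CASAM's:

```python
from itertools import product

def all_worlds(atoms):
    """All truth assignments over the given ground atoms."""
    return [dict(zip(atoms, bits))
            for bits in product([False, True], repeat=len(atoms))]

def consistent_worlds(atoms, strict_axioms):
    """Worlds surviving every strict axiom; the rest have probability 0."""
    return [w for w in all_worlds(atoms)
            if all(axiom(w) for axiom in strict_axioms)]
```

With the two ground atoms Car(a) and Person(a) and the grounded disjointness axiom Car ⊑ ¬Person, one of the four possible worlds is pruned, and only the remaining three need to be considered by any sampling procedure.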
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Probabilistic Interpretation Engine</head><p>In this chapter, the abduction procedure is defined by the abduction algorithm CAE. Additionally, a media interpretation agent is described by defining a probabilistic interpretation algorithm Interpret.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Computing Explanations</head><p>In general, abduction is formalized as Σ ∪ ∆ |=R Γ, where the background knowledge (Σ), the rules (R), and the observations (Γ) are given, and explanations (∆) are to be computed. In terms of DLs, ∆ and Γ are Aboxes and Σ is a pair of Tbox and Abox.</p><p>Abox abduction is implemented as a non-standard retrieval inference service in DLs. In contrast to standard retrieval inference services, where answers are found by exploiting the ontology, Abox abduction has the task of acquiring what should be added to the knowledge base in order to answer a query. Therefore, the result of Abox abduction is a set of hypothesized Abox assertions. To achieve this, the space of abducibles has to be defined beforehand, and we do this in terms of rules.</p><p>We assume that a set of rules R as defined above (see Section 2.2) is specified, and define a non-deterministic function compute_explanation as follows.</p><p>compute_explanation(Σ, R, A, P(z̄)) = transform(Φ, σ) if there exists a rule</p><formula xml:id="formula_20">r = P(X̄) ← Q1(Ȳ1), . . . , Qn(Ȳn) ∈ R</formula><p>that is applied to the Abox A such that a minimal set of query atoms Φ and an admissible variable substitution σ with σ(X̄) = z̄ can be found, and the query Q := {() | expand(P(z̄), r, R, σ) \ Φ} is answered with true.</p><p>If no such rule r exists in R, it holds that compute_explanation(Σ, R, A, P(z̄)) = ∅.</p><p>The goal of the function compute_explanation is to determine what must be added (Φ) such that the entailment Σ ∪ A ∪ Φ |=R P(z̄) holds. Hence, for compute_explanation, abductive reasoning is used. The set of query atoms Φ defines what must be hypothesized in order to answer the query Q with true, such that Φ ⊆ expand(P(X̄), r, R, σ) holds.
The definition of compute explanation is non-deterministic due to several possible choices for Φ.</p><p>The function application expand(P (Z), P (X)</p><formula xml:id="formula_21">← Q 1 (Y 1 ), . . . , Q n (Y n ), R</formula><p>) is also defined in a nondeterministic way as</p><formula xml:id="formula_22">expand (Q 1 (σ (Y 1 )), R, σ) ∪ • • • ∪ expand (Q n (σ (Y n )), R, σ)</formula><p>with expand (P (Z), R, σ) being expand(P (σ (z)), r, R, σ ) if there exists a rule r = P (X) ← . . . ∈ R and P (X) otherwise. The variable substitution σ is an extension of σ such that:</p><formula xml:id="formula_23">σ = [X 1 ← z 1 , X 2 ← z 2 , . . .]<label>(5)</label></formula><p>The above substitution maps each free variable to an individual if a mapping is not already defined. This means the free variables in the body of each rule are mapped to individuals with unique IDs.</p><p>We say the set of rules is backward-chained, and since there might be multiple rules in R, backward-chaining is non-deterministic as well. Thus, multiple explanations are generated.<ref type="foot" target="#foot_2">3</ref> </p></div>
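The backward-chaining step of compute explanation can be illustrated with a minimal Python sketch, assuming a toy term representation (a predicate name plus a tuple of arguments). The function name `abduce` and the rule encoding are illustrative, not the CASAM implementation, and this sketch hypothesizes all body atoms rather than computing a minimal Φ relative to the Abox:

```python
from itertools import count

# Fresh-individual counter, mimicking the unique IDs Ind42, Ind43, ... (Eq. 5).
_fresh = count(42)

def abduce(observation, rules):
    """Given an observation P(z) and a rule P(X) <- Q1(Y1), ..., Qn(Yn),
    hypothesize the body atoms, introducing fresh individuals for the
    free body variables. Returns the empty set if no rule head matches."""
    pred, args = observation
    for head, body in rules:
        head_pred, head_vars = head
        if head_pred != pred or len(head_vars) != len(args):
            continue
        # sigma maps the head variables onto the observed individuals
        sigma = dict(zip(head_vars, args))
        hypothesized = set()
        for body_pred, body_vars in body:
            for v in body_vars:
                if v not in sigma:
                    sigma[v] = f"Ind{next(_fresh)}"  # fresh individual
            hypothesized.add((body_pred, tuple(sigma[v] for v in body_vars)))
        return hypothesized
    return set()  # no applicable rule: compute_explanation yields the empty set

# The Causes rule used later in Section 4:
# Causes(x,y) <- CarEntry(z), HasObject(z,x), HasEffect(z,y), Car(x), DoorSlam(y)
rules = [(("Causes", ("x", "y")),
          [("CarEntry", ("z",)), ("HasObject", ("z", "x")),
           ("HasEffect", ("z", "y")), ("Car", ("x",)), ("DoorSlam", ("y",))])]

delta = abduce(("Causes", ("C1", "DS1")), rules)
```

Applied to the observation Causes(C1, DS1), the sketch hypothesizes CarEntry, HasObject and HasEffect atoms over a fresh individual Ind42, mirroring ∆ 2 of the Section 4 example (plus the Car and DoorSlam atoms that are already asserted there).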
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">The Abduction Procedure</head><p>In the following, we devise an abstract computational engine for "explaining" Abox assertions in terms of a given set of rules. Explanation of Abox assertions w.r.t. a set of rules is meant in the sense that, using the rules, high-level explanations are constructed such that the Abox assertions are entailed. The explanation of an Abox is again an Abox. In our setting, the output Abox represents the results of the content interpretation process. The presentation is slightly extended compared to the one in <ref type="bibr" target="#b0">[Castano et al., 2008]</ref>. Let the agenda A be a set of Aboxes, and let Γ be an Abox of observations whose assertions are to be explained. The goal of the explanation process is to use a set of rules R to derive "explanations" for elements in Γ . The explanation algorithm implemented in the CASAM abduction engine works on a set of Aboxes I. The complete explanation process is implemented by the CAE function, where assign level(l, A) is defined by a lambda calculus term as follows:</p><formula xml:id="formula_24">Function CAE(Ω, Ξ, Σ, R</formula><formula xml:id="formula_25">assign level(l, A) = map(λ(A) • assign level(l, A), A)<label>(6)</label></formula><p>assign level(l, A) takes as input a superscript l and an agenda A.</p><p>In the following, the variant of assign level(l, A) operating on a single Abox A is defined; it superscripts each assertion α of the Abox A with l if the assertion α does not already have a superscript:</p><formula xml:id="formula_26">assign level(l, A) = {α l | α ∈ A, α ≠ β i , i ∈ N} (7)</formula><p>Note that l is a global variable; its starting value is zero, and it is incremented in the CAE function.</p><p>The map<ref type="foot" target="#foot_3">4</ref> function is defined as follows:</p><formula xml:id="formula_27">map(f, X) = ∪ x∈X {f (x)} (8)</formula><p>It takes as parameters a function f and a set X and returns a set consisting of the values of f applied to 
every element x of X. The CAE function applies the strategy function Ω in order to decide which assertion to explain, uses a termination function Ξ in order to check whether to terminate due to resource constraints, and uses a scoring function S to evaluate an explanation. The function Ω for the explanation strategy and the function Ξ for the termination condition are used as oracles and must be defined in an application-specific way. In our multimedia interpretation scenario we assume that the function requires fiat(α l ) is defined as follows:</p><formula xml:id="formula_28">requires fiat(α l ) = true if l = 0, false if l ≠ 0</formula><p>The function explanation step is defined as follows.</p><p>explanation step(Σ, R, S, A, α): ∪ ∆∈compute all explanations(Σ,R,S,A,α) consistent completed explanations(Σ, R, A, ∆).</p><p>We need two additional auxiliary functions.</p><p>consistent completed explanations(Σ, R, A, ∆):</p><formula xml:id="formula_29">{∆ | ∆ = ∆ ∪ A ∪ forward chain(Σ, R, ∆ ∪ A), consistent Σ (∆ )} compute all explanations(Σ, R, S, A, α): maximize(Σ, R, A, {∆ | ∆ = compute explanation(Σ, R, α), consistent Σ∪A (∆)}, S).</formula><p>The function maximize(Σ, R, A, ∆s, S) selects those explanations ∆ ∈ ∆s for which the score S(Σ, R, A, ∆) is maximal, i.e., there exists no other ∆ ∈ ∆s such that S(Σ, R, A, ∆ ) &gt; S(Σ, R, A, ∆). The function consistent (T ,A) (A ) determines whether the Abox A ∪ A has a model which is also a model of the Tbox T .</p><p>Note the call to the nondeterministic function compute explanation. It may return different values, all of which are collected.</p><p>In the next section, we explain how probabilistic knowledge is used to (i) formalize the effect of the "explanation", and (ii) formalize the scoring function S used in the CAE algorithm explained above. In addition, it is shown how the termination condition (represented with the parameter Ξ in the above procedure) can be defined based on probabilistic conditions.</p></div>
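The interplay of consistency filtering and score maximization in explanation step can be sketched in a few lines of Python; here `consistent` and `score` are toy stand-ins for the Tbox consistency check and the scoring function S, and all names are illustrative:

```python
def maximize(candidates, score):
    """Keep the candidates whose score is maximal (function `maximize`)."""
    if not candidates:
        return []
    best = max(score(d) for d in candidates)
    return [d for d in candidates if score(d) == best]

def explanation_step(candidates, consistent, score):
    """Filter candidate explanations for consistency, then keep only the
    score-maximal ones, as in compute_all_explanations."""
    return maximize([d for d in candidates if consistent(d)], score)

# Toy run: three candidate explanation sets with made-up scores;
# the candidate {"c"} is declared inconsistent and dropped first.
cands = [{"a"}, {"a", "b"}, {"c"}]
scores = {frozenset({"a"}): 0.7, frozenset({"a", "b"}): 0.9,
          frozenset({"c"}): 0.9}
result = explanation_step(cands,
                          consistent=lambda d: "c" not in d,
                          score=lambda d: scores[frozenset(d)])
```

Note that the inconsistent candidate is eliminated before maximization, so a high score cannot rescue an explanation that contradicts the Tbox.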
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">The Interpretation Procedure</head><p>This section discusses the interpretation procedure by stating the interpretation problem and presenting a solution to it. The solution is presented as a probabilistic interpretation algorithm which calls the CAE function described in the previous section. In the given algorithm, a termination function and a scoring function are defined. The termination function determines whether the interpretation process can be stopped: at some point it makes no sense to continue, because no significant changes can be seen in the results. The scoring function defined in this section assigns scores to interpretation Aboxes.</p><p>Problem The objective of the interpretation component is the generation of interpretations for the observations. An interpretation is an Abox which contains high-level concept assertions. Since agents are a standard problem-solving metaphor in artificial intelligence, the same problem is formalized below from the perspective of an agent: Consider an intelligent agent and some percepts in an environment, where the percepts are the analysis results of KDMA and HCI. The objective of this agent is to find explanations for the existence of the percepts. The question is how the interpretation Aboxes are determined and how long the interpretation process must be performed by the agent. The functionality of this Media Interpretation Agent is presented in the M I Agent algorithm in Section 3.4. Solution In the following, a probabilistic interpretation algorithm is presented which solves the problem stated above. The solution also offers a new perspective on the interpretation process and on the reason why it is performed. 
Assume that the media interpretation component receives a weighted Abox A from KDMA and HCI which contains observations. In the following, the operation P (A, A , R, WR, T ) applied in the algorithm is explained:</p><p>The P (A, A , R, WR, T ) function determines the probability of the Abox A with respect to the Abox A , a set of rules R, a set of weighted rules WR, and the Tbox T , where A ⊆ A . Note that R is a set of forward and backward chaining rules. The probability determination is performed based on the Markov logic formalism as follows:</p><formula xml:id="formula_30">P (A, A , R, WR, T ) = P M LN (A,A ,R,WR,T ) ( Q(A) | e(A ))<label>(9)</label></formula><p>Q(A) denotes an event composed of all assertions which appear in the Abox A. Assume that Abox A contains n assertions α 1 , . . . , α n . Consequently, the event for the Abox A is defined as follows:</p><formula xml:id="formula_31">Q(A) = a 1 = true ∧ . . . ∧ a n = true (10)</formula><p>where a 1 , . . . , a n denote the random variables of the MLN corresponding to the assertions α 1 , . . . , α n . Assume that an Abox A contains m assertions α 1 , . . . , α m . Then, the evidence vector e(A) is defined as follows.</p><p>e(A) = a 1 = true, . . . , a m = true (11)</p><p>In order to answer the query</p><formula xml:id="formula_32">P M LN (A,A ,R,WR,T ) ( Q(A) | e(A )) the function M LN (A, A , R, WR, T )</formula><p>is called. This function builds the Markov logic network M LN based on the Aboxes A and A , the rules R, the weighted rules WR and the Tbox T , which is a time-consuming process. Note that, in theory, the above function is called not just once but several times. In a practical system, this might be handled more efficiently. 
This function returns a Markov logic network (F M LN , W M LN ), which we define here as follows using a tuple notation (A′ denotes the second Abox argument):</p><formula xml:id="formula_33">M LN (A, A′, R, WR, T ) = {(α, w)} if w, α ∈ A; {(α, ∞)} if α ∈ A; {(α, w)} if w, α ∈ A′; {(α, ∞)} if α ∈ A′; {(α, ∞)} if α ∈ R; {(α, w)} if w, α ∈ WR; {(α, ∞)} if α ∈ T</formula><p>In the following, the interpretation algorithm Interpret is presented:</p><p>Function Interpret(A, CurrentI, Γ , T , FR, BR, WR, ε) Input: an agenda A, a current interpretation Abox CurrentI, an Abox of observations Γ , a Tbox T , a set of forward chaining rules FR, a set of backward chaining rules BR, a set of weighted rules WR, and the desired precision ε of the results Output: an agenda A , a new interpretation Abox N ewI, and Abox differences for additions ∆ 1 and omissions ∆ 2 i := 0 ;</p><formula xml:id="formula_34">p 0 := P (Γ, Γ, R, WR, T ) ; Ξ := λ(A) • i := i + 1; p i := max A∈A P (Γ, A ∪ A 0 , R, WR, T ); return | p i − p i−1 | &lt; ε i ; Σ := (T , ∅); R := FR ∪ BR; S := λ((T , A 0 ), R, A, ∆) • P (Γ, A ∪ A 0 ∪ ∆, R, WR, T ); A := CAE(Ω, Ξ, Σ, R, S, A); N ewI = argmax A∈A (P (Γ, A, R, WR, T )); ∆ 1 = AboxDiff (N ewI, CurrentI); // additions ∆ 2 = AboxDiff (CurrentI, N ewI); // omissions return (A , N ewI, ∆ 1 , ∆ 2 );</formula><p>In the above algorithm, the termination function Ξ and the scoring function S are defined by lambda calculus terms. The termination condition Ξ of the algorithm is that no significant changes can be seen in the successive probabilities p i and p i−1 (scores) of the interpretation Aboxes generated in the two successive levels i − 1 and i. In this case, the current interpretation Abox CurrentI is preferred to the new interpretation Abox N ewI. In the next step, the CAE function is called, which returns the agenda A . Afterwards, the interpretation Abox NewI with the maximum score among the Aboxes A of A is selected. 
Additionally, the Abox differences ∆ 1 and ∆ 2 , respectively for additions and omissions, between the interpretation Aboxes CurrentI and N ewI are calculated. In the following, the strategy function Ω is defined, which is one of the parameters of the CAE function:</p><p>Function Ω(I) Input: a set of interpretation Aboxes I Output: an Abox A and a fiat assertion α</p><formula xml:id="formula_35">A := {A ∈ I | ¬∃A′ ∈ I, A′ ≠ A : ∃α l′ ∈ A′ : ∀α l ∈ A : l′ &lt; l} ; A := random select(A); min α s = {α l ∈ A | ¬∃α l′ ∈ A : l′ &lt; l} ;</formula><p>return (A, random select({min α s }));</p><p>In the above strategy function Ω, A is the set of Aboxes from I whose assertion superscripts are minimal. In the next step, an Abox A from A is randomly selected. Afterwards, the set min α s is determined, which contains the assertions α from A whose superscripts are minimal. These are the assertions which require explanations. The strategy function returns as output an Abox A and an assertion α which requires explanation.</p></div>
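The strategy function Ω can be sketched as follows, assuming Aboxes are represented as sets of (assertion, level) pairs and that ties are broken uniformly at random (an assumption; the report leaves random select unspecified):

```python
import random

def omega(interpretations, rng=random):
    """Sketch of the strategy function Omega: pick an Abox whose minimum
    assertion superscript is smallest, then a minimum-level (fiat)
    assertion from it. Representation and tie-breaking are illustrative."""
    # smallest superscript occurring anywhere on the agenda
    min_level = min(l for abox in interpretations for _, l in abox)
    # Aboxes that contain an assertion at that minimum level
    candidates = [a for a in interpretations
                  if any(l == min_level for _, l in a)]
    abox = rng.choice(candidates)
    # the fiat assertions: minimum-level assertions of the chosen Abox
    fiats = [alpha for alpha, l in abox if l == min_level]
    return abox, rng.choice(fiats)

# Toy agenda: only the first Abox contains a level-0 (unexplained) assertion.
interps = [{("Car(C1)", 0), ("CarEntry(Ind42)", 1)},
           {("DoorSlam(DS1)", 1)}]
abox, alpha = omega(interps)
```

In this toy run only one Abox qualifies and it contains a single level-0 assertion, so the choice is deterministic despite the random tie-breaking.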
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">The Media Interpretation Agent</head><p>In the following, the M I Agent function is presented which calls the Interpret function:</p><p>Function MI Agent(Q, P artners, Die, (T , A 0 ), FR, BR, WR, ε) Input: a queue of percept results Q, a set of partners P artners, a function Die for the termination process, a background knowledge set (T , A 0 ), a set of forward chaining rules FR, a set of backward chaining rules BR, a set of weighted rules WR, and the desired precision ε of the results Output: -</p><formula xml:id="formula_36">CurrentI = ∅; A = {∅}; repeat Γ := extractObservations(Q); W := M AP (Γ, WR, T ) ; Γ := Select(W, Γ ); A := filter (λ(A)•consistent Σ (A), map(λ(A)•Γ ∪A∪A 0 ∪forward chain(Σ, FR, Γ ∪A∪A 0 ), {select(M AP (Γ ∪ A ∪ A 0 , WR, T ), Γ ∪ A ∪ A 0 ) | A ∈ A })); (A , N ewI, ∆ 1 , ∆ 2 ) := Interpret(A , CurrentI, Γ , T , FR, BR, WR ∪ Γ, ε); CurrentI := N ewI; Communicate(∆ 1 , ∆ 2 , P artners); A := manage agenda(A ); until Die() ;</formula><p>where the filter function is defined as follows:</p><formula xml:id="formula_37">filter (f, X) = ∪ x∈X {x} if f (x) = true, ∅ else<label>(12)</label></formula><p>The filter function takes as parameters a function f and a set X and returns the set of those elements x of X for which f (x) is true.</p><p>In the M I Agent function, the current interpretation CurrentI and the agenda A are initialized to the empty set. Since the agent operates incrementally, its behaviour is defined by a repeat − until loop. The percept results Γ are sent by KDMA and HCI to the queue Q. In order to take the observations Γ from the queue Q, the M I Agent calls the extractObservations function.</p><p>The M AP (Γ ∪ A, WR, T ) function determines the most probable world of the observations Γ with respect to a set of weighted rules WR and the Tbox T . This function actually performs the MAP process mentioned in Chapter 2. 
It returns a vector W which consists of zeros and ones assigned to the ground atoms of the considered world. The assertions with assigned zeros and ones are called negative and positive assertions, respectively.</p><p>The Select(W, Γ ) function selects from the input Abox Γ the assertions marked positive in the bit vector W . The selected positive assertions which require explanations are also known as fiat assertions. This operation returns as output an Abox Γ with the following characteristic: Γ ⊆ Γ .</p><p>In the next step, a set of forward chaining rules FR is applied to all the Aboxes of A . The assertions generated in this process are added to the Abox A. In the next step, only the consistent Aboxes are selected; the inconsistent Aboxes are not considered in the following steps.</p><p>In the next step, the Interpret function is called to determine the new agenda A , the new interpretation N ewI and the Abox differences ∆ 1 and ∆ 2 for additions and omissions between CurrentI and N ewI. Afterwards, CurrentI is set to N ewI and the M I Agent function communicates the Abox differences ∆ 1 and ∆ 2 to the partners. Additionally, the Tbox T , the set of forward chaining rules FR, the set of backward chaining rules BR, and the set of weighted rules WR can be learnt by the Learn function. The termination condition of the M I Agent function is that the Die() function returns true.</p><p>Note that the M I Agent waits at the function call extractObservations(Q) if Q = ∅.</p><p>After presenting the above algorithms, the previously unanswered questions can be discussed. A reason for performing the interpretation process and explaining the fiat assertions is that the probability P (A, A , R, WR, T ) increases through the interpretation process. In other words, by explaining the observations, the agent's belief in the percepts increases. 
This shows a new perspective on why the interpretation process is performed.</p><p>The answer to the question of whether there is a measure for stopping the interpretation process is indeed positive. This is expressed by | p i − p i−1 | &lt; ε i , which is the termination condition Ξ of the algorithm. The reason for selecting ε i and not ε as the upper bound in the termination condition is to suppress oscillating behaviour of the results. In other words, the precision interval is tightened step by step during the interpretation process. The function manage agenda is explained in Section 6. Before that, we discuss an example for interpreting a single video shot in Section 4, and a scene in Section 5.</p></div>
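The Select and filter operations used by M I Agent can be sketched as follows, with assertions represented as plain strings; the bit-vector convention (1 = positive assertion) follows the example of Section 4:

```python
def select(w, gamma):
    """Select(W, Gamma): keep the assertions of gamma marked positive
    (bit = 1) in the MAP bit vector w."""
    return [alpha for bit, alpha in zip(w, gamma) if bit == 1]

def filter_(f, xs):
    """The filter function of Eq. (12): the elements x of xs with f(x) true."""
    return {x for x in xs if f(x)}

# The observation Abox of Section 4: EngineSound gets bit 0 and is dropped.
gamma = ["Car(C1)", "DoorSlam(DS1)", "EngineSound(ES1)", "Causes(C1,DS1)"]
w = [1, 1, 0, 1]
gamma_pos = select(w, gamma)
```

On this input, `select` keeps exactly the three assertions of the Abox A 0 in Section 4, and `filter_` illustrates the set semantics of Eq. (12).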
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Video Shot Interpretation</head><p>One of the main innovations introduced in the previous section, namely the introduction of a probabilistic preference measure to control the space of possible interpretations, is exemplified here using examples inspired by the environmental domain used in the project CASAM.</p><p>We have to mention that this example is not constructed to show the possible branchings through the interpretation process. The purpose of this example is to show how the probabilities of the most probable world of observations P (A 0 , A, R, WR, T ) behave during the interpretation process.</p><p>At the beginning of this example, the signature of the knowledge base is presented. The set of all concept names CN is divided into two disjoint sets Events and PhysicalThings such that CN = Events ∪ PhysicalThings where these two sets are defined as follows: Events = {CarEntry, EnvConference, EnvP rot, HealthP rot} PhysicalThings = {Car, DoorSlam, Building, Environment, Agency} EnvConference, EnvP rot and HealthP rot denote environmental conference, environmental protection and health protection, respectively.</p><p>The set of role names RN is defined as follows:</p><formula xml:id="formula_38">RN = {Causes, OccursAt, HasAgency, HasT opic, HasSubject, HasObject, HasEffect, HasSubEvent, HasLocation}</formula><p>The set of backward chaining rules BR is defined as follows: BR = {Causes(x, y) ← CarEntry(z), HasObject(z, x), HasEffect(z, y), Car(x), DoorSlam(y), OccursAt(x, y) ← EnvConference(z), HasSubEvent(z, x), HasLocation(z, y), CarEntry(x), Building(y), HasT opic(x, y) ← EnvP rot(z), HasSubEvent(z, x), HasObject(z, y), EnvConference(x), Environment(y), HasAgency(x, y) ← HealthP rot(z), HasObject(z, x), HasSubject(z, y), EnvP rot(x), Agency(y)}</p><p>In the following, a set of weighted rules WR is defined where all rules have the same high weight equal to 5: WR = {5 ∀x, y, z CarEntry(z)∧HasObject(z, x)∧HasEffect(z, y) → Car(x)∧DoorSlam(y)∧Causes(x, y), 5 ∀x, y, z EnvConference(z)∧HasSubEvent(z, x)∧HasLocation(z, y) → CarEntry(x)∧Building(y)∧OccursAt(x, y), 5 ∀x, y, z 
EnvP rot(z) ∧ HasSubEvent(z, x) ∧ HasObject(z, y) → EnvConference(x) ∧ Environment(y) ∧ HasT opic(x, y), 5 ∀x, y, z HealthP rot(z)∧HasObject(z, x)∧HasSubject(z, y) → EnvP rot(x)∧Agency(y)∧HasAgency(x, y)} Note that the weighted rules WR and their weights can be learnt by the machine learning component. The initial value selected for ε in this example is 0.05. In the following, ∆ 1 and ∆ 2 denote respectively the set of assertions generated by a forward chaining rule and the set of assertions hypothesized by a backward chaining rule at each interpretation level.</p><p>Let us assume that the media interpretation agent receives the following weighted Abox A: A = {1.3 Car(C 1 ), 1.2 DoorSlam(DS 1 ), −0.3 EngineSound(ES 1 ), Causes(C 1 , DS 1 )} The first operation applied to A is the M AP function, which returns the bit vector W = 1, 1, 0, 1 . This vector is composed of positive and negative events (bits). By applying the Select function to W and the input Abox A, the assertions from the input Abox A that correspond to the positive events in W are selected. Additionally, the weights assigned to the positive assertions are also taken from the input Abox A. In the following, Abox A 0 , which contains the positive assertions, is depicted:</p><p>A 0 = {1.3 Car(C 1 ), 1.2 DoorSlam(DS 1 ), Causes(C 1 , DS 1 )} At this step, p 0 = P (A 0 , A 0 , R, WR, T ) = 0.755. Since no appropriate forward chaining rule from FR is applicable to Abox A 0 , ∆ 1 = ∅ and as a result A 0 = A 0 ∪ ∅. The next step is the application of the backward chain function, where the following backward chaining rule from BR can be applied to Abox A 0 :</p><p>Causes(x, y) ← CarEntry(z), HasObject(z, x), HasEffect(z, y), Car(x), DoorSlam(y) Consequently, by applying the above rule the following set of assertions is hypothesized: ∆ 2 = {CarEntry(Ind 42 ), HasObject(Ind 42 , C 1 ), HasEffect(Ind 42 , DS 1 )} which are considered as strict assertions. Consequently, A 1 is defined as follows: A 1 = A 0 ∪ ∆ 2 . 
In the above Abox, p 1 = P (A 0 , A 1 , R, WR, T ) = 0.993. As can be seen, p 1 &gt; p 0 , i.e., P (A 0 , A i , R, WR, T ) increases by adding the newly hypothesized assertions. This shows that the new assertions are considered as additional support. The termination condition of the algorithm is not fulfilled; therefore, the algorithm continues processing. At this level, it is still not known whether Abox A 1 can be considered as the final interpretation Abox. Thus, this process is continued with another level. Consider the next forward chaining rule:</p><p>∀x CarEntry(x) → ∃y Building(y), OccursAt(x, y) By applying the above rule, the next set of assertions is generated, namely: ∆ 1 = {Building(Ind 43 ), OccursAt(Ind 42 , Ind 43 )} The newly generated assertions are also considered as strict assertions. In the following, the expanded Abox A 1 is defined as follows:</p><formula xml:id="formula_39">A 1 = A 1 ∪ ∆ 1 .</formula><p>Consider the following backward chaining rule from BR:</p><p>OccursAt(x, y) ← EnvConference(z), HasSubEvent(z, x), HasLocation(z, y), CarEntry(x), Building(y)</p><p>Consequently, by applying the above abduction rule the following set of assertions is hypothesized: ∆ 2 = {EnvConference(Ind 44 ), HasSubEvent(Ind 44 , Ind 42 ), HasLocation(Ind 44 , Ind 43 )} which are considered as strict assertions. Consequently,</p><formula xml:id="formula_40">A 2 = A 1 ∪ ∆ 2 .</formula><p>In the above Abox, p 2 = P (A 0 , A 2 , R, WR, T ) = 0.988. As can be seen, p 2 &lt; p 1 , i.e., P (A 0 , A i , R, WR, T ) decreases slightly by adding the newly hypothesized assertions. Since the termination condition of the algorithm is fulfilled, Abox A 1 can be considered as the final interpretation Abox. To see how the probabilities behave in further levels, the process is continued. Consider the next forward chaining rule from FR:</p><p>∀x EnvConference(x) → ∃y Environment(y), HasT opic(x, y) By applying the above rule, new assertions are generated. 
∆ 1 = {Environment(Ind 45 ), HasT opic(Ind 44 , Ind 45 )} In the following, the expanded Abox A 2 is defined: A 2 = A 2 ∪ ∆ 1 . Consider the next backward chaining rule from BR:</p><p>HasT opic(x, y) ← EnvP rot(z), HasSubEvent(z, x), HasObject(z, y), EnvConference(x), Environment(y) By applying the above abduction rule, the following set of assertions is hypothesized:</p><p>∆ 2 = {EnvP rot(Ind 46 ), HasSubEvent(Ind 46 , Ind 44 ), HasObject(Ind 46 , Ind 45 )} which are considered as strict assertions. In the following, A 3 is defined as A 3 = A 2 ∪ ∆ 2 . In the above Abox A 3 , p 3 = P (A 0 , A 3 , R, WR, T ) = 0.99. As can be seen, p 3 &gt; p 2 , i.e., P (A 0 , A i , R, WR, T ) increases slightly by adding the newly hypothesized assertions. Consider the next forward chaining rule: ∀x EnvP rot(x) → ∃y Agency(y), HasAgency(x, y) By applying the above rule, the next assertions are generated: ∆ 1 = {Agency(Ind 47 ), HasAgency(Ind 46 , Ind 47 )} As a result, the expanded Abox A 3 is presented as follows:</p><formula xml:id="formula_41">A 3 = A 3 ∪ ∆ 1 .</formula><p>Let us consider the next backward chaining rule from BR:</p><p>HasAgency(x, y) ← HealthP rot(z), HasObject(z, x), HasSubject(z, y), EnvP rot(x), Agency(y)</p><p>Consequently, new assertions are hypothesized by applying the above abduction rule, namely: ∆ 2 = {HealthP rot(Ind 48 ), HasObject(Ind 48 , Ind 46 ), HasSubject(Ind 48 , Ind 47 )} which are considered as strict assertions. Consequently, A 4 is defined as follows: A 4 = A 3 ∪ ∆ 2 .</p><p>In the above Abox, p 4 = P (A 0 , A 4 , R, WR, T ) = 0.985. As can be seen, p 4 &lt; p 3 , i.e., P (A 0 , A i , R, WR, T ) decreases slightly by adding the newly hypothesized assertions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Evaluation of the Results:</head><p>The determined probability values P (A 0 , A i , R, WR, T ) of this example are summarized in Table <ref type="table">1</ref> (Summary of the probability values), which clearly shows the stepwise behaviour of the probabilities during the interpretation process: i = 0: Abox A 0 , p 0 = 0.755; i = 1: Abox A 1 , p 1 = 0.993; i = 2: Abox A 2 , p 2 = 0.988; i = 3: Abox A 3 , p 3 = 0.99; i = 4: Abox A 4 , p 4 = 0.985. In the table, the variable i denotes the successive levels of the interpretation process. In this example, the interpretation process is consecutively performed four times. As can be seen, in the first interpretation level the probability p 1 increases strongly in comparison to p 0 . In the second, third and fourth interpretation levels, the probability values decrease slightly in comparison to p 1 . This means no significant changes can be seen in the results. In other words, the determination of A 3 and A 4 was not required at all. But the determination of A 2 was required to detect the slight difference |p 2 − p 1 | &lt; ε 2 . Consequently, Abox A 1 is considered as the final interpretation Abox.</p></div>
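The termination behaviour summarized above can be replayed in a few lines. The schedule ε i = ε / i is an assumption, chosen to be consistent with the report's remark that the precision interval is tightened step by step; the probability values are those of Table 1:

```python
def final_level(probs, eps):
    """Return the index of the final interpretation Abox: stop at the first
    level i with |p_i - p_{i-1}| < eps_i, keeping the previous level's Abox
    (CurrentI is preferred to NewI). The schedule eps_i = eps / i is an
    assumption, not stated explicitly in the report."""
    for i in range(1, len(probs)):
        if abs(probs[i] - probs[i - 1]) < eps / i:
            return i - 1
    return len(probs) - 1

probs = [0.755, 0.993, 0.988, 0.99, 0.985]  # p_0 .. p_4 from Table 1
stop = final_level(probs, 0.05)  # eps = 0.05 as in the example
```

At level 1 the jump |p 1 − p 0 | = 0.238 exceeds the bound, while at level 2 the difference |p 2 − p 1 | = 0.005 falls below it, so A 1 is kept as the final interpretation, matching the evaluation above.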
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Preference-based Scene Interpretation</head><p>In this example, we discuss how the video shot interpretation process can be performed by considering the results of two consecutive video shots. For the interpretation of each video shot we require information about the previous video shots; otherwise, the interpretation process does not work correctly. The question is which assertions have to be considered from the previous video shots. As discussed above, we would like to consider those assertions from the previous video shots which increase P (A 0 , A i , R, WR, T ). At the beginning of this example, the signature of the knowledge base is presented. The set of the concept names CN is divided into two disjoint sets Events and PhysicalThings, which are described below.</p><p>AudioSeg, HasSegLoc and V ideoSeg denote AudioSegment, HasSegmentLocator and V ideoSegment, respectively. Note that the concepts and roles in FR which are not given in CN and RN appear only in the multimedia content ontology. The multimedia content ontology determines the structure of the multimedia document. Additionally, it determines whether the concepts originate from video, audio or text. The forward chaining rules in FR mean that a concept assertion CarEntry or CarExit from the first video shot appears chronologically before a concept assertion CarEntry or CarExit from the second video shot. The set of backward chaining rules BR is presented as follows:</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>ALH f − -knowledge base Σ in an interpretation I are defined. A concept inclusion C D (concept definition C ≡ D) is satisfied in I, if C I ⊆ D I (resp. C I = D I ) and a role inclusion R S (role definition R ≡ S), if R I ⊆ S I (resp. R I = S I ). Similarly, assertions C(a) and R(a, b) are satisfied in I, if a I ∈ C I resp. (a, b) I ∈ R I . 
If an interpretation I satisfies all axioms of T resp. A it is called a model of T resp. A.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>, S, A): Input: a strategy function Ω, a termination function Ξ, a background knowledge Σ, a set of rules R, a scoring function S, and an agenda A Output: a set of interpretation Aboxes I I := {assign level(l, A)}; repeat I := I ; (A, α) := Ω(I) // A ∈ I, α ∈ A s.th. requires f iat(α l ) holds; l = l + 1; I := (A \ {A}) ∪ assign level(l, explanation step(Σ, R, S, A, α)); until Ξ(I) or no A and α can be selected such that I = I ; return I</figDesc></figure>
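The control flow of the CAE loop shown above can be rendered as an abstract Python sketch; the agenda is represented as a set of frozensets of (assertion, level) pairs, and `toy_omega` / `toy_step` are toy stand-ins for the application-specific oracles Ω and explanation step:

```python
def cae(omega, xi, explanation_step, agenda):
    """Abstract CAE loop: pick an assertion to explain via the strategy
    omega, replace its Abox on the agenda by the returned expansions, and
    stop when the termination function xi holds or nothing changes."""
    level = 0
    current = set(map(frozenset, agenda))
    while True:
        previous = current
        choice = omega(current)
        if choice is None:
            return current  # no assertion left that requires explanation
        abox, alpha = choice
        level += 1
        expansions = explanation_step(abox, alpha, level)
        current = (current - {abox}) | set(map(frozenset, expansions))
        if xi(current) or current == previous:
            return current

def toy_omega(interps):
    # pick any Abox still containing a level-0 (fiat) assertion
    for abox in interps:
        fiats = [a for a, l in abox if l == 0]
        if fiats:
            return abox, fiats[0]
    return None

def toy_step(abox, alpha, level):
    # "explain" alpha by adding one hypothesized assertion at this level
    # and re-tagging alpha so it is not selected again
    rest = {(a, l) for a, l in abox if a != alpha}
    return [rest | {(alpha, level), (f"Expl({alpha})", level)}]

result = cae(toy_omega, lambda i: False, toy_step,
             [{("Causes(C1,DS1)", 0)}])
```

After one pass, the single level-0 observation has been explained and carries level 1, so the second call to the strategy finds nothing left to explain and the loop ends.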
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>= {CarEntry, CarExit, CarRide} PhysicalThings = {Car, DoorSlam} Additionally, the set of the role names RN and the set of the individual names IN are represented as follows: RN = {Causes, HasObject, HasEffect, Before, HasStartEvent, HasEndEvent} IN = {C 1 , C 2 , DS 1 , DS 2 , Ind 41 , Ind 42 , Ind 44 } The Tbox T contains the axiom CarEntry ¬CarExit. In the following, the set of forward chaining rules FR is given: F R = { ∀x, xl, y, yl, w, z AudioSeg(x), HasSegLoc(x, xl), V ideoSeg(y), HasSegLoc(y, yl), IsSmaller(xl, yl), Depicts(x, w), Depicts(y, z), CarEntry(w), CarEntry(z) → Before(z, w), ∀x, xl, y, yl, w, z AudioSeg(x), HasSegLoc(x, xl), V ideoSeg(y), HasSegLoc(y, yl), IsSmaller(xl, yl), Depicts(x, w), Depicts(y, z), CarEntry(w), CarExit(z) → Before(z, w), ∀x, xl, y, yl, w, z AudioSeg(x), HasSegLoc(x, xl), V ideoSeg(y), HasSegLoc(y, yl), IsSmaller(xl, yl), Depicts(x, w), Depicts(y, z), CarExit(w), CarEntry(z) → Before(z, w), ∀x, xl, y, yl, w, z AudioSeg(x), HasSegLoc(x, xl), V ideoSeg(y), HasSegLoc(y, yl), IsSmaller(xl, yl), Depicts(x, w), Depicts(y, z), CarExit(w), CarExit(z) → Before(z, w)}</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>OccursAt, HasAgency, HasT opic, HasSubject, HasObject, HasEffect,  HasSubEvent, HasLocation}    In the following, the set of individual names IN is given:IN = {C 1 , DS 1 , ES 1 ,Ind 42 , Ind 43 , Ind 44 , Ind 45 , Ind 46 , Ind 47 , Ind 48 } Note that the notations in this example are based on Alchemy notations i.e. the instance-, conceptand role names begin with capital letters. In the following, the set of forward chaining rules FR is defined: FR = {∀x CarEntry(x) → ∃y Building(y), OccursAt(x, y), ∀x EnvConference(x) → ∃y Environment(y), HasT opic(x, y), ∀x EnvP rot(x) → ∃y Agency(y), HasAgency(x, y)} Similarly, the set of backward chaining rules BR is given as follows:</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">We slightly misuse notation in assuming (T , A) ∪ ∆ = (T , A ∪ ∆). If Σ ∪ A is inconsistent the result is well-defined but useless. It will not be used afterwards.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">For this purpose, the variable-free syntax of axioms can be first translated to predicate logic.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">In the expansion process, variables have to be renamed. We neglect these issues here.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">Please note that in this report, the expression map is used in two different contexts. The first one, M AP , denotes the Maximum A Posteriori inference approach, whereas the second one, map, is a function used in the assign level(l, A) function.</note>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This work has been funded by the European Community with the project CASAM (Contract FP7-217061 CASAM) and by the German Science Foundation with the project PRESINT (DFG MO 801/1-1).</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Additionally, the set of weighted rules is defined as follows: Assume that the analysis results of the second video shot given in the next Abox are sent to the queue Q:</p><p>A 2 = {1.3 Car(C 2 ), 1.2 DoorSlam(DS 2 ), Causes(C 2 , DS 2 )} Similarly, for the interpretation of the second video shot we will call the function M I Agent(Q, P artners, Die, (T , A 0 ), FR, BR, WR, ε). The observation extraction process from Q leads to Γ = A 2 . Afterwards, the most probable world W = 1, 1, 1 is determined and the Select function is applied on</p><p>Since no forward chaining rule is applicable to the above set and this set contains consistent Aboxes A</p><p>In the next step, the function Interpret(A , CurrentI, Γ , T , FR, BR, WR ∪ Γ, ε) is called which determines P (Γ , Γ , R, WR, T ) = 0.733. Afterwards, the CAE function is called which determines the following explanations: </p><p>Afterwards, the forward chaining rules are applied on the above agenda. A new assertion Before(Ind 41 , Ind 42 ) is generated and added to the four interpretation Aboxes. In the following, the four possible interpretation Aboxes are given: </p><p>The above values show that the interpretation Abox I 2 has a higher scoring value than the other interpretation Aboxes. Therefore, the final interpretation Abox is N ewI = I 2 . The Abox differences for additions and omissions are defined as follows:</p><p>For the next interpretation steps the agenda can continue with I 2 and eliminate the other interpretation Aboxes, since this Abox has a higher scoring value.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Manage Agenda</head><p>In this section, we introduce some techniques that improve the performance of the agenda.</p><p>-Elimination of interpretation Aboxes: This technique is applied if there are multiple interpretation Aboxes with different scoring values, one of which has the highest scoring value. At this step, we can select this Abox, eliminate the remaining interpretation Aboxes, and continue the interpretation process with the selected Abox. -Combining interpretation Aboxes: Consider the interpretation Aboxes I 1 , . . . , I n . In order to determine the final interpretation Abox, the MAP process can be applied to the union of all interpretation Aboxes I 1 ∪ . . . ∪ I n . -Shrinking interpretation Aboxes: With this technique, we decide which assertions from the previous video shots have to be considered for the interpretation process of the following video shots, since considering all assertions of the previous video shots would slow down the interpretation process. We believe that only the high-level concept assertions from the previous video shots play an important role, not the low-level concept assertions.</p></div>
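The three agenda-management techniques above can be sketched in code. This is a hedged sketch, not the report's implementation: Aboxes are modeled as sets of (concept, individual) assertion tuples, and all function and variable names are hypothetical.

```python
def eliminate(scored_aboxes):
    """Elimination: keep only the highest-scoring interpretation Abox."""
    best = max(scored_aboxes, key=scored_aboxes.get)
    return {best: scored_aboxes[best]}

def combine(aboxes):
    """Combining: build the union I1 ∪ ... ∪ In of all interpretation
    Aboxes, to which a single MAP computation could then be applied."""
    merged = set()
    for assertions in aboxes.values():
        merged |= assertions
    return merged

def shrink(abox, high_level_concepts):
    """Shrinking: retain only high-level concept assertions from a
    previous shot before interpreting the following shots."""
    return {a for a in abox if a[0] in high_level_concepts}

# Illustrative agenda with two candidate interpretation Aboxes.
agenda = {
    "I1": {("Car", "C2"), ("HighSpeedDriving", "E1")},
    "I2": {("Car", "C2"), ("DoorSlam", "DS2")},
}
print(combine(agenda))
print(shrink(agenda["I1"], {"HighSpeedDriving"}))
```

Elimination and shrinking trade completeness for speed: discarded Aboxes and low-level assertions cannot influence later shots, so in a real system one would apply them only when the score gap between candidates is large.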
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Summary</head><p>For multimedia interpretation, a semantically well-founded formalization is required. In accordance with previous work, a well-founded abduction-based approach is pursued in CASAM. Extending previous work, abduction is controlled by probabilistic knowledge and is carried out in terms of first-order logic. Rather than merely using abduction to compute explanations by which observations are entailed, the approach presented in this paper uses a probabilistic logic to motivate the explanation endeavor by increasing the belief in the observations. Hence, the computational resources an agent spends on generating explanations yield a certain utility for that agent. Thus, we have presented a first attempt to model a media interpretation agent more appropriately.</p></div>			</div>
		</back>
	</text>
</TEI>
