            Dynamic Bayesian Description Logics

                    İsmail İlkan Ceylan1? and Rafael Peñaloza2 ??
                1
                Theoretical Computer Science, TU Dresden, Germany
                         ceylan@tcs.inf.tu-dresden.de
           2
             KRDB Research Centre, Free University of Bozen-Bolzano, Italy
                            rafael.penaloza@unibz.it




1      Introduction

It is well known that many artificial intelligence applications need to represent
and reason with knowledge that is not fully certain. This has motivated the
study of many knowledge representation formalisms that can effectively han-
dle uncertainty, and in particular probabilistic description logics (DLs) [7–9].
Although these logics are encompassed under the same umbrella, they differ
greatly in the way they interpret the probabilities (e.g. statistical vs. subjec-
tive), their probabilistic constructors (i.e., probabilistic axioms or probabilistic
concepts and roles), their semantics, and even their probabilistic independence
assumptions. A recent example of probabilistic DLs is the family of Bayesian DLs,
which can express both logical and probabilistic dependencies between axioms [2–4].
     One common feature among most of these probabilistic DLs is that they
consider the uncertainty degree (i.e., the probability) of the different events to
be fixed and static through time. However, this assumption is still too strong
for many application scenarios. Consider for example a situation where a grid
of sensors is collecting knowledge that is then fed into an ontology to reason
about the situation of a large system. Since the sensors might perform an incor-
rect reading, this knowledge and the consequences derived from it can only be
guaranteed to hold with some probability. However, the failure rate of a sensor
is not static over time; as the sensor ages, its probability of failing increases.
Moreover, the speed at which each sensor ages may also be influenced by other
external factors like the weather at the place it is located, or the amount of use
it is given.
We propose to extend the formalism of Bayesian DLs to dynamic Bayesian
DLs, in which the probabilities with which the axioms hold are updated over
discrete time steps, following the principles of dynamic Bayesian networks. Using this
principle, we can not only reason about the probabilistic entailments at every
point in time, but also reason about future events given some evidence at different
times. This work presents the first steps towards probabilistic reasoning about
complex events over time.
?
     Supported by DFG within the Research Training Group “RoSI” (GRK 1907).
??
     This work was developed while the author was affiliated with TU Dresden;
     the author was partially supported by the Cluster of Excellence “cfAED”.
    [Figure: the TBN graph links the variables x, y, z at time t to their
     copies x′, y′, z′ at time t + 1; its conditional probability tables are:

       P(x′ | x)  = .4    P(y′ | x, y, x′)    = .9    P(z′ | z, x′, y′)    = .7
       P(x′ | ¬x) = .4    P(y′ | x, y, ¬x′)   = .5    P(z′ | z, x′, ¬y′)   = .2
                          P(y′ | x, ¬y, x′)   = .8    P(z′ | z, ¬x′, y′)   = .4
                          P(y′ | x, ¬y, ¬x′)  = .4    P(z′ | z, ¬x′, ¬y′)  =  1
                          P(y′ | ¬x, y, x′)   = .8    P(z′ | ¬z, x′, y′)   = .6
                          P(y′ | ¬x, y, ¬x′)  = .4    P(z′ | ¬z, x′, ¬y′)  = .1
                          P(y′ | ¬x, ¬y, x′)  = .7    P(z′ | ¬z, ¬x′, y′)  = .3
                          P(y′ | ¬x, ¬y, ¬x′) = .1    P(z′ | ¬z, ¬x′, ¬y′) =  1 ]

                Fig. 1. The TBN B→ over the variables V = {x, y, z}



2   Formalism

Following the ideas presented in [2], a dynamic Bayesian ontology is an ontology
from an arbitrary (but fixed) DL L, whose axioms are annotated with a context
that expresses when they are considered to hold. The difference is that these
contexts are now related via a dynamic Bayesian network.
    A Bayesian network (BN) [5] is a pair B = (G, Φ), where G = (V, E) is a
finite DAG and Φ contains, for every x ∈ V, a conditional probability distribution
PB(x | π(x)) of x given its parents π(x). Dynamic BNs (DBNs) [6, 10] extend BNs
to provide a compact representation of an evolving joint probability distribution
(JPD) over a fixed set of random variables. Updates of the JPD are expressed
through a two-slice BN, which expresses the probabilities at the next point in time,
given the current context. A two-slice BN (TBN) is a pair (G, Φ), where G = (V ∪ V′, E)
is a DAG containing no edges between elements of V, V′ = {x′ | x ∈ V}, and Φ
contains, for every x′ ∈ V′, a conditional probability distribution P(x′ | π(x′)) of
x′ given its parents π(x′) (see Figure 1). A dynamic Bayesian network (DBN)
is a pair D = (B1, B→), where B1 is a BN and B→ is a TBN. Using the Markov
property (the probability of the future state is independent of the past, given
the present state), the DBN D = (B1, B→) defines, for every t ≥ 1, the probability
distribution PB(V1:t) = PB1(V1) · ∏_{i=2}^{t} ∏_{x∈V} PB→(xi | π(xi)), where
the parents π(xi) may belong to the slices i − 1 and i. This distribution
is defined by unraveling the DBN starting from B1 , using B→ until t copies of V
have been created. This produces a new BN B1:t encoding the distribution over
time of the variables. Figure 2 depicts B1:3 for the DBN (B1 , B→ ) where B→ is
the TBN from Figure 1. The conditional probability tables of each node given
its parents (not depicted) are those of B1 for the nodes in V1 , and of B→ for
nodes in V2 ∪ V3 . Notice that B1:t has t copies of each random variable in V .
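As an illustration, the unraveling and the product formula above can be sketched as follows. This is a minimal example with two variables and invented CPT numbers; the names and probabilities are ours, not taken from the paper, and a real DBN engine would avoid the exponential enumeration used here.

```python
from itertools import product

# Hypothetical DBN over V = {x, y}; all CPT numbers below are invented.
def p_b1(w):
    """P_B1(x1, y1): initial slice, with x and y independent here."""
    return (0.7 if w["x"] else 0.3) * (0.6 if w["y"] else 0.4)

def p_step(prev, cur):
    """P_B->(slice i | slice i-1): product over V of P(x_i | parents)."""
    px1 = 0.9 if prev["x"] else 0.2                  # P(x' = true | x)
    py1 = 0.8 if (cur["x"] and prev["y"]) else 0.3   # P(y' = true | x', y)
    px = px1 if cur["x"] else 1.0 - px1
    py = py1 if cur["y"] else 1.0 - py1
    return px * py

def p_joint(slices):
    """P(V_1, ..., V_t) = P_B1(V_1) * prod_{i=2}^t P_B->(V_i | V_{i-1})."""
    p = p_b1(slices[0])
    for prev, cur in zip(slices, slices[1:]):
        p *= p_step(prev, cur)
    return p

def marginal_at(t):
    """Distribution of V_t: unravel to t slices and sum out V_1, ..., V_{t-1}."""
    dist = {}
    for bits in product([False, True], repeat=2 * t):
        slices = [{"x": bits[2 * i], "y": bits[2 * i + 1]} for i in range(t)]
        last = (slices[-1]["x"], slices[-1]["y"])
        dist[last] = dist.get(last, 0.0) + p_joint(slices)
    return dist
```

The sketch mirrors the definition directly: the joint over the unraveled BN B1:t is the product of B1 and t − 1 applications of B→.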
    A V-context is a consistent set of literals over V. A V-axiom is an expression
of the form ⟨α : κ⟩, where α is an axiom of the DL L and κ is a V-context. A
V-ontology is a finite set O of V-axioms. A DBL knowledge base (KB) over V is a
pair K = (O, D), where D is a DBN over V and O is a V-ontology. The semantics
of this logic is defined by producing, for every point in time, a multiple-world
interpretation, where each world is associated with a probability that is compatible
with the probability distribution defined by the DBN at that point in time.
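As a sketch of these definitions, a V-ontology can be represented as a list of axiom/context pairs, and each world over V selects the axioms whose context it satisfies. The encoding and the axiom strings below are our own hypothetical examples, not notation from the paper.

```python
# A V-context is a consistent set of literals over V, encoded here as
# (variable, polarity) pairs; the axiom strings are hypothetical examples.
def context_holds(kappa, world):
    """A V-context kappa holds in a world iff the world satisfies every literal."""
    return all(world[var] == pol for var, pol in kappa)

def restrict(ontology, world):
    """The classical L-ontology induced by a world: axioms whose context holds."""
    return [alpha for alpha, kappa in ontology if context_holds(kappa, world)]

ontology = [
    ("Sensor ⊑ Reliable",  [("x", True)]),                # holds when x
    ("Reliable ⊑ Trusted", [("x", True), ("y", False)]),  # holds when x, ¬y
]
world = {"x": True, "y": False}
# restrict(ontology, world) contains both axioms in this world
```

Each world of the multiple-world interpretation is then paired with a probability compatible with the DBN's distribution at the given time point.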


3       Reasoning

The main reasoning task that we consider is to compute the probability of ob-
serving a consequence at different points in time. We consider three variants of
this problem; namely, the probability of observing the consequence (i) exactly
at time t ≥ 0, (ii) at the latest at time t, or (iii) at some point in time. Combining
methods from DBNs and context-based reasoning [1], we show that all these
reasoning problems can be solved effectively.
    The main idea is based on the unraveling of the DBN to time t. Using this
unraveling, we readily know the probability of each context at every point in time
between the present state and t. The logical element of the problem (i.e., knowing
which contexts entail the consequence under consideration) is handled through
the computation of a so-called context formula, which intuitively summarizes
all the logical causes for the consequence to follow. Importantly, this context
formula can be computed once for each consequence, and used for many different
tests while reasoning. This unraveling and context formula can be used to solve
the problems (i) and (ii) introduced above, with the help of a standard BN
inference engine. Moreover, it is possible to add evidence observed at different
points in time into the computation. This additional evidence poses no technical
difficulties for our techniques, although it may cause an increase in complexity,
depending on the type and frequency of the observations.
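The overall scheme can be sketched as follows. The naming is ours, and a real implementation would delegate the weighted sum to a BN inference engine over the unraveled network rather than enumerate worlds explicitly.

```python
# Sketch: the probability of a consequence at time t is the mass, under the
# distribution of the t-th slice, of the worlds satisfying the consequence's
# context formula phi (phi summarizes the contexts entailing the consequence).
def probability_of_consequence(phi, dist):
    """phi: predicate over worlds; dist: map world -> probability at time t."""
    return sum(p for world, p in dist.items() if phi(world))

# Toy distribution over V_t = {x, y} at some fixed t (made-up numbers):
dist = {
    (True, True): 0.30, (True, False): 0.25,
    (False, True): 0.25, (False, False): 0.20,
}
phi = lambda w: w[0] or w[1]   # context formula x ∨ y
# probability_of_consequence(phi, dist) ≈ 0.80
```

Note that the context formula is computed once per consequence and then reused across time points and evidence scenarios.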
    Clearly, the unraveling method cannot be used to compute the probability of
eventually observing the consequence, as described by the problem (iii) above:
one would potentially need to unravel the DBN to an infinite time, yielding a
structure for which no effective reasoning methods exist. Instead, we identify
some conditions under which this probability is easy to compute. Overall, this
does not yield a full reasoning mechanism, but provides a good approximation
in several meaningful cases.
    As mentioned before, this work provides only the first steps towards a for-
malism for reasoning about events with evolving uncertainty. The following step
is to be able to handle more complex time expressions and evidence.


    [Figure: B1 over x, y, z (left), unraveled into three slices with copies
     xi, yi, zi for i = 1, 2, 3 (right).]

              Fig. 2. B1 and the three-step unraveling B1:3 of (B1, B→)
References
 1. Baader, F., Knechtel, M., Peñaloza, R.: Context-Dependent Views to Axioms and
    Consequences of Semantic Web Ontologies. J. of Web Semantics 12–13, 22–40
    (2012)
 2. Ceylan, İ.İ., Peñaloza, R.: Bayesian Description Logics. In: Proc. of DL’14. CEUR
    Workshop Proceedings, vol. 1193. CEUR-WS (2014)
 3. Ceylan, İ.İ., Peñaloza, R.: The Bayesian Description Logic BEL. In: Proc. of IJ-
    CAR’14. LNCS, vol. 8562. Springer Verlag (2014)
 4. Ceylan, İ.İ., Peñaloza, R.: Tight Complexity Bounds for Reasoning in the Descrip-
    tion Logic BEL. In: Proc. of JELIA’14. LNCS, vol. 8761. Springer Verlag (2014)
 5. Darwiche, A.: Modeling and Reasoning with Bayesian Networks. Cambridge Uni-
    versity Press (2009)
 6. Dean, T., Kanazawa, K.: A model for reasoning about persistence and causation.
    Computational Intelligence, 142–150 (1989), http://onlinelibrary.wiley.
    com/doi/10.1111/j.1467-8640.1989.tb00324.x/abstract
 7. Klinov, P., Parsia, B.: Representing sampling distributions in P-SROIQ. In: Proc.
    of URSW’11. Workshop Proceedings, vol. 778. CEUR (2011)
 8. Lukasiewicz, T., Straccia, U.: Managing uncertainty and vagueness in description
    logics for the Semantic Web. Web Semantics: Science, Services and Agents on the
    World Wide Web 6(4), 291–308 (Nov 2008)
 9. Lutz, C., Schröder, L.: Probabilistic Description Logics for Subjective Uncertainty.
    In: Proc. of KR’10. AAAI Press (2010)
10. Murphy, K.: Dynamic Bayesian networks: representation, inference and learning.
    Ph.D. thesis, University of California, Berkeley (2002)