=Paper= {{Paper |id=Vol-2663/paper-9 |storemode=property |title=Give Inconsistency a Chance: Semantics for Ontology-Mediated Verification |pdfUrl=https://ceur-ws.org/Vol-2663/paper-9.pdf |volume=Vol-2663 |authors=Clemens Dubslaff,Patrick Koopmann,Anni-Yasmin Turhan |dblpUrl=https://dblp.org/rec/conf/dlog/DubslaffKT20 }}
           Give Inconsistency a Chance:
    Semantics for Ontology-Mediated Verification

         Clemens Dubslaff, Patrick Koopmann, and Anni-Yasmin Turhan

                Technische Universität Dresden, Dresden, Germany
    {clemens.dubslaff,patrick.koopmann,anni-yasmin.turhan}@tu-dresden.de



        Abstract. Recently, we have introduced ontologized programs as a for-
        malism to link description logic ontologies with programs specified in
        the guarded command language, the de-facto standard input formalism
        for probabilistic model checking tools such as Prism, to allow for an
        ontology-mediated verification of stochastic systems. Central to our ap-
        proach is a complete separation of the two formalisms involved: the
        guarded command language is used to describe the dynamics of the
        stochastic system, and description logics are used to model background
        knowledge about the system in an ontology. In ontologized programs, these two languages
        are loosely coupled by an interface that mediates between these two
        worlds. Under the original semantics defined for ontologized programs, a
        program may enter a state that is inconsistent with the ontology, which
        limits the capabilities of the description logic component. We approach
        this issue in different ways by introducing consistency notions, and dis-
        cuss two alternative semantics for ontologized programs. Furthermore,
        we present complexity results for checking whether a program is consis-
        tent under the different semantics.


Keywords: Description Logics, Inconsistency, Probabilistic Model Checking,
Ontology-Mediated Verification


1     Introduction
Probabilistic model checking (PMC, see, e.g., [2,7] for surveys) is an automated
technique for the quantitative analysis of dynamic systems. PMC has been suc-
cessfully applied in many areas, e.g., to ensure that the system meets quality
requirements such as low error probabilities or an energy consumption within
a given bound. The de-facto standard specification language for dynamic sys-
tems in PMC is given by stochastic programs, a probabilistic variant of Dijk-
stra’s guarded command language [5,9] used within many PMC tools such as
Prism [10]. Usually, the behavior described by a stochastic program is part of
a bigger system, or might even be used within a collection of systems that also
have an impact on the operational behavior. There are different ways to
    Copyright © 2020 for this paper by its authors. Use permitted under Creative Com-
    mons License Attribution 4.0 International (CC BY 4.0).

model this by using stochastic programs. First, one could integrate additional
knowledge about the surrounding system directly into the stochastic program.
While this approach provides accurate PMC results, it has the disadvantage that
the guarded command language is not well-suited for describing complex static
knowledge. Alternatively, one can use non-determinism to over-approximate the
possible behaviors of the surrounding system. This approach has the disadvan-
tage that a best- and worst-case PMC analysis might lead to many possible
results that do not allow for a meaningful interpretation. As third option, one
can use ontology-mediated PMC, an approach to integrate knowledge described
by a description logic (DL) ontology into the PMC process, which we recently
proposed in [6]. At the core of this approach are ontologized (stochastic) programs,
which can be subjected to PMC.

    A central idea of ontologized programs is the complete separation of concerns
and formalisms: the operational behavior is described using a program component
expressed by guarded commands. Static knowledge about the system is described
in an ontology using DL formalisms. An interface mediates between these two
worlds by providing mappings from statements in guarded command language
to DL and vice versa. The loose coupling achieved in this way allows one to reuse and
replace both components easily, so that the same behavior can be analyzed with
different ontologies (to describe different system configurations), and the same
ontology can be used with different programs (e.g., to describe different strategies
of behavior to be analyzed within the same system). This approach distinguishes
our work from other approaches that use DLs for the verification of dynamic
systems, in which DL constructs are used directly within the program [1,4,8,14].
To be able to analyze ontologized programs practically, we developed a two-
step procedure in [6]. First, the ontologized program and the property to be
analyzed are rewritten into a plain stochastic program and a modified property.
Then, these are evaluated using the PMC tool Prism. This way, essentially
all model-checking tasks supported by Prism can be performed on ontologized
programs, ranging from verification of standard properties (like the probability
or the expected time for reaching a failure state) to more advanced quantitative
analysis (cf. [7,3]). Our approach was implemented and evaluated in [6].

    Under the original semantics of ontologized programs in [6], program com-
ponents are allowed to enter states that are inconsistent with the ontology. This
leaves the task of resolving the inconsistency to the program component by im-
plementing a sort of exception handling. However, inconsistent states may also
stand for situations that never can arise in practice. In this case, a semantics
that avoids inconsistent states when possible is favorable, leading to a consis-
tency notion of ontologized programs. In this paper, we answer the following
open questions: 1) how to handle inconsistent states in a meaningful way and
2) what does it mean for an ontologized program to be inconsistent. We approach
these questions by presenting three possible semantics of ontologized programs:
consistency-independent, probability-normalizing, and probability-preserving se-
mantics. Each semantics specifies the behavior of the program in a different way,

and additionally comes with its own notion of consistency. For each of these no-
tions, we give tight complexity bounds for deciding consistency of programs.


2    Preliminaries

We assume that the reader is familiar with the basics of description logics. The
results in this paper apply to different DLs, on which we make some basic as-
sumptions: 1) they are fragments of SROIQ, and 2) they use unary encoding
for numbers. These assumptions were made for convenience in our proofs, and we
conjecture that our results can be lifted to a more general class of DLs.
     We briefly address relevant preliminaries on probabilistic operational mod-
els and their verification. A probability distribution over S is a function µ : S →
[0, 1] ∩ Q with Σ_{s∈S} µ(s) = 1; the set of distributions over S is denoted by Distr(S).


Markov decision processes (MDPs) are tuples M = ⟨Q, Act, P, q0, Λ, λ⟩
where Q and Act are countable sets of states and actions, respectively, P : Q ×
Act ⇀ Distr(Q) is a partial probabilistic transition function, q0 ∈ Q is an initial
state, and Λ is a set of labels assigned to states via a labeling function λ : Q →
℘(Λ). For convenience, for q1, q2 ∈ Q and α ∈ Act, we abbreviate P(q1, α)(q2) by
P(q1, α, q2). An action α ∈ Act is enabled in a state q ∈ Q if P(q, α) is defined.
We assume that M does not have final states, i.e., in each state at least one
action is enabled. A finite path in M is a sequence π = q0 α0 q1 α1 . . . αk−1 qk
where pi = P (qi , αi , qi+1 ) > 0 for all i < k. The probability of π is Pr(π) =
p0 · p1 · … · pk−1. Intuitively, the behavior of M in state q is to select an enabled
action α and move to a successor state q′ with probability P(q, α, q′). Such a
selection is done via (randomized) schedulers, i.e., functions S that map finite
paths to distributions over actions. For schedulers, we require that for each
finite path π in M with last state q, P(q, α) is defined for every α ∈ Act
with S(π)(α) > 0. Then, a probability measure Pr^S_M over maximal S-paths, i.e.,
infinite paths that follow S, is defined in the standard way (cf. [13]).
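To make these notions concrete, the following sketch encodes a small MDP and computes the probability of a finite path. The dict-based encoding and all state and action names are our own illustration, not part of the paper.

```python
# A small MDP sketch: P maps (state, action) pairs to probability
# distributions over successor states.  An action is enabled in q
# iff (q, action) is a key of P.  All names here are illustrative.
P = {
    ("q0", "a"): {"q0": 0.5, "q1": 0.5},
    ("q0", "b"): {"q2": 1.0},
    ("q1", "a"): {"q1": 1.0},
    ("q2", "a"): {"q2": 1.0},
}

def enabled(q):
    """Actions alpha such that P(q, alpha) is defined."""
    return [a for (s, a) in P if s == q]

def path_probability(path):
    """Pr(pi) = p0 * ... * p_{k-1} for a finite path
    pi = q0 a0 q1 a1 ... a_{k-1} qk, given as an alternating list of
    states and actions; p_i = P(q_i, a_i)(q_{i+1}), and the result is
    0 if some step is impossible."""
    states, actions = path[0::2], path[1::2]
    prob = 1.0
    for q, a, q_next in zip(states, actions, states[1:]):
        prob *= P.get((q, a), {}).get(q_next, 0.0)
    return prob
```

For instance, `path_probability(["q0", "a", "q0", "a", "q1"])` yields 0.25. Note that every state has at least one enabled action, matching the assumption that the MDP has no final states.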


Probabilistic model checking (PMC, cf. [2]) is used in this paper for a
quantitative analysis of MDPs. Usually, one is given a temporal logic property
φ over the set of labels Λ that defines the set of maximal paths in the MDP that
fulfill φ. For instance, linear temporal logic (LTL, [12]) can be used to specify
the set of paths reaching states labeled by an ℓ ∈ Λ, formally φ1 = ◇ℓ, or where,
whenever such a state is reached, a state not labeled by ℓ is reached again within
two time steps, formally φ2 = □(ℓ → (X¬ℓ ∨ XX¬ℓ)). For a scheduler S, any
LTL property constitutes a measurable set of S-paths, such that we can reason
about the probability Pr^S_M(φ) of M satisfying an LTL property φ w.r.t. S. By
ranging over all possible schedulers, this enables a best- and worst-case analysis.
A stochastic property is an expression Φ = Pex(φ) ⋈ τ where φ is an LTL
formula, τ ∈ Q is a threshold, ⋈ ∈ {<, ≤, ≥, >} is a relation, and ex ∈ {min, max}.
Then, M satisfies Φ, denoted M |= Φ, iff ex = min and inf_S Pr^S_M(φ) ⋈ τ, or

ex = max and sup_S Pr^S_M(φ) ⋈ τ. The PMC problem for Φ and M amounts
to deciding whether M |= Φ. For more details on MDPs and PMC, we refer to
standard textbooks such as [13,2,7]. Note that in [6] we considered weighted
MDPs for ontology-mediated PMC over expected accumulated costs. For brevity,
we consider only plain MDPs in this paper.
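As a concrete illustration of such a best-case analysis, the maximal reachability probability sup_S Pr^S_M(◇ℓ) can be approximated by standard value iteration. The sketch below, including the dict encoding of the MDP and all names, is our own; the paper's toolchain delegates such computations to Prism.

```python
def max_reach_prob(P, states, goal, rounds=2000):
    """Approximate x(q) = sup over schedulers of the probability of
    eventually reaching `goal` from q, by value iteration:
    x(q) = 1 on goal states, otherwise the maximum over enabled
    actions of the expected value of x under the successor distribution."""
    x = {q: 1.0 if q in goal else 0.0 for q in states}
    for _ in range(rounds):
        x = {
            q: 1.0 if q in goal else max(
                (sum(p * x[t] for t, p in dist.items())
                 for (s, a), dist in P.items() if s == q),
                default=0.0,
            )
            for q in states
        }
    return x

# A two-state example: action "a" fails (reaches "err") with
# probability 0.5 per step, action "b" stays safe forever.
P = {
    ("q0", "a"): {"q0": 0.5, "err": 0.5},
    ("q0", "b"): {"q0": 1.0},
    ("err", "a"): {"err": 1.0},
}
vals = max_reach_prob(P, {"q0", "err"}, {"err"})
```

Here the maximizing scheduler always picks "a", so the value for "q0" converges to 1, while a minimizing scheduler could pick "b" forever and never reach "err" at all, illustrating the gap between best- and worst-case analysis.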


Arithmetic Constraints and Boolean Expressions are defined over a
countable set V of variables. An evaluation over V is a function η that assigns
to each v ∈ V a value in {−δ(v), −δ(v) + 1, …, δ(v)}, where δ : V → N defines
the bounds of each variable. The set of evaluations is denoted by Eval(V). Let z
range over Z and v range over V. The set of arithmetic expressions E(V) is de-
fined by the grammar α ::= z | v | (α + α) | (α · α). Evaluations naturally
extend to arithmetic expressions, e.g., η(α1 + α2) = η(α1) + η(α2). C(V) denotes
the set of arithmetic constraints over V, which are terms of the form (α ⋈ z)
with α ∈ E(V), ⋈ ∈ {>, =, <}, and z ∈ Z. For a given evaluation η ∈ Eval(V)
and constraint γ ∈ C(V), we write η |= γ iff γ holds in η. We denote by C(η)
the constraints entailed by η, i.e., C(η) = {c ∈ C(V) | η |= c}. B(V) denotes
the set of Boolean expressions over C(V), i.e., the set of propositional formulas
where arithmetic constraints over V are used as atoms. Entailment of Boolean
expressions in B(V) from evaluations is defined in the usual way.
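These definitions transcribe almost directly into code; the encoding of expressions as nested tuples and of constraints as triples is our own choice for illustration.

```python
import operator

# Arithmetic expressions E(V) as nested tuples:
#   z (int) | v (variable name, str) | ("+", a1, a2) | ("*", a1, a2)
# Constraints C(V) as (alpha, rel, z) with rel in {">", "=", "<"}.
OPS = {"+": operator.add, "*": operator.mul}
RELS = {">": operator.gt, "=": operator.eq, "<": operator.lt}

def evaluate(eta, alpha):
    """Extend an evaluation eta (dict: variable -> int) to expressions."""
    if isinstance(alpha, int):
        return alpha
    if isinstance(alpha, str):
        return eta[alpha]
    op, a1, a2 = alpha
    return OPS[op](evaluate(eta, a1), evaluate(eta, a2))

def models(eta, gamma):
    """eta |= (alpha rel z)."""
    alpha, rel, z = gamma
    return RELS[rel](evaluate(eta, alpha), z)
```

Under η = {x: 2, y: 3}, the expression (x + y·2) evaluates to 8 and η |= (x + y > 4). The infinite label set C(η) of a state would never be materialized in practice; λ(η) |= g is checked by evaluating the guard g directly under η.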


Stochastic programs are the de-facto standard in the area of PMC for con-
cisely describing MDPs. They are essentially a probabilistic variant of Dijkstra's
guarded command language [5,9]. We call a function u : V → E(V) an update, and a
distribution σ ∈ Distr(Upd) over a given finite set Upd of updates a stochastic up-
date. The effect of an update u : V → E(V) on an evaluation η ∈ Eval(V) is their
composition η ∘ u ∈ Eval(V), i.e., (η ∘ u)(v) = max{−δ(v), min{η(u(v)), δ(v)}}
for all v ∈ V. This notion naturally extends to stochastic updates σ ∈ Distr(Upd),
where for any η, η′ ∈ Eval(V) we have that (η ∘ σ)(η′) is the sum over all σ(u)
for which η ∘ u = η′. A stochastic guarded command over a finite set of updates
Upd, briefly called a command, is a pair ⟨g, σ⟩ where g ∈ B(V) is a guard and
σ ∈ Distr(Upd) is a stochastic update, which we write in the following form:

      (server proc1 = 2 ∧ server proc2 = 2) ↦ 1/2 : (server proc1 := 1) + 1/2 : (server proc1 := 3)        (1)

    A stochastic program over V is a tuple P = ⟨V, C, η0⟩ where C is a finite set
of commands and η0 ∈ Eval(V) is an initial variable evaluation. For brevity, we
write Upd(P) for the set of all updates in C. The MDP induced by P is defined
as M[P] = ⟨S, Act, P, η0, Λ, λ⟩ where

 – S = Eval(V),
 – Act = Distr(Upd(P)),
 – Λ = C(V),
 – λ(η) = C(η) for all η ∈ S, and
 – P(η, σ, η′) = (η ∘ σ)(η′) for η, η′ ∈ S and ⟨g, σ⟩ ∈ C with λ(η) |= g.
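The clamping in the definition of η ∘ u and the lifting to stochastic updates can be sketched as follows. Right-hand sides of updates are modeled as Python functions on evaluations, and the variable name `server_proc1` and its bound are hypothetical, chosen to mirror command (1).

```python
def apply_update(eta, u, delta):
    """(eta o u)(v) = max(-delta(v), min(eta(u(v)), delta(v))):
    evaluate each assigned expression under eta, then clamp the
    result to the variable's bounds."""
    return {v: max(-delta[v], min(rhs(eta), delta[v]))
            for v, rhs in u.items()}

def apply_stochastic_update(eta, sigma, delta):
    """(eta o sigma)(eta') is the sum of sigma(u) over all updates u
    with eta o u = eta'; distinct updates with the same effect on eta
    therefore collapse into one successor evaluation."""
    dist = {}
    for u, p in sigma:
        eta2 = frozenset(apply_update(eta, u, delta).items())
        dist[eta2] = dist.get(eta2, 0.0) + p
    return dist

delta = {"server_proc1": 3}              # bound: values in -3..3
eta = {"server_proc1": 2}
sigma = [  # stochastic update of a command like (1): 1/2 each branch
    ({"server_proc1": lambda e: 1}, 0.5),
    ({"server_proc1": lambda e: 3}, 0.5),
]
dist = apply_stochastic_update(eta, sigma, delta)
```

Assigning a value above the bound, e.g. `server_proc1 := 5` with δ = 3, is clamped to 3, exactly as in the definition of η ∘ u.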



[Figure 1 depicts the ontologized program state q1. The command perspective (left) contains the hook migrateServer2 and the variable assignments server proc1 = 2, server proc2 = 2, and server proc3 = 2. The DL perspective (right) contains the static axiom OverloadedServer ≡ Server ⊓ ≥3 runs.Process, the assertions Server(s1), Server(s2), Server(s3), Process(p1), Process(p2), Process(p3), and the fluents runs(s2, p1), runs(s2, p2), runs(s2, p3).]

Fig. 1. Perspectives on the ontologized program state q1 : the command perspective
(left) and DL perspective (right), linked by the interface (arrows).


3   Ontologized Stochastic Programs
In general, an ontologized program comprises the following three components [6]:
The Program is a specification of the operational behavior.
The Description Logic Ontology contains static background knowledge that
  may influence the behavior of the program.
The Interface links program and ontology by providing mappings between the
  languages used for the program and the ontology.
    Ontologized programs combine two formalisms: the guarded command lan-
guage and description logics (DLs). To achieve a maximal separation of concerns,
the behavior and the static knowledge about the context of the program are
specified separately and are only loosely coupled by an interface. This allows
for specifying operational behavior in the usual way and for extending and reusing
existing program specifications, and similarly, for easily linking established or even
standardized ontologies to a program.
    Before we formally define ontologized programs, we explain how global states
of ontologized programs are represented. Both formalisms, guarded command
language and DL, may represent abstracted versions of global states in different
ways. For the guarded command language a state is described as an evaluation of
variables, while in DL a state is described using a set of axioms. Both formalisms
require different degrees of abstraction and detail, depending on their dedicated
purpose. Consequently, global states of ontologized programs are composed of
two components, which we call the command perspective and the DL perspective
on the global system state (see Figure 1). As can be seen in the figure, there
are some elements in the two perspectives that correspond to each other, while
others have no direct correspondence. The correspondence, depicted by arrows in

the figure, is defined in the interface component of the ontologized program via
a function Dc (DL to command ), mapping DL axioms to command expressions.
The part of the DL perspective inside the box is not mapped to anything on the
command side – this is the static part, and contains the background knowledge
about the system that is independent of the current state of the system. In
contrast, the mapped axioms in the ontology perspective can change, which is
why we call them fluents. Note that we also allow the command perspective to
use variables that have no correspondence in the DL perspective.
    So far, our description of global states does not depend on any DL reason-
ing, which is, however, required by the DL component to provide background
knowledge for the operational component of the program. To invoke a reasoning
task from the command side of ontologized programs, we have introduced the
notion of hooks in [6]. A hook is a propositional variable in the program com-
ponent that is linked by the interface to a Boolean query on the DL component
of the ontologized program state. The result of the query determines the value
of the propositional variable using the function cD (command to DL). In our
example, we would for each server i ∈ {1, 2, 3} define one hook migrateServeri
that stands for the knowledge that processes should be migrated from Server i:

                   cD(migrateServeri) = {OverloadedServer(si )}.

In the ontologized state shown in Figure 1, the hook migrateServer2 is active
because the ontology in the DL perspective entails OverloadedServer(s2).
    To enable the actual use of hooks in the guarded command language, we have
to slightly extend the definition of commands. Specifically, we define abstract
stochastic programs as stochastic programs where guards are Boolean expressions
over arithmetic constraints and hooks. For example, a command in an abstract
stochastic program looks as follows:

      (migrateServer2 ∧ server proc1 = 2) ↦ 1/2 : (server proc1 := 1) + 1/2 : (server proc1 := 3)        (2)

While the intention of this command is similar to the command (1), this version
omits the specification of the condition under which Server 2 needs to migrate.
Specifying and testing this condition is now delegated to the DL component.
   Now we are ready to formally define ontologized programs. Given an abstract
stochastic program P, we denote by HP the hooks used in P.
Definition 1. An ontologized program is a tuple P = ⟨P, K, I⟩ where

 – P = ⟨VP, CP, HP, ηP,0⟩ is an abstract stochastic program,
 – K is a DL ontology called the static DL knowledge,
 – I = ⟨VI, HI, F, Dc, cD⟩ is a tuple describing the interface, where VI is a set
   of public variables, HI is a set of interface hooks, F is a set of DL axioms
   called fluents, and cD : HI → ℘(A) and Dc : F → B(VI) are two mappings
   (with A denoting the set of DL axioms), and

P is compliant with I, i.e., HP ⊆ HI and VI ⊆ VP. If for a DL L every axiom
in K ∪ F is an L axiom, we call P an L-ontologized program.
                              Semantics for Ontology-Mediated Verification       7

4   Three Semantics of Ontologized Programs

Stochastic programs are used as concise representations of MDPs that can be
subject to a quantitative analysis. Analogously, ontologized programs are con-
cise representations of ontologized MDPs, where a quantitative analysis on these
MDPs can further rely on information provided by the ontology. Since there
is an additional logical component in ontologized programs, there are differ-
ent meaningful ways in which ontologized MDPs can be defined to represent
the operational behavior of ontologized programs. We present three semantics
for ontologized programs that differ in their treatment of inconsistencies. Such
inconsistencies arise when variable evaluations in the stochastic program have
an inconsistent meaning in the ontology. Choosing an appropriate semantics is
a modeling decision to be made when constructing the ontologized program.
Therefore we analyze the impact of this choice on the computational complex-
ity of deciding consistency w.r.t. each of the different semantics. First we define
formally that ontologized states comprise a command perspective and a DL
perspective (see Figure 1).

Definition 2. Let P = ⟨P, K, I⟩ be an ontologized program as in Definition 1.
An ontologized state in P is a tuple q = ⟨ηq, Kq⟩, with command perspective ηq
and DL perspective Kq, where

 – ηq ∈ Eval(VP),
 – K ⊆ Kq ⊆ K ∪ F, and
 – α ∈ Kq iff ηq |= Dc(α), for every α ∈ F.

We call an ontologized state q inconsistent if Kq is inconsistent.

    For every evaluation η ∈ Eval(VP), there is always a unique KB Kη s.t.
⟨η, Kη⟩ is an ontologized state in P. We denote this state by s(P, η). To de-
fine updates on ontologized states, we exploit the fact that 1) updates can only
directly modify the command perspective of ontologized states, and 2) the DL
perspective of ontologized states is fully determined by the command perspec-
tive. Thus, we can easily lift the effect of updates on evaluations to updates
on ontologized states. For an update u ∈ Upd(VP) and an ontologized state
q, we define u(q) = s(P, u(ηq)). Correspondingly, we extend stochastic updates
σ : Upd(VP) → [0, 1] to ontologized states q by setting σ(q, u(q)) = σ(u) for all
u ∈ Upd(VP). To illustrate how commands are executed on ontologized states,
u ∈ Upd (VP ). To illustrate how commands are executed on ontologized states,
we consider the ontologized state q1 shown in Figure 1 and the abstract com-
mand in (2). First, we note that the guard in the command is active, since
the hook migrateServer2 is active in q1 . The command can thus be executed
on the state, and executes each of its updates with a 50% chance. For the
first update, server proc1 := 1, we obtain the ontologized state q2 in which
server proc1 = 2 is replaced by server proc1 = 1, and runs(s2, p1) is replaced
by runs(s1, p1). Consequently, the hook migrateServer2 becomes inactive in q2,
since cD(migrateServer2) = {OverloadedServer(s2)} and OverloadedServer(s2)
is not entailed by the DL perspective of q2.
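The step from q1 to q2 can be replayed in a few lines: since the DL perspective is fully determined by the command perspective, u(q) = s(P, u(η_q)) just updates the evaluation and recomputes the fluents. In this sketch, axioms are plain strings, each Dc(α) is a Python predicate standing in for its Boolean expression in B(V_I), and the variable names (with underscores) are our own encoding; a real implementation would additionally need a DL reasoner to evaluate hooks such as migrateServer2, which we omit here.

```python
# Static knowledge K and fluents with their Dc mappings (all strings
# and predicates below are illustrative stand-ins).
K = {"OverloadedServer EQUIV Server AND >=3 runs.Process"}
Dc = {
    "runs(s1,p1)": lambda eta: eta["server_proc1"] == 1,
    "runs(s2,p1)": lambda eta: eta["server_proc1"] == 2,
    "runs(s2,p2)": lambda eta: eta["server_proc2"] == 2,
}

def s(eta):
    """s(P, eta): the unique ontologized state with command perspective
    eta; its DL perspective is K plus all fluents alpha with eta |= Dc(alpha)."""
    return (dict(eta), K | {a for a, holds in Dc.items() if holds(eta)})

def apply_update_to_state(q, u):
    """u(q) = s(P, u(eta_q)): apply the update to the command
    perspective and recompute the DL perspective from scratch."""
    eta, _ = q
    new_eta = {v: u[v](eta) if v in u else eta[v] for v in eta}
    return s(new_eta)

q1 = s({"server_proc1": 2, "server_proc2": 2})
q2 = apply_update_to_state(q1, {"server_proc1": lambda e: 1})
```

After the update, the fluent runs(s2,p1) is gone from the DL perspective of q2 and runs(s1,p1) appears instead, mirroring the example above.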

4.1     Consistency-Independent Semantics

We first present the semantics for ontologized programs originally introduced
in [6]. This semantics does not a priori give inconsistent ontologized states a spe-
cial meaning, hence the name consistency-independent semantics. The definition
of the MDP induced by an ontologized program under consistency-independent
semantics closely follows the one for MDPs induced by plain stochastic programs.


Definition 3. Let P = ⟨P, K, I⟩ be an ontologized program as in Definition 1.
The MDP induced by P under consistency-independent semantics is given by
Mci[P] = ⟨Q, Act, P, q0, Λ, λ⟩, where

 – Q = {s(P, η) | η ∈ Eval(VP)},
 – Act = Distr(Upd(P)),
 – q0 = s(P, η0),
 – Λ = HP ∪ C(VP),
 – λ(q) = C(ηq) ∪ {ℓ ∈ HP | Kq |= cD(ℓ)} for every q ∈ Q, and
 – P(q, σ) = σ(q) for all ⟨g, σ⟩ ∈ CP with λ(q) |= g.

P is inconsistent if there exists an inconsistent q ∈ Q reachable from q0 .
    We can test consistency of ontologized programs using a reachability analysis
on the state space. For this, we generate states one after the other, using DL
reasoner calls for each state to determine its active hooks and its consistency.
As each state requires only polynomial space, this can be done in polynomial
space with a k-oracle, where k is the complexity of entailment in the DL. A cor-
responding lower bound can be obtained by a reduction from the word problem
for polynomially space-bounded Turing machines with a k-oracle. The construction
used here shows a close relationship between Turing machines with a k-oracle and
L-ontologized programs, provided that L is k-hard. This relationship is also used
for later complexity results presented in this paper.
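This reachability test can be sketched as an explicit-state search. Note that the polynomial-space upper bound relies on an on-the-fly exploration that does not store the set of visited states; the breadth-first sketch below trades space for simplicity, and the `consistent` callback stands in for the DL reasoner oracle.

```python
from collections import deque

def program_is_consistent(initial, successors, consistent):
    """Consistency under consistency-independent semantics: the
    program is inconsistent iff some state reachable from the
    initial state is inconsistent (breadth-first search)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        q = frontier.popleft()
        if not consistent(q):
            return False
        for q2 in successors(q):
            if q2 not in seen:
                seen.add(q2)
                frontier.append(q2)
    return True
```

For instance, over states 0..4 with successor function q+1 mod 5, the program is inconsistent exactly if one of the five reachable states is flagged inconsistent.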

Theorem 1. Let L be a DL such that deciding entailment in L is k-complete.
Then, deciding consistency of L-ontologized programs is PSpace^k-complete, even
for non-stochastic programs.


4.2     Probability-Normalizing Semantics

If a stochastic program is inconsistent, consistency-independent semantics relies
on the program specification to deal with inconsistent states. On the one hand,
this can offer flexibility. On the other hand, it can be desirable to let the seman-
tics handle the situation by definition. Since inconsistent states have no corre-
spondence to states of the modeled system, they could stand for undesired states
that should not occur in the MDP at all. The idea of probability-normalizing
semantics is to remove inconsistent states, as well as states from which only in-
consistent states are reachable, and to locally normalize the probabilistic choices
accordingly. Under this semantics, the
ontology serves an additional purpose: not only does it assign meaning to the
hooks, it also specifies which program states are possible, and thus restricts the

transitions from a given state. This allows for a further separation of concerns,
and thus for reusability of the behavioral and the DL component of the program:
we may use the same behavioral component with different ontologies that put
different constraints on the state space. In the running server example, we may
want to analyze a system in which the servers have different software and/or
hardware configurations, such that some processes cannot run on all servers. For
instance, we might use a different ontology that specifies that Process 1 cannot
run on Server 3. This restricts the possible outcomes of applying the command
in (2) on the state in Figure 1 (with the modified ontology). It can now only move
Process 1 to Server 1, so that this migration is performed with 100% probability.
    To achieve this behavior under consistency-independent semantics, we would
have to extend the commands of the program with a case distinction over all
possible successors that are inconsistent states, which is clearly undesired in the above
example. Instead, we capture the restriction to possible states of the MDP by
defining probability-normalizing semantics for ontologized programs. Intuitively,
under probability-normalizing semantics, the induced MDP is obtained from
the MDP under consistency-independent semantics by removing inconsistent
states and then normalizing the remaining probabilities. It is possible that for
some state q and command ⟨g, σ⟩ with λ(q) |= g such a normalization is not
possible, since all successor states that should have a positive probability are
inconsistent or were themselves removed. In this case, the command cannot be
applied in q. If no command can be applied in a state anymore, this state is
removed from the MDP as well.

Definition 4. Let Mci[P] = ⟨Q, Act, P, q0, Λ, λ⟩ for an ontologized program P.
An MDP M is probability-normalizing w.r.t. P if q0 is consistent and M =
⟨Q′, Act, P′, q0, Λ, λ′⟩, where
1. Q′ ⊆ Q contains only consistent states,
2. λ′ is obtained from λ by restricting its domain to Q′,
3. for all q ∈ Q′ and σ ∈ Act s.t. P′(q, σ) is defined, we have for all q′ ∈ Q′:

                   P(q, σ, q′) = P′(q, σ, q′) · Σ_{q̂ ∈ Q′} P(q, σ, q̂).

P is called probability-normalizable if there exists a probability-normalizing MDP
w.r.t. P. In case Q′ and P′ are subset-maximal, M is called the MDP
induced by P under probability-normalizing semantics, denoted Mpn[P].

Recall that according to our definition, MDPs cannot have final states. There-
fore, to obtain the probability-normalizing MDP, it does not suffice to remove
inconsistent states; states that would only lead to inconsistent states have to be
removed as well. As this operation might lead to a removal of the initial state,
not every ontologized program is probability-normalizable.
While for consistency-independent semantics consistency of a program can be
decided by determining reachability of inconsistent states, we have to solve a
complementary task for probability-normalizability. Specifically, we have to de-
cide whether one can find an unbounded path that never enters an inconsistent

state, for which again polynomial space and an oracle for the reasoner is suffi-
cient. As a result, deciding probability-normalizability has the same complexity
as deciding consistency under consistency-independent semantics.
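For a finite, explicitly given MDP, the normalizing construction (and with it the normalizability check) can be sketched as a fixpoint computation: drop inconsistent states, renormalize each distribution over the surviving successors, drop commands whose surviving probability mass is zero, remove states left without an enabled command, and iterate until stable. The dict encoding and the `consistent` callback (a stand-in for the DL reasoner) are our own illustration, not the paper's implementation.

```python
def probability_normalize(P, states, q0, consistent):
    """Return (Q', P') of a probability-normalizing MDP, or None if
    the initial state is removed, i.e., the program is not
    probability-normalizable.  P maps (state, action) pairs to
    distributions over successor states."""
    Q = {q for q in states if consistent(q)}
    while True:
        P2 = {}
        for (q, a), dist in P.items():
            if q not in Q:
                continue
            mass = sum(p for t, p in dist.items() if t in Q)
            if mass > 0:  # otherwise the command is not applicable here
                P2[(q, a)] = {t: p / mass for t, p in dist.items() if t in Q}
        alive = {q for (q, a) in P2}  # states with an enabled command
        if alive == Q:
            break
        Q = alive  # removing dead states may change normalization again
    return (Q, P2) if q0 in Q else None
```

In an example where a command from state A reaches a consistent state B and an inconsistent state C with probability 1/2 each, the surviving branch is rescaled to probability 1, matching the server-migration discussion above; if every branch from the initial state is inconsistent, the result is None.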

Theorem 2. Let L be a DL for which deciding entailment has complexity k.
Then, deciding whether an L-ontologized program is probability-normalizable is
PSpace^k-complete, even for non-stochastic programs.

4.3   Probability-Preserving Semantics
If we use probability-normalizing semantics, it is possible that the probabilities
specified in the commands change. This is the right choice if the program uses
probabilities to model randomized behavior as in our running example: the pro-
gram migrates a process as soon as this becomes necessary, and it does so by
choosing each possible server with equal probability. If the ontology restricts the
number of possible choices, then consequently the probabilities need to change
as well. However, there are other scenarios where a change of probabilities is not
desired: rather than using stochastic commands to model randomized behavior,
the program component may use stochastic commands to describe probabilistic
outcomes of the simulated system that are based on measured probabilities. For
instance, we might know that at any point in time the server has a probability
of 1% of becoming disfunct and needs a restart. This probability should not be
affected by the specifications: there might be a program state in which a disfunct
server would necessarily lead to an inconsistency in the future. This should not
mean that the server cannot become disfunct and that the current program state
should be possible by definition. To capture such phenomena in a semantics, we
introduce probability-preserving semantics.

Definition 5. Let Mci[P] = ⟨Q, Act, P, q0, Λ, λ⟩ for an ontologized program P.
An MDP M is probability-preserving w.r.t. P if q0 is consistent and M =
⟨Q′, Act, P′, q0, Λ, λ′⟩, where

1. Q′ ⊆ Q contains only consistent states,
2. λ′ is obtained from λ by restricting its domain to Q′, and
3. for all P′(q, σ, q′) > 0 we have that P′(q, σ, q′) = P(q, σ, q′).

P is called probability-preservable if there exists a probability-preserving MDP
w.r.t. P. In case Q′ and P′ are subset-maximal, M is called the MDP
induced by P under probability-preserving semantics, denoted Mpp[P].

Note that Definition 5 requires M to be an MDP, and hence P′ is a probabilistic
transition function. Consequently, Condition 3 implies that any transition that
reaches an inconsistent state with positive probability w.r.t. P has no corresponding
transition in P′.
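Analogously to the normalizing case, a probability-preserving MDP for a finite, explicit MDP can be computed by a simple fixpoint: a transition survives only if its entire distribution stays inside Q′ (probabilities are never rescaled), and states left without an enabled command are removed. Again a sketch under our own dict encoding; the `consistent` callback abstracts the DL reasoner.

```python
def probability_preserve(P, states, q0, consistent):
    """Return (Q', P') of a probability-preserving MDP, or None if
    the initial state is removed (the program is then not
    probability-preservable).  Following Condition 3, a transition
    whose distribution assigns positive probability to a removed
    state is dropped entirely rather than renormalized."""
    Q = {q for q in states if consistent(q)}
    while True:
        P2 = {(q, a): dist for (q, a), dist in P.items()
              if q in Q and all(t in Q for t in dist)}
        alive = {q for (q, a) in P2}
        if alive == Q:
            break
        Q = alive
    return (Q, P2) if q0 in Q else None
```

With a 1%-failure command as in the server example, the whole command disappears as soon as its failure branch leaves Q′, instead of being rescaled into a 100%-success command as under probability-normalizing semantics.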
   To verify whether a program is probability-preservable, we have to take the
non-deterministic and the probabilistic choices of the program into account dif-
ferently. This differs from the other notions of consistency discussed here, for
which a single path, traversing non-deterministic and probabilistic choices indif-
ferently, suffices to witness inconsistency or probability-normalizability, respec-
tively. We can characterize probability-preservability as follows: a program is
probability-preservable iff there exists a resolution of the non-deterministic
choices in the program such that for all probabilistic choices an inconsistent
state is never reached. Thus, testing this property requires both existential and
universal explorations of the state space, which is why we can use alternating
Turing machines with polynomial space to solve the corresponding problem.

Theorem 3. Let L be a DL for which deciding entailment has complexity k.
Then, deciding whether an L-ontologized program is probability-preservable is
APSpace^k-complete. For non-stochastic programs, it is PSpace^k-complete.

   Recall that APSpace = ExpTime (see, for example, Corollary 2 to Theorem
16.5 and Corollary 3 to Theorem 2.20 in [11]). However, as an exponential-time
Turing machine may also write exponentially long queries to the oracle, we may
not always have APSpacek = ExpTimek .

4.4   Ontology-mediated PMC
In [6] we presented a technique to apply PMC techniques to ontologized pro-
grams in order to perform quantitative analysis tasks on knowledge-intensive
systems. For this, we treated inconsistent states as consistent ones, following
the consistency-independent semantics presented in Section 4.1. Inconsistencies
in ontologized programs are not as harmful as in other logic-based formalisms,
since stochastic properties can also be evaluated on MDPs with inconsistent
states. In fact, inconsistent states can even have a designated meaning, e.g.,
modeling exceptions the program has to face. An advantage of ontologized
programs is that commands can be used to detect when a program is in an
inconsistent state. To this end, a guard employing a hook inc that is satisfied in
exactly those states that are inconsistent, e.g., with cD(inc) = (⊤ ⊑ ⊥), can
invoke the actions to be taken when an exception has occurred. Note that while
all inconsistent states agree on the hooks that are active (it is always the
complete set of hooks), the command perspective of states can still differ. Here,
private variables may encode additional information about the current state
that can be used to decide how to react to inconsistencies.
An example for ontology-mediated PMC under consistency-independent semantics
is to decide whether

                   Pmin (□(inc → (X¬inc ∨ XX¬inc))) > 95%.

Intuitively, this states that an exception is always successfully handled with
high probability of at least 95%, in the sense that whenever an inconsistent state
is reached, a consistent state is reached again within at most two steps. Such
a stochastic property can be checked using our method from [6] and the PMC
tool Prism [10].
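To illustrate what such a property measures, the sketch below computes, for a single state of an explicitly given MDP, the minimal probability (over all resolutions of the non-determinism) of reaching a consistent state within at most two steps; checking this value against the 95% threshold for every reachable inconsistent state is a per-state simplification of the PCTL property above. All identifiers are illustrative assumptions, not part of our framework.

```python
def min_recovery_within_two(trans, consistent, q):
    """Minimal probability, over all resolutions of non-determinism,
    of reaching a consistent state from q within at most two steps.
    trans[q] maps each action to a distribution {successor: probability}."""

    def step_prob(q, depth):
        if consistent(q):
            return 1.0           # already recovered
        if depth == 0 or not trans.get(q):
            return 0.0           # out of steps (or deadlock) while inconsistent
        # adversarial scheduler: take the worst enabled action
        return min(sum(p * step_prob(s, depth - 1) for s, p in dist.items())
                   for dist in trans[q].values())

    return step_prob(q, 2)
```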
    Ontology-mediated PMC for probability-normalizing and probability-preserving
semantics is not as straightforward as under consistency-independent semantics. While
12     Clemens Dubslaff, Patrick Koopmann, and Anni-Yasmin Turhan

we could obtain a stochastic program that cannot reach inconsistent states
even under consistency-independent semantics, this requires an explicit handling
of inconsistencies and modifications in the program that “provision” for the pos-
sible variable evaluations. More specifically, we would then need the possibility to
decide for each update in a command whether it would lead to an inconsistent
state or not. While this works out well in our running example, we cannot
expect such an encoding for arbitrary ontologized programs. Fortunately,
Definitions 4 and 5 are constructive, i.e., these consistency-dependent semantics
are obtained from the consistency-independent one by manipulating the state
space and the probabilistic transition function. Hence, we can restrict the state
space to consistent states and adjust the remaining states and probabilities
on-the-fly during the construction of the MDP.
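Such an on-the-fly construction for the probability-normalizing case might look as follows: during a standard forward exploration, inconsistent successors are dropped and the remaining probability mass of each command is rescaled. This is only a sketch under the assumption of an explicit successor function `enabled` and a consistency oracle `consistent`; both names are illustrative.

```python
from collections import deque

def build_normalized(q0, enabled, consistent):
    """Explore reachable consistent states; for each enabled command,
    drop inconsistent successors and renormalize the remaining mass.
    enabled(q) yields (action, {successor: probability}) pairs."""
    assert consistent(q0), "initial state must be consistent"
    mdp, frontier, seen = {}, deque([q0]), {q0}
    while frontier:
        q = frontier.popleft()
        mdp[q] = {}
        for act, dist in enabled(q):
            kept = {s: p for s, p in dist.items() if consistent(s)}
            mass = sum(kept.values())
            if mass == 0:
                continue  # all successors inconsistent: the command is dropped
            mdp[q][act] = {s: p / mass for s, p in kept.items()}
            for s in kept:
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
    return mdp
```

Only consistent states are ever enqueued, so the consistency oracle is queried at most once per reachable state.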


5    Outlook

We are currently implementing a scenario to evaluate our implementation for
ontology-mediated PMC under the different semantics presented in this pa-
per. Here, we can benefit from the constructive definition of our semantics,
which evaluates inconsistent states beforehand and could hence be directly in-
tegrated into the model-building process, e.g., of the probabilistic model checker
Prism [10].
There are some theoretical results that are still open but should be easy
to obtain: the complexity of the actual model-checking tasks, and the size of the
rewritings that our practical method produces. Furthermore, there are some ex-
tensions and modifications worth investigating: our semantics offer two possible
ways of treating probabilities occurring in the program to “restore” consistency.
The choice between them depends on the kind of stochastic phenomena
actually modeled. It is easy to think of scenarios where both kinds of stochastic behavior
occur. This would require a more flexible type of ontologized programs. In such
a program, some distributions may be marked as “fixed”, while others could
be subject to normalization. In our framework, modifications of program states
happen on the level of DL axioms and thus affect all models similarly to [4,8].
Another approach is given by DL action-based programs [1,14], where modifications of
states affect the level of interpretations. Hence, a variant of ontologized programs
where states are associated not with ontologies but with interpretations, similar
to [1,14], is certainly an interesting direction for future research.


Acknowledgements

The authors are supported by the DFG through the Collaborative Research
Center TRR 248 (see https://perspicuous-computing.science, project ID
389792660), the Cluster of Excellence EXC 2050/1 (CeTI, project ID 390696704,
as part of Germany’s Excellence Strategy), and the Research Training Groups
QuantLA (GRK 1763) and RoSI (GRK 1907).
References
 1. Baader, F., Zarrieß, B.: Verification of Golog programs over description logic ac-
    tions. In: Proceedings of FroCos 2013. LNCS, vol. 8152, pp. 181–196. Springer
    (2013)
 2. Baier, C., Katoen, J.P.: Principles of Model Checking. MIT Press (2008)
 3. Baier, C., Dubslaff, C., Klüppelholz, S.: Trade-off analysis meets probabilistic
    model checking. In: Proc. of the 23rd Conference on Computer Science Logic and
    the 29th Symposium on Logic In Computer Science (CSL-LICS). pp. 1:1–1:10.
    ACM (2014)
 4. Calvanese, D., De Giacomo, G., Lenzerini, M., Rosati, R.: Actions and programs
    over description logic knowledge bases: A functional approach. In: Knowing, Rea-
    soning, and Acting: Essays in Honour of H. J. Levesque. College Publications (2011)
 5. Dijkstra, E.W.: A Discipline of Programming. Prentice-Hall (1976)
 6. Dubslaff, C., Koopmann, P., Turhan, A.: Ontology-mediated probabilistic model
    checking. In: Ahrendt, W., Tarifa, S.L.T. (eds.) Integrated Formal Methods - 15th
    International Conference, IFM 2019. LNCS, vol. 11918, pp. 194–211. Springer
    (2019). https://doi.org/10.1007/978-3-030-34968-4_11
 7. Forejt, V., Kwiatkowska, M.Z., Norman, G., Parker, D.: Automated verification
    techniques for probabilistic systems. In: Proc. of the School on Formal Methods for
    the Design of Computer, Communication and Software Systems, Formal Methods
    for Eternal Networked Software Systems (SFM’11). LNCS, vol. 6659, pp. 53–113.
    Springer (2011)
 8. Hariri, B.B., Calvanese, D., Montali, M., De Giacomo, G., De Masellis, R., Felli,
    P.: Description logic knowledge and action bases. J. Artif. Intell. Res. 46, 651–686
    (2013)
 9. Jifeng, H., Seidel, K., McIver, A.: Probabilistic models for the guarded command
    language. Science of Computer Programming 28(2), 171–192 (1997)
10. Kwiatkowska, M.Z., Norman, G., Parker, D.: PRISM 4.0: Verification of proba-
    bilistic real-time systems. In: Proc. of the 23rd Intern. Conf. on Computer Aided
    Verification (CAV’11). LNCS, vol. 6806, pp. 585–591 (2011)
11. Papadimitriou, C.H.: Computational complexity. Academic Internet Publ. (2007)
12. Pnueli, A.: The temporal logic of programs. In: 18th Annual Symposium on Foun-
    dations of Computer Science, Providence, Rhode Island, USA, 31 October - 1
    November 1977. pp. 46–57 (1977). https://doi.org/10.1109/SFCS.1977.32
13. Puterman, M.: Markov Decision Processes: Discrete Stochastic Dynamic Program-
    ming. John Wiley & Sons, Inc., New York, NY (1994)
14. Zarrieß, B., Claßen, J.: Verification of knowledge-based programs over description
    logic actions. In: IJCAI. pp. 3278–3284. AAAI Press (2015)
A     Appendix
We provide here the proofs of our results that were omitted from the main paper
due to space constraints.

A.1   Auxiliary Lemmas for the Hardness Results
We define the size of a concept/axiom/ontology in the usual way as its string
length, that is, the number of symbols needed to write it down, where concept,
role and individual names count as one symbol each, as do logical operators, and
numbers in number restrictions are encoded in unary.
    The following auxiliary lemma will be helpful for the hardness results, as it
shows how to represent ontologies based on a polynomially bounded enumeration
of DL axioms. We first need a more liberal definition of conservative extensions.

Definition 6. A renaming is a bijective function σ : NC ∪NR ∪NI → NC ∪NR ∪NI
s.t. σ(X) ∈ NC iff X ∈ NC , σ(X) ∈ NR iff X ∈ NR and σ(X) ∈ NI iff X ∈ NI .
Given an ontology O and a renaming σ, we denote by Oσ the result of replacing
every name X in O by σ(X).
    Let O1 and O2 be two ontologies. Then, O1 is a conservative extension of O2
modulo renaming iff there exists some renaming σ s.t. for every axiom α using
only names in O2 , O1 σ |= α iff O2 |= α.
Lemma 1. There exists a polynomial function p s.t. for every n ∈ N, there exists
a polynomially computable function π from indices {0, . . . , p(n)} to SROIQ
axioms s.t. for every ontology O of size at most n, we can compute in time
polynomial in the size of O a set I ⊆ {0, . . . , p(n)} s.t. {π(i) | i ∈ I} is a conservative
extension of O modulo renaming.
Proof. Let n ∈ N be our bound on ontology sizes. Using standard structural
transformations, we can compute for every SROIQ ontology O in polynomial
time a normalized ontology that is a conservative extension of O and in which
every axiom has one of the following forms:

1. A1 ⊑ {a1},              5. ≤i r1.A1 ⊑ A2,          9. r1(a1, a2),
2. A1 ⊓ A2 ⊑ A3,           6. A1 ⊑ ∃r.Self,          10. r1 ⊑ r2,
3. A1 ⊑ A2 ⊔ A3,           7. ∃r.Self ⊑ A1,          11. r1 ◦ r2 ⊑ r3,
4. A1 ⊑ ≤i r1.A2,          8. A1(a1),                12. Disj(r1, r2),

    where A1, A2, A3 ∈ NC ∪ {⊤, ⊥}, r1, r2, r3 ∈ NR, i ∈ N, and a1, a2 ∈ NI.
Let m be the maximal size of any ontology that is the result of normalizing
an ontology of size n. Note that m is polynomial in n. The number of concept
names, role names and individuals occurring in any normalized ontology O of
size m is bounded by m, and every number occurring in O is bounded by m as
well. This means, we can pick a set

                  X = {X1, . . . , Xm} ⊆ NC ∪ {⊤, ⊥} ∪ NR ∪ NI
s.t. for every normalized ontology O of size m there exists a renaming σ s.t.
Oσ uses only names from X. As the resulting ontology is still normalized, every
such ontology uses only axioms from the set

  A = {α | α is normalized and uses only names from X and numbers ≤ m}.

As there are 12 axiom types, and each uses at most 3 names from X and one
number, the number of axioms in A is bounded by p(n) = 12 · |X|³ · m = 12m⁴,
which is polynomial in n because m is polynomial in n.
    The function π required can then, for example, be defined as follows: for i ∈
{0, . . . , p(n)}, we define π(i) = α, where
1. α is of the form (i mod 12) as above,
2. for a ∈ {1, 2, 3}, Xj is the a-th name occurring in α, where
   j = ⌊i / (12 · m^(a−1))⌋ mod m, and
3. if α is of the form (4) or (5), i.e., it contains a number, then this number is
   j = ⌊i / (12 · m³)⌋ mod m.


    For every ontology O, we obtain the index set I as required by first normal-
izing O, renaming it accordingly, and then assigning the indices.             □

   Using an index function π as in Lemma 1, we can define a special type of
oracle Turing machines called (alternating) Turing machines with DL oracles.
Such a Turing machine uses an oracle tape with tape alphabet {0, 1, b} (b being
the blank symbol), and is specified using an enumeration π of DL axioms as
above, and some axiom α. Based on the current oracle tape content, we obtain
an index set I that contains the number i iff the oracle tape contains a 1 at
position i. The oracle then answers yes if {π(i) | i ∈ I} |= α, and no otherwise.
For a given DL L, we may additionally require that the oracle only accepts
inputs for which the corresponding ontology is in L, in which case we call the
Turing machine a Turing machine with L-oracle for π, α.

Lemma 2. Let k be a complexity class and L a DL for which deciding entailment
is k-hard. Then, for every Turing machine T with k-oracle and bound n on the
size of the oracle tape, we can construct in time polynomial in n and the size of
T a Turing machine T′ with L-oracle for some axiom enumeration π and axiom
α that accepts the same set of words with at most polynomial overhead.

Proof. We first argue that it suffices to focus on entailment of axioms of the
form α = A(a), since entailment of CIs can be reduced to it. Specifically, for any
ontology O and CI C ⊑ D, we have O |= C ⊑ D iff O ∪ {C(a), D ⊑ A} |= A(a),
where a and A do not occur in O. Because entailment in L is k-hard, we can
construct for every query to the k-oracle in polynomial time an ontology O that
entails α iff the query should return true. Because the oracle tape is bounded,
we can use Lemma 1 to translate that ontology into a corresponding index set
to serve as input for the L-oracle. T′ thus proceeds as T, but before sending
a query to the oracle, it reduces the query to the entailment of the axiom α
from an ontology O, which it then translates into a query to the L-oracle.    □
A.2    Consistency-Independent Semantics

Theorem 1. Let L be a DL such that deciding entailment in L is k-complete.
Then, deciding consistency of L-ontologized programs is PSpacek -complete, even
for non-stochastic programs.

Proof. Let M[P] be the MDP induced by some ontologized program P as in
Definition 3, where K is expressed in L. For the upper bound, we specify a
non-deterministic PSpace algorithm that, starting from the initial state, ex-
plores all reachable states in M[P] and checks whether the current state
is inconsistent. Specifically, for each current variable evaluation in P, we non-
deterministically select a command in PP whose guard is satisfied. Then, we
non-deterministically select an update that has positive probability in the dis-
tribution of the command and apply this update to the current evaluation. In
each step, we use a k-oracle to determine whether the current state is consistent
and which hooks are active. As each state can be stored in polynomial space,
the algorithm runs in PSpacek .
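Determinized into an explicit search, this upper-bound procedure is a plain reachability check for inconsistent states. A minimal Python sketch, where the representation of `commands` and the predicate `consistent` (standing in for the k-oracle) are illustrative assumptions:

```python
def reaches_inconsistency(eta0, commands, consistent):
    """Depth-first search over variable evaluations: from eta0, fire any
    enabled command and follow every update with positive probability.
    commands is a list of (guard, updates) pairs, where guard(eta) is a
    Boolean and updates(eta) yields successor evaluations; consistent
    stands in for the k-oracle of the proof."""
    stack, seen = [eta0], {eta0}
    while stack:
        eta = stack.pop()
        if not consistent(eta):
            return True            # found a reachable inconsistent state
        for guard, updates in commands:
            if guard(eta):
                for succ in updates(eta):
                    if succ not in seen:
                        seen.add(succ)
                        stack.append(succ)
    return False
```

The explicit `seen` set trades the polynomial space of the non-deterministic algorithm for worst-case exponential space, which is acceptable on concrete finite programs.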
    For the lower bound, we provide a polynomial reduction of the acceptance
problem for polynomially space-bounded, deterministic Turing machines T with
k-oracle. Let
                     T = ⟨Q, Γ, γ0, Γi, Γo, q0, F, δ, q?, q+, q−⟩
be such a Turing machine, where

 – Q is the set of states,
 – Γ = {γ0, . . . , γm} is the tape alphabet,
 – γ0 is the blank symbol,
 – Γi ⊆ Γ is the input alphabet,
 – Γo is the oracle tape alphabet,
 – q0 is the initial state,
 – F ⊆ Q is the set of accepting states,
 – δ : (Q \ (F ∪ {q?})) × Γ × Γo → Q × Γ × {−1, 0, +1} × Γo × {−1, 0, +1}
   is the transition function (which works on two tapes, the standard tape and
   the oracle tape),
 – q? ∈ Q \ F is the query state used to query the oracle, and
 – q+, q− ∈ Q \ F are the query answer states.

The Turing machine uses two polynomially bounded tapes: one standard tape
and one oracle tape. In every state that is non-final and not the query state, δ
defines the successor configuration of T. Whenever the current state is q?, the
oracle is invoked, leading T to enter q+ in the next step if the oracle accepts the
content of the oracle tape, and to enter q− otherwise. Based on T and the input
word w = σ0 . . . σl, we build an ontologized program PT,w s.t. PT,w is inconsistent
iff T accepts w.
    Since entailment in L is k-hard, we can use Lemma 2 to construct a polyno-
mially space-bounded Turing machine T′ with L-oracle for π, A(a) that accepts
w iff so does T . Specifically, we obtain a set Fq of DL axioms, polynomially
bounded in |w|, a mapping π : {0, . . . , r(|w|)} 7→ Fq , where r is a polynomial,

                  T 0 = hQ0 , Γ, γ0 , Γi , {0, 1}, q0 , F 0 , δ 0 , q? , q+ , q− i,

where Q0 = {q0 , . . . , qn }, and T 0 accepts the same language as T , is still poly-
nomially space-bounded, but uses a different oracle. If T 0 is in state q? , and the
oracle tape contains the word σ0 . . . σr(|w|) , the Turing machine moves into the
state q+ iff {π(i) | 0 ≤ i ≤ r(|w|), σi = 1} |= A(a), and otherwise moves to q− .
Based on this Turing machine, we construct the ontologized program PT,w .
    Let p be a polynomial s.t. T′ uses at most p(|w|) tape cells on input w. The
program uses the following variables:

 – state ∈ {0, . . . , n} stores the state of the Turing machine,
 – for 0 ≤ i ≤ p(|w|), dtape i ∈ {0, . . . , m} stores the letter on the default tape
   at position i,
 – for 0 ≤ i ≤ r(|w|), otape i ∈ {0, 1} stores the letter on the oracle tape at
   position i,
 – dpos ∈ {0, . . . , p(|w|)} stores the current position on the default tape,
 – opos ∈ {0, . . . , r(|w|)} stores the current position on the oracle tape.

We use a single hook H = {oracle yes} standing for the positive oracle answer.
For every
                  ⟨⟨qi1, γi2, γi3⟩, ⟨qi4, γi5, Dird, γi6, Diro⟩⟩ ∈ δ′,
every i ∈ {0, . . . , p(|w|)} and every j ∈ {0, . . . , r(|w|)}, we use the following
non-stochastic command:

  (state = i1 ∧ dpos = i ∧ dtape i = i2 ∧ opos = j ∧ otape j = i3)
    ↦ (state := i4, dtape i := i5, dpos := dpos + Dird,
       otape j := i6, opos := opos + Diro)                                  (3)

   To handle the behavior of the oracle, we further add the following commands,
where q? = qi1, q+ = qi2 and q− = qi3:

                  state = i1 ∧ oracle yes    ↦   state := i2                (4)
                  state = i1 ∧ ¬oracle yes   ↦   state := i3                (5)

   The initial state ηP,0 of the program is the evaluation state = 0, dtape i =
wi for i ∈ {0, . . . , l}, dtape i = 0 for i ∈ {l + 1, . . . , p(|w|)}, otape i = 0 for
i ∈ {0, . . . , r(|w|)}, and dpos = opos = 0. This completes the definition of the
program component P in PT,w.
   As DL axioms, we set the static DL knowledge to K = ∅ and the fluents to
F = Fq ∪ {⊤ ⊑ ⊥}. It remains to specify the interface functions Dc and cD. For
Dc, we set for every i ∈ {0, . . . , r(|w|)},

                              Dc(π(i)) = (otape i = 1)
to translate the oracle calls, and

                     Dc(⊤ ⊑ ⊥) = ( ⋁_{qi ∈ F} state = i ),

to encode that the ontologized states that correspond to accepting states in the
Turing machine are inconsistent. Finally, we specify cD by setting

                           cD(oracle yes) = {A(a)}.

It is standard to verify that the Turing machine T′ accepts w iff the ontologized
program PT,w is inconsistent, i.e., can reach an inconsistent state. Since T ac-
cepts w iff so does T′, this completes the reduction.                          □

A.3    Probability-Normalizing Semantics
Theorem 2. Let L be a DL for which deciding entailment has complexity k.
Then, deciding whether an L-ontologized program is probability-normalizable is
PSpacek -complete, even for non-stochastic programs.

Proof. For the upper bound, note that the only way for P to be inconsistent un-
der probability-normalizing semantics is if all probabilistic choices would lead to
an inconsistent state in M[P]. We can thus use a non-deterministic PSpacek
algorithm similar to the one described in the proof of Theorem 1 to determine
whether a program is probability-normalizable: instead of testing the current
state for inconsistency, we here test for consistency. Note that it is sufficient
to find a single execution path that never reaches an inconsistent state for an on-
tologized program to be probability-normalizable. This execution path might be
infinite; however, because the state space is bounded, any infinite execution path
has to visit some state twice. We use a counter to put an exponential bound
on the number of states to be visited, and store a non-deterministically selected
state for later comparison. If this state repeats, we have found an unbounded
execution path that never visits an inconsistent state, and accept. If we reach
the bound without finding such a loop, we reject.
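On an explicit finite model, the counter-and-guess procedure amounts to searching for a lasso: a cycle of consistent states that is reachable from the initial state through consistent states only. A minimal Python sketch with an assumed explicit successor relation (all names illustrative):

```python
def has_consistent_lasso(q0, successors, consistent):
    """Return True iff some infinite execution from q0 avoids inconsistent
    states, i.e. a cycle of consistent states is reachable from q0 via
    consistent states only.  successors(q) yields all one-step successors
    (non-deterministic and probabilistic choices alike)."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {}

    def dfs(q):
        if not consistent(q):
            return False                   # this path dies here
        if color.get(q, WHITE) == GREY:
            return True                    # back edge: consistent cycle found
        if color.get(q, WHITE) == BLACK:
            return False                   # already fully explored, no cycle
        color[q] = GREY
        if any(dfs(s) for s in successors(q)):
            return True
        color[q] = BLACK
        return False

    return dfs(q0)
```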
    For the lower bound, we note that the reduction used in the proof of Theo-
rem 1 can also be used for probability-normalizability. In that reduction, we re-
duce the word problem of deterministic, polynomially space-bounded Turing ma-
chines with k-oracle to consistency of ontologized programs under consistency-
independent semantics. Inspection of the construction reveals that the con-
structed program PT,w is not only non-stochastic, but even deterministic, by
which we mean that in every ontologized state, there is at most one command
that can be executed. Specifically, there is exactly one Command (3) in the proof
of Theorem 1 for each assignment to the variables state, dpos, opos, dtape i
and otape j, which means that in each ontologized state, at most one of
these commands is executable. Moreover, for the assignment state = i1, where
qi1 = q? denotes the oracle query state, there is no such command, as there
is no transition in the Turing machine for those states, and exactly one of the
Commands (4) and (5) is executable if state = i1, unless the state is incon-
sistent. We obtain that if some path in M[PT,w] leads from the initial state to
an inconsistent state, then all paths lead to an inconsistent state, in which case
the program is not probability-normalizable. Consequently, PT,w is probability-
normalizable iff PT,w is consistent, which is the case iff T does not accept w. Since
non-acceptance of words in polynomially space-bounded Turing machines with
k-oracle is also PSpacek -complete, we obtain that probability-normalizability is
PSpacek -hard.                                                                     □


A.4    Probability-Preserving Semantics

Theorem 3. Let L be a DL for which deciding entailment has complexity k.
Then, deciding whether an L-ontologized program is probability-preservable is
APSpacek -complete. For non-stochastic programs, it is PSpacek -complete.

Proof. Both bounds are via reductions from and to alternating Turing machines.
An alternating Turing machine can switch between ∃-non-deterministic and
∀-non-deterministic transitions. Acceptance is then defined inductively accord-
ing to the choices involved: a configuration is accepting if 1) it is in an accepting
state, 2) there exists an ∃-non-deterministic transition to a configuration that
is accepting, or 3) all ∀-non-deterministic transitions lead to an accepting
configuration.
     For the upper bounds, we use a modification of the algorithm used in the
proof of Theorem 1 that runs in alternating polynomial space and decides
probability-preservability of ontologized programs. Let P be an ontologized pro-
gram as in Definition 1, and Mci [P] = ⟨Q, Act, P, ι, Λ, λ, wgt⟩ the MDP induced
by P under consistency-independent semantics. The algorithm guesses the states
q ∈ Q to be included in the probability-preserving MDP one after the other,
starting from ι, and uses alternation to switch between the non-deterministic
and the probabilistic choices of the program. Specifically, we have to guess a set
of states Q′ s.t.: 1) no state in Q′ is inconsistent, 2) for every state in Q′ there is
some guarded command that can be executed in it, and 3) all successor states
into which this command could bring us with positive probability are included
in Q′. Our algorithm thus proceeds as follows based on the currently guessed
state q. If q is inconsistent, we reject. Otherwise, we non-deterministically se-
lect a guarded command ⟨g, σ⟩ ∈ CP s.t. q |= g (∃-non-determinism) and
continue on all states q′ ∈ Q with P(q, σ, q′) > 0 (∀-non-determinism). Each
state takes polynomial space, and we use a k-oracle to determine which hooks
are active and whether the state is consistent. Since this procedure can be
implemented by an alternating Turing machine with k-oracle that uses poly-
nomial space, probability-preservability can be decided in APSpacek . If the
program is non-stochastic, the ∀-non-determinism is not needed, and the algo-
rithm runs in PSpacek .
   For the lower bound, we adapt the reduction used in the proof of Theorem 1.
However, this time we reduce from the word acceptance problem for polynomially
space-bounded alternating Turing machines with k-oracle. Specifically, let

                     T = ⟨Q, Γ, γ0, Γi, Γo, q0, F, δ, q?, q+, q−, g⟩,

be such a Turing machine, where Q = {q0, . . . , qn}, Γ = {γ0, . . . , γm} is the
standard tape alphabet, γ0 ∈ Γi is the blank symbol, Γi ⊆ Γ is the input
alphabet, Γo is the oracle alphabet, q0 is the initial state, F ⊆ Q is the set of
accepting states,

     δ : (Q \ (F ∪ {q?})) × Γ × Γo → ℘(Q × Γ × {−1, +1} × Γo × {−1, +1})

is the transition function, the states q?, q+, q− ∈ Q manage the querying of the
oracle, and g : Q → {∀, ∃, accept, reject} partitions the states Q into univer-
sally quantified, existentially quantified, accepting, and rejecting states, respec-
tively. The definition of acceptance is standard; specifically, the oracle works
as in the proof of Theorem 1, and a configuration with state q is accepting
1) if g(q) = accept, 2) if g(q) = ∃ and one of the successor configurations is
accepting, or 3) if g(q) = ∀ and all successor configurations are accepting.
    Again, we use Lemma 2 to construct, based on T and the input word w, a
polynomially bounded set Fq of DL axioms, a mapping π : {0, . . . , r(|w|)} → Fq,
where r is a polynomial, and an alternating Turing machine

                 T′ = ⟨Q′, Γ, γ0, Γi, {0, 1, b}, q0, F′, δ′, q?, q+, q−, g′⟩

with an L-oracle for π, A(a), that is polynomially space-bounded and accepts
w iff so does T. If T′ is in state q? and the oracle tape contains the word
σ0 . . . σr(|w|), the Turing machine moves into the state q+ iff

                      {π(i) | 0 ≤ i ≤ r(|w|), σi = 1} |= A(a),

and otherwise moves to q−. Based on this Turing machine, we construct the
ontologized program PT,w in the same way as in the proof of Theorem 1, but
with a different set CP of commands.
   We first model the ∀-transitions. For every

                 t = ⟨qi1, γi2, γi3⟩ ∈ (Q \ (F ∪ {q?, q+, q−})) × Γ × Γo,

where g′(qi1) = ∀, every ⟨qi4, γi5, Dird, γi6, Diro⟩ ∈ δ′(t), every i ∈ {0, . . . , p(|w|)}
and every j ∈ {0, . . . , r(|w|)}, we use the command

  (state = i1 ∧ dpos = i ∧ dtape i = i2 ∧ opos = j ∧ otape j = i3)
    ↦ (state := i4, dtape i := i5, dpos := dpos + Dird,
       otape j := i6, opos := opos + Diro)

 Note that the program fails to be probability-preservable only if every command
that we can execute in an ontologized state leads to an inconsistency, which is
why we model ∀-transitions in this way. For the ∃-transitions, we make use of
the stochastic component. Let

              t1 = ⟨qi1, γi2, γi3⟩ ∈ (Q \ (F ∪ {q?, q+, q−})) × Γ × Γo,

where g′(qi1) = ∃. For every i ∈ {0, . . . , p(|w|)} and j ∈ {0, . . . , r(|w|)}, we add
the stochastic command

  (state = i1 ∧ dpos = i ∧ dtape i = i2 ∧ opos = j ∧ otape j = i3) ↦ σ,

where σ is the stochastic update such that for every

                        t2 = ⟨qi4, γi5, Dird, γi6, Diro⟩ ∈ δ′(t1),

we have

  σ(state := i4, dtape i := i5, dpos := dpos + Dird,
    otape j := i6, opos := opos + Diro) = 1 / |δ′(t1)|.

The corresponding transitions are excluded from the probability-preserving MDP
if there is a positive probability of reaching an inconsistent state. This is why
we can model ∃-transitions in this way.
    The stochastic commands that handle the use of the oracle are as in the proof
of Theorem 1. It is standard to show that the resulting program is probability-
preservable iff w is accepted by the Turing machine, so we obtain that
probability-preservability is APSpacek -hard.
    Now assume that g′(q) ≠ ∃ for every state q ∈ Q. Then T corresponds to
a coNPSpacek machine, and the ontologized program PT,w is non-stochastic.
As a consequence, deciding whether non-stochastic programs are probability-
preservable is coNPSpacek = PSpacek -hard.                                     □