<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Diroi</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Give Inconsistency a Chance: Semantics for Ontology-Mediated Verification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Clemens Dubslaff</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Patrick Koopmann</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anni-Yasmin Turhan</string-name>
        </contrib>
        <aff>Technische Universität Dresden, Dresden, Germany</aff>
      </contrib-group>
      <volume>2</volume>
      <issue>0</issue>
      <abstract>
        <p>Recently, we have introduced ontologized programs as a formalism to link description logic ontologies with programs specified in the guarded command language, the de-facto standard input formalism for probabilistic model checking tools such as Prism, to allow for an ontology-mediated verification of stochastic systems. Central to our approach is a complete separation of the two formalisms involved: the guarded command language is used to describe the dynamics of the stochastic system, while description logics are used to model background knowledge about the system in an ontology. In ontologized programs, these two languages are loosely coupled by an interface that mediates between these two worlds. Under the original semantics defined for ontologized programs, a program may enter a state that is inconsistent with the ontology, which limits the capabilities of the description logic component. We approach this issue in different ways by introducing consistency notions, and discuss two alternative semantics for ontologized programs. Furthermore, we present complexity results for checking whether a program is consistent under the different semantics.</p>
      </abstract>
      <kwd-group>
        <kwd>Description Logics</kwd>
        <kwd>Inconsistency</kwd>
        <kwd>Probabilistic Model Checking</kwd>
        <kwd>Ontology-Mediated Verification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Probabilistic model checking (PMC, see, e.g., [
        <xref ref-type="bibr" rid="ref2 ref7">2,7</xref>
        ] for surveys) is an automated
technique for the quantitative analysis of dynamic systems. PMC has been
successfully applied in many areas, e.g., to ensure that the system meets quality
requirements such as low error probabilities or an energy consumption within
a given bound. The de-facto standard specification language for dynamic
systems in PMC is given by stochastic programs, a probabilistic variant of
Dijkstra's guarded command language [
        <xref ref-type="bibr" rid="ref5 ref9">5,9</xref>
        ] used within many PMC tools such as
Prism [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Usually, the behavior described by a stochastic program is part of
a bigger system, or might even be used within a collection of systems that
have an impact on the operational behavior as well. (Copyright © 2020 for this
paper by its authors. Use permitted under Creative Commons License Attribution
4.0 International (CC BY 4.0).) There are different ways to model this using
stochastic programs. First, one could integrate additional
knowledge about the surrounding system directly into the stochastic program.
While this approach provides accurate PMC results, it has the disadvantage that
the guarded command language is not well-suited for describing complex static
knowledge. Alternatively, one can use non-determinism to over-approximate the
possible behaviors of the surrounding system. This approach has the
disadvantage that a best- and worst-case PMC analysis might lead to many possible
results that do not allow for a meaningful interpretation. As a third option, one
can use ontology-mediated PMC, an approach to integrate knowledge described
by a description logic (DL) ontology into the PMC process, which we recently
proposed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. At the core of this approach are ontologized (stochastic) programs,
which can be subjected to PMC.
      </p>
      <p>
        A central idea of ontologized programs is the complete separation of concerns
and formalisms: the operational behavior is described using a program component
expressed by guarded commands. Static knowledge about the system is described
in an ontology using DL formalisms. An interface mediates between these two
worlds by providing mappings from statements in guarded command language
to DL and vice versa. The loose coupling achieved in this way allows both
components to be reused and replaced easily, so that the same behavior can be
analyzed with different ontologies (to describe different system configurations),
and the same ontology can be used with different programs (e.g., to describe
different strategies of behavior to be analyzed within the same system). This
approach distinguishes our work from other approaches that use DLs for the
verification of dynamic systems, in which DL constructs are used directly within
the program [
        <xref ref-type="bibr" rid="ref1 ref14 ref4 ref8">1,4,8,14</xref>
        ].
To be able to analyze ontologized programs practically, we developed a
two-step procedure in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. First, the ontologized program and the property to be
analyzed are rewritten into a plain stochastic program and a modified property.
Then, these are evaluated using the PMC tool Prism. This way, essentially
all model-checking tasks supported by Prism can be performed on ontologized
programs, ranging from verification of standard properties (like the probability
or the expected time for reaching a failure state) to more advanced quantitative
analysis (cf. [
        <xref ref-type="bibr" rid="ref3 ref7">7,3</xref>
        ]). Our approach was implemented and evaluated in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Under the original semantics of ontologized programs in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], program
components are allowed to enter states that are inconsistent with the ontology. This
leaves the task of resolving the inconsistency to the program component by
implementing a sort of exception handling. However, inconsistent states may also
stand for situations that can never arise in practice. In this case, a semantics
that avoids inconsistent states when possible is favorable, leading to a
consistency notion for ontologized programs. In this paper, we answer the following
open questions: 1) how to handle inconsistent states in a meaningful way, and
2) what it means for an ontologized program to be inconsistent. We approach
these questions by presenting three possible semantics of ontologized programs:
consistency-independent, probability-normalizing, and probability-preserving
semantics. Each semantics specifies the behavior of the program in a different way,
and additionally comes with its own notion of consistency. For each of these
notions, we give tight complexity bounds for deciding consistency of programs.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Preliminaries</title>
      <p>We assume familiarity of the reader with the basics of description logics. The
results in this paper apply to different DLs, on which we make two basic
assumptions: 1) they are fragments of SROIQ, and 2) they use unary encoding
for numbers. This was done for convenience in our proofs, and we conjecture
that our results can be lifted to a more general class of DLs.</p>
      <p>
        We briefly address relevant preliminaries on probabilistic operational
models and their verification. A probability distribution over a set S is a
function μ : S → ([0,1] ∩ ℚ) with Σ_{s∈S} μ(s) = 1; the set of distributions
over S is denoted by Distr(S). Markov decision processes (MDPs) are tuples
M = ⟨Q, Act, P, q_0, Λ, λ⟩ where Q and Act are countable sets of states and
actions, respectively, P : Q × Act ⇀ Distr(Q) is a partial probabilistic
transition function, q_0 ∈ Q is an initial state, and Λ is a set of labels
assigned to states via a labeling function λ : Q → ℘(Λ). For convenience, for
q_1, q_2 ∈ Q and α ∈ Act, we abbreviate P(q_1, α)(q_2) by P(q_1, α, q_2). An
action α ∈ Act is enabled in a state q ∈ Q if P(q, α) is defined. We assume
that M does not have final states, i.e., in each state at least one action is
enabled. A finite path in M is a sequence π = q_0 α_0 q_1 α_1 … α_{k−1} q_k
where p_i = P(q_i, α_i, q_{i+1}) &gt; 0 for all i &lt; k. The probability of π is
Pr(π) = p_0 · p_1 · … · p_{k−1}. Intuitively, the behavior of M in state q is
to select an enabled action α and move to a successor state q′ with probability
P(q, α, q′). Such a selection is done via (randomized) schedulers, i.e.,
functions S that map a finite path to a distribution over actions. For
schedulers, we require that for each finite path π in M with last state q,
P(q, α) is defined for any α ∈ Act with S(π)(α) &gt; 0. Then, a probability
measure Pr^S_M over maximal S-paths, i.e., infinite paths that follow S, is
defined in the standard way (cf. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]).
Probabilistic model checking (PMC, cf. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]) is used in this paper for a
quantitative analysis of MDPs. Usually, one is given a temporal logical property φ
over the set of labels Λ that defines the set of maximal paths in the MDP that
fulfill φ. For instance, linear temporal logic (LTL, [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]) can be used to specify
the set of paths reaching states labeled by an ℓ ∈ Λ, formally φ_1 = ◇ℓ, or where,
whenever such a state is reached, within two time steps a state not labeled by
ℓ is reached again, formally φ_2 = □(ℓ → (X¬ℓ ∨ XX¬ℓ)). For a scheduler S, any
LTL property φ constitutes a measurable set of S-paths, such that we can reason
about the probability Pr^S_M(φ) of M satisfying φ w.r.t. S. By
ranging over all possible schedulers, this enables a best- and worst-case analysis.
A stochastic property is an expression Φ = P_ex(φ) ⋈ θ where φ is an LTL
formula, θ ∈ ℚ a threshold, ⋈ ∈ {&lt;, ≤, ≥, &gt;} a relation, and ex ∈ {min, max}.
Then, M satisfies Φ, denoted M ⊨ Φ, iff ex = min and inf_S Pr^S_M(φ) ⋈ θ, or
ex = max and sup_S Pr^S_M(φ) ⋈ θ. The PMC problem for Φ and M amounts
to deciding whether M ⊨ Φ. For more details on MDPs and PMC, we refer to
standard textbooks such as [
        <xref ref-type="bibr" rid="ref13 ref2 ref7">13,2,7</xref>
        ]. Note that in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] we considered weighted
MDPs for ontology-mediated PMC over expected accumulated costs. For brevity,
we consider only plain MDPs in this paper.
      </p>
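      <p>To make the path-probability definition concrete, the following is a minimal Python sketch (the dictionary encoding and all names are ours, not part of the formalism) of Pr(π) = p_0 · p_1 · … · p_{k−1} over a toy MDP:</p>

```python
from fractions import Fraction

# Toy MDP: P[state][action] is a distribution over successor states.
# The encoding and names are illustrative only.
P = {
    "q0": {"a": {"q0": Fraction(1, 2), "q1": Fraction(1, 2)}},
    "q1": {"b": {"q1": Fraction(1)}},
}

def path_probability(path):
    """Pr(pi) for a finite path pi = q_0 a_0 q_1 ... q_k,
    given as the flat list [q_0, a_0, q_1, a_1, ..., q_k]."""
    prob = Fraction(1)
    for i in range(0, len(path) - 1, 2):
        q, act, q_next = path[i], path[i + 1], path[i + 2]
        p = P[q][act].get(q_next, Fraction(0))
        assert p > 0, "not a path: zero-probability step"
        prob *= p
    return prob

print(path_probability(["q0", "a", "q0", "a", "q1"]))  # 1/4
```

      <p>A scheduler would additionally choose, per finite path, a distribution over the enabled actions; here the action sequence is fixed by the path itself.</p>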
      <p>Arithmetic Constraints and Boolean Expressions are defined over a
countable set V of variables. An evaluation over V is a function η that assigns
to each v ∈ V a value in {α(v), α(v)+1, …, β(v)}, where α, β : V → ℕ define
lower and upper bounds for each variable. The set of evaluations is denoted by
Eval(V). Let z range over ℤ and v range over V. The set of arithmetic
expressions E(V) is defined by the grammar ε ::= z | v | (ε + ε) | (ε · ε).
Evaluations naturally extend to arithmetic expressions, e.g.,
η(ε_1 + ε_2) = η(ε_1) + η(ε_2). C(V) denotes the set of arithmetic constraints
over V, which are terms of the form (ε ⋈ z) with ε ∈ E(V), ⋈ ∈ {&gt;, =, &lt;},
and z ∈ ℤ. For a given evaluation η ∈ Eval(V) and constraint c ∈ C(V), we
write η ⊨ c iff c holds in η. We denote by C(η) the constraints entailed by η,
i.e., C(η) = {c ∈ C(V) | η ⊨ c}. B(V) denotes the set of Boolean expressions
over C(V), i.e., the set of propositional formulas where arithmetic constraints
over V are used as atoms. Entailment of Boolean expressions in B(V) from
evaluations is defined in the usual way.</p>
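      <p>As an illustration, the expression grammar and constraint satisfaction can be sketched in Python as follows (the nested-tuple encoding and the relation names lt, eq, gt are our own choices):</p>

```python
# Sketch of the grammar e ::= z | v | (e + e) | (e . e) and of constraint
# checking; the tuple encoding and names are ours, not the paper's.
def evaluate(expr, eta):
    """Evaluate an arithmetic expression under an evaluation eta."""
    if isinstance(expr, int):      # constant z
        return expr
    if isinstance(expr, str):      # variable v
        return eta[expr]
    op, lhs, rhs = expr            # ('+', e1, e2) or ('*', e1, e2)
    l, r = evaluate(lhs, eta), evaluate(rhs, eta)
    return l + r if op == '+' else l * r

def satisfies(constraint, eta):
    """Check a constraint (e ./ z) with ./ one of 'lt', 'eq', 'gt'."""
    expr, rel, z = constraint
    val = evaluate(expr, eta)
    if rel == 'eq':
        return val == z
    return val > z if rel == 'gt' else z > val

eta = {"x": 2, "y": 3}
print(evaluate(('+', 'x', ('*', 'y', 2)), eta))    # 8
print(satisfies((('+', 'x', 'y'), 'gt', 4), eta))  # True
```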
      <p>
        Stochastic programs provide the de-facto standard in the area of PMC to
concisely describe MDPs. They are essentially a probabilistic variant of Dijkstra's
guarded command language [
        <xref ref-type="bibr" rid="ref5 ref9">5,9</xref>
        ]. We call a function u : V → E(V) an update, and a
distribution δ ∈ Distr(Upd) over a given finite set Upd of updates a stochastic
update. The effect of an update u : V → E(V) on an evaluation η ∈ Eval(V) is
their composition η∘u ∈ Eval(V), i.e., (η∘u)(v) = max{α(v), min{η(u(v)), β(v)}}
for all v ∈ V. This notion naturally extends to stochastic updates δ ∈ Distr(Upd),
where for any η, η′ ∈ Eval(V) we have that δ(η)(η′) is the sum over all δ(u)
for which η∘u = η′. A stochastic guarded command over a finite set of updates
Upd, briefly called a command, is a pair ⟨g, δ⟩ where g ∈ B(V) is a guard and
δ ∈ Distr(Upd) is a stochastic update, which we write in the following form:

(server proc1 = 2 ∧ server proc2 = 2) ↦ 1/2 : server proc1 := 1
                                       + 1/2 : server proc1 := 3    (1)
      </p>
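      <p>The clamped composition η∘u and its lift to stochastic updates can be sketched as follows; the bound maps and the representation of updates as dictionaries of callables are our own assumptions:</p>

```python
# Sketch: effect of (stochastic) updates on an evaluation, clamped to the
# variable bounds alpha (lower) and beta (upper). Names are illustrative.
alpha = {"server_proc1": 0}
beta = {"server_proc1": 3}

def apply_update(eta, update):
    """(eta o u)(v) = max(alpha(v), min(eta(u(v)), beta(v)))."""
    out = dict(eta)
    for v, expr in update.items():
        out[v] = max(alpha[v], min(expr(eta), beta[v]))
    return out

def stochastic_effect(eta, stochastic_update):
    """Distribution over successors: sum delta(u) over all u with eta o u = eta'."""
    result = {}
    for update, p in stochastic_update:
        succ = tuple(sorted(apply_update(eta, update).items()))
        result[succ] = result.get(succ, 0.0) + p
    return result

eta = {"server_proc1": 2}
su = [({"server_proc1": lambda e: 1}, 0.5),   # as in command (1)
      ({"server_proc1": lambda e: 3}, 0.5)]
print(stochastic_effect(eta, su))
```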
      <p>A stochastic program over V is a tuple P = ⟨V, C, η_0⟩ where C is a finite set
of commands and η_0 ∈ Eval(V) is an initial variable evaluation. For brevity, we
write Upd(P) for the set of all updates in C. The MDP induced by P is defined
as M[P] = ⟨S, Act, P, η_0, Λ, λ⟩ where
– S = Eval(V),
– Act = Distr(Upd(P)),
– Λ = C(V),
– λ(η) = C(η) for all η ∈ S, and
– P(η, δ, η′) = δ(η)(η′) for η, η′ ∈ S and ⟨g, δ⟩ ∈ C with λ(η) ⊨ g.</p>
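      <p>A sketch of how the transitions of the induced MDP arise from the commands, with guards as Python predicates and updates as above (clamping omitted for brevity; the encoding is ours):</p>

```python
# Sketch: the outgoing transitions of one state (evaluation) of M[P].
# A command is a pair (guard, stochastic_update); a guard is a predicate
# standing in for a Boolean expression in B(V).
def transitions(eta, commands):
    """Yield one successor distribution per command enabled in eta."""
    for guard, dist in commands:
        if guard(eta):                    # eta |= g
            succ = {}
            for update, p in dist:        # stochastic update
                eta2 = {**eta, **{v: f(eta) for v, f in update.items()}}
                key = tuple(sorted(eta2.items()))
                succ[key] = succ.get(key, 0.0) + p
            yield succ

commands = [
    (lambda e: e["x"] == 0,
     [({"x": lambda e: 1}, 0.5), ({"x": lambda e: 2}, 0.5)]),
]
for dist in transitions({"x": 0}, commands):
    print(dist)
```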
      <p>[Figure 1: An ontologized program state q_1, shown from the command
perspective (a variable evaluation with server proc1 = 2, server proc2 = 2,
server proc3 = 2, and the active hook migrateServer2) and the DL perspective
(assertions such as Server(s1), Server(s2), Server(s3), Process(p1), Process(p2),
Process(p3), runs(s2, p1), runs(s2, p2), runs(s2, p3), together with static
background knowledge defining OverloadedServer in terms of Server and the
number of processes it runs). The Program is a specification of the operational
behavior. The Description Logic Ontology contains static background knowledge
that may influence the behavior of the program. The Interface links program and
ontology by providing mappings between the languages used for the program and
the ontology.]</p>
      <p>Ontologized programs combine the two formalisms guarded command
language and description logics (DLs). To achieve a maximal separation of
concerns, the behaviors and the static knowledge about the context of the
program are specified separately and are only loosely coupled by an interface.
This allows for specifying operational behavior in the usual way and for
extending and reusing existing program specifications, and similarly, for easily
linking established or even standardized ontologies to a program.</p>
      <p>Before we formally define ontologized programs, we explain how global states
of ontologized programs are represented. Both formalisms, guarded command
language and DL, may represent abstracted versions of global states in different
ways. For the guarded command language, a state is described as an evaluation of
variables, while in DL a state is described using a set of axioms. Both formalisms
require different degrees of abstraction and detail, depending on their dedicated
purpose. Consequently, global states of ontologized programs are composed of
two components, which we call the command perspective and the DL perspective
on the global system state (see Figure 1). As can be seen in the figure, there
are some elements in the two perspectives that correspond to each other, while
others have no direct correspondence. The correspondence, depicted by arrows in
the figure, is defined in the interface component of the ontologized program via
a function Dc (DL to command), mapping DL axioms to command expressions.
The part of the DL perspective inside the box is not mapped to anything on the
command side – this is the static part, which contains the background knowledge
about the system that is independent of the current state of the system. In
contrast, the mapped axioms in the ontology perspective can change, which is
why we call them fluents. Note that we also allow the command perspective to
use variables that have no correspondence in the DL perspective.</p>
      <p>
        So far, our description of global states does not depend on any DL
reasoning, which is, however, required by the DL component to provide background
knowledge for the operational component of the program. To invoke a reasoning
task from the command side of ontologized programs, we have introduced the
notion of hooks in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. A hook is a propositional variable in the program
component that is linked by the interface to a Boolean query on the DL component
of the ontologized program state. The result of the query determines the value
of the propositional variable using the function cD (command to DL). In our
example, we would define for each server i ∈ {1, 2, 3} one hook migrateServer_i
that stands for the knowledge that processes should be migrated from Server i:
cD(migrateServer_i) = {OverloadedServer(s_i)}.
In the ontologized state shown in Figure 1, the hook migrateServer2 is active
because the ontology in the DL perspective entails OverloadedServer(s2).
      </p>
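      <p>The hook mechanism can be sketched as follows; the entailment check is a stub standing in for a call to an actual DL reasoner, and all names are ours:</p>

```python
# Sketch: cD maps each hook to a set of DL axioms; a hook is active iff the
# DL perspective entails all of them. The "reasoner" here is a mere
# membership test, standing in for real DL entailment.
cD = {f"migrateServer{i}": {f"OverloadedServer(s{i})"} for i in (1, 2, 3)}

def entails(kb, axiom):
    return axiom in kb   # stub; a real check would invoke a DL reasoner

def active_hooks(kb):
    return {h for h, axioms in cD.items()
            if all(entails(kb, a) for a in axioms)}

kb = {"Server(s2)", "OverloadedServer(s2)"}
print(active_hooks(kb))  # {'migrateServer2'}
```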
      <p>To enable the actual use of hooks in the guarded command language, we have
to slightly extend the definition of commands. Specifically, we define abstract
stochastic programs as stochastic programs where guards are Boolean expressions
over arithmetic constraints and hooks. For example, a command in an abstract
stochastic program looks like the following:

(migrateServer2 ∧ server proc1 = 2) ↦ 1/2 : server proc1 := 1
                                     + 1/2 : server proc1 := 3    (2)

While the intention of this command is similar to that of command (1), this version
omits the specification of the condition under which Server 2 needs to migrate.
Specifying and testing this condition is now delegated to the DL component.</p>
      <p>Now we are ready to formally define ontologized programs. Given an abstract
stochastic program P, we denote by H_P the hooks used in P.</p>
      <p>Definition 1. An ontologized program is a tuple P = ⟨P, K, I⟩ where
– P = ⟨V_P, C_P, H_P, η_{P,0}⟩ is an abstract stochastic program,
– K is a DL ontology called the static DL knowledge, and
– I = ⟨V_I, H_I, F, Dc, cD⟩ is a tuple describing the interface, where V_I is a set
of public variables, H_I is a set of interface hooks, F is a set of DL axioms
called fluents, and Dc : F → B(V_I) and cD : H_I → ℘(A) are two mappings,
and P is compliant with I, i.e., H_P ⊆ H_I and V_I ⊆ V_P. If for a DL L every axiom
in K ∪ F is an L axiom, we call P an L-ontologized program.</p>
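      <p>As a data-structure sketch (not the paper's implementation; all field names are ours), Definition 1 and the compliance condition can be rendered as:</p>

```python
from dataclasses import dataclass

@dataclass
class Interface:
    public_vars: set   # V_I
    hooks: set         # H_I
    fluents: set       # F
    cD: dict           # hook -> set of DL axioms
    Dc: dict           # fluent axiom -> Boolean expression over V_I

@dataclass
class OntologizedProgram:
    program_vars: set    # V_P
    program_hooks: set   # H_P
    ontology: set        # static DL knowledge K
    interface: Interface

    def compliant(self):
        """P is compliant with I: H_P subset of H_I, V_I subset of V_P."""
        return (self.program_hooks.issubset(self.interface.hooks)
                and self.interface.public_vars.issubset(self.program_vars))

iface = Interface({"server_proc1"}, {"migrateServer2"}, set(), {}, {})
p = OntologizedProgram({"server_proc1", "x"}, {"migrateServer2"}, set(), iface)
print(p.compliant())  # True
```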
    </sec>
    <sec id="sec-3">
      <title>Three Semantics of Ontologized Programs</title>
      <p>Stochastic programs are used as concise representations of MDPs that can be
subject to a quantitative analysis. Analogously, ontologized programs are
concise representations of ontologized MDPs, where a quantitative analysis on these
MDPs can further rely on information provided by the ontology. Since there
is an additional logical component in ontologized programs, there are different
meaningful ways in which ontologized MDPs can be defined to represent
the operational behavior of ontologized programs. We present three semantics
for ontologized programs that differ in their treatment of inconsistencies. Such
inconsistencies arise when variable evaluations in the stochastic program have
an inconsistent meaning in the ontology. Choosing an appropriate semantics is
a modeling decision to be made when constructing the ontologized program.
Therefore, we analyze the impact of this choice on the computational
complexity of deciding consistency w.r.t. each of the different semantics. First, we define
formally that ontologized states comprise a command perspective and a DL
perspective (see Figure 1).</p>
      <p>Definition 2. Let P = ⟨P, K, I⟩ be an ontologized program as in Definition 1.
An ontologized state in P is a tuple q = ⟨η_q, K_q⟩, with command perspective η_q
and DL perspective K_q, where
– K ⊆ K_q ⊆ K ∪ F,
– η_q ∈ Eval(V_P), and
– φ ∈ K_q iff η_q ⊨ Dc(φ), for every φ ∈ F.</p>
      <p>We call an ontologized state q inconsistent if K_q is inconsistent.</p>
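      <p>The construction of the DL perspective from the command perspective can be sketched as follows, with the Dc conditions represented as Python predicates (an encoding of our own):</p>

```python
# Sketch of Definition 2: K_q contains the static knowledge K plus exactly
# those fluents phi whose mapped condition Dc(phi) holds in eta_q.
def dl_perspective(eta, K, Dc):
    return set(K) | {phi for phi, cond in Dc.items() if cond(eta)}

K = {"Server(s2)"}
Dc = {"runs(s2,p1)": lambda e: e["server_proc1"] == 2}
print(dl_perspective({"server_proc1": 2}, K, Dc))
```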
      <p>For every evaluation η ∈ Eval(V_P), there is always a unique KB K_η s.t.
⟨η, K_η⟩ is an ontologized state in P. We denote this state by s(P, η). To
define updates on ontologized states, we exploit the fact that 1) updates can only
directly modify the command perspective of ontologized states, and 2) the DL
perspective of ontologized states is fully determined by the command
perspective. Thus, we can easily lift the effect of updates on evaluations to updates
on ontologized states. For an update u ∈ Upd(V_P) and an ontologized state
q, we define u(q) = s(P, u(η_q)). Correspondingly, we extend stochastic updates
δ : Upd(V_P) → [0, 1] to ontologized states q by setting δ(q, u(q)) = δ(u) for all
u ∈ Upd(V_P). To illustrate how commands are executed on ontologized states,
we consider the ontologized state q_1 shown in Figure 1 and the abstract
command in (2). First, we note that the guard of the command is active, since
the hook migrateServer2 is active in q_1. The command can thus be executed
on the state, and it executes each of its updates with a 50% chance. For the
first update, server proc1 := 1, we obtain the ontologized state q_2 in which
server proc1 = 2 is replaced by server proc1 = 1, and runs(s2, p1) is replaced
by runs(s1, p1). Consequently, the hook migrateServer2 becomes inactive in q_2,
since cD(migrateServer2) = {OverloadedServer(s2)} and OverloadedServer(s2)
is not entailed by the DL perspective of q_2.</p>
      <sec id="sec-3-1">
        <title>Consistency-Independent Semantics</title>
        <p>
          We first present the semantics for ontologized programs originally introduced
in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. This semantics does not a priori give inconsistent ontologized states a
special meaning, and is therefore named the consistency-independent semantics. The
definition of the MDP induced by an ontologized program under
consistency-independent semantics closely follows that of MDPs induced by plain
stochastic programs.
Definition 3. Let P = ⟨P, K, I⟩ be an ontologized program as in Definition 1.
The MDP induced by P under consistency-independent semantics is given by
Mci[P] = ⟨Q, Act, P, q_0, Λ, λ⟩, where
– Q = {s(P, η) | η ∈ Eval(V_P)},
– Act = Distr(Upd(P)),
– q_0 = s(P, η_0),
– Λ = H_P ∪ C(V_P),
– λ(q) = C(η_q) ∪ {ℓ ∈ H_P | K_q ⊨ cD(ℓ)} for every q ∈ Q, and
– P(q, δ) = δ(q) for all ⟨g, δ⟩ ∈ C_P with λ(q) ⊨ g.</p>
        <p>P is inconsistent if there exists an inconsistent q ∈ Q reachable from q_0.</p>
        <p>We can test consistency of ontologized programs using a reachability analysis
on the state space. For this, we generate states one after the other, using DL
reasoner calls for each state to determine their active hooks and their
consistency. As each state requires polynomial space, this can be done in polynomial
space with a k-oracle, where k is the complexity of entailment in the DL. A
corresponding lower bound can be obtained by a reduction of the word problem for
polynomially space-bounded Turing machines with a k-oracle. The construction
used here shows a close relationship between Turing machines with a k-oracle and
L-ontologized programs, provided that L is k-hard. This relationship is also used
for later complexity results presented in this paper.</p>
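        <p>An explicit-state sketch of this reachability check (breadth-first search, with stubs for the successor relation and the reasoner oracle; unlike the on-the-fly polynomial-space procedure, this sketch stores all visited states):</p>

```python
from collections import deque

def program_consistent(q0, successors, is_consistent):
    """Explore reachable states; report False on an inconsistent one."""
    seen, frontier = {q0}, deque([q0])
    while frontier:
        q = frontier.popleft()
        if not is_consistent(q):   # stands in for a DL-reasoner oracle call
            return False
        for q2 in successors(q):
            if q2 not in seen:
                seen.add(q2)
                frontier.append(q2)
    return True

# Toy state space: state 3 is the only inconsistent state.
succ = {0: [1, 2], 1: [0], 2: [3], 3: [3]}
print(program_consistent(0, lambda q: succ[q], lambda q: q != 3))  # False
```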
        <p>Theorem 1. Let L be a DL such that deciding entailment in L is k-complete.
Then, deciding consistency of L-ontologized programs is PSpace^k-complete, even
for non-stochastic programs.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Probability-Normalizing Semantics</title>
        <p>If a stochastic program is inconsistent, consistency-independent semantics relies
on the program specification to deal with inconsistent states. On the one hand,
this can offer flexibility. On the other hand, it can be desirable to let the
semantics handle the situation by definition. Since inconsistent states have no
correspondence to states of the modeled system, they could stand for undesired states
that should not occur in the MDP at all. The idea of probability-normalizing
semantics is to remove all states on paths that can lead to an inconsistent state
and to locally normalize probabilistic choices accordingly. Under this semantics, the
ontology serves an additional purpose: not only does it assign meaning to the
hooks, it also specifies which program states are possible, and thus restricts the
transitions from a given state. This allows for a further separation of concerns,
and thus for reusability of the behavioral and the DL component of the program:
we may use the same behavioral component with different ontologies that put
different constraints on the state space. In the running server example, we may
want to analyze a system in which the servers have different software and/or
hardware configurations, such that some processes cannot run on all servers. For
instance, we might use a different ontology that specifies that Process 1 cannot
run on Server 3. This restricts the possible outcomes of applying the command
in (2) to the state in Figure 1 (with the modified ontology). It can now only move
Process 1 to Server 1, so that this migration is performed with 100% probability.</p>
        <p>To achieve this behavior under consistency-independent semantics, we would
have to extend the commands of the program with a case distinction over all
possible successors leading to inconsistent states, which is clearly undesired in
the above example. Instead, we capture the restriction to possible states of the
MDP by defining probability-normalizing semantics for ontologized programs.
Intuitively, under probability-normalizing semantics, the induced MDP can be
obtained from the one under consistency-independent semantics by removing
inconsistent states and then normalizing the resulting probabilities. It is
possible that for some state q and command ⟨g, δ⟩ with λ(q) ⊨ g such a
normalization is not possible, since all successor states that should have a
positive probability lead to an inconsistent state. In this case, the command
cannot be applied in q. If no command can be applied on a state anymore, this
state is removed from the MDP as well.</p>
        <p>Definition 4. Let Mci[P] = ⟨Q, Act, P, q_0, Λ, λ⟩ for an ontologized program P.
An MDP M is probability-normalizing w.r.t. P if q_0 is consistent and M =
⟨Q′, Act, P′, q_0, Λ, λ′⟩, where
1. Q′ ⊆ Q contains only consistent states,
2. λ′ is obtained from λ by restricting its domain to Q′, and
3. for all q ∈ Q′ and δ ∈ Act s.t. P′(q, δ) is defined, we have for all q′ ∈ Q′:
P(q, δ, q′) = P′(q, δ, q′) · Σ_{q̂ ∈ Q′} P(q, δ, q̂).</p>
        <p>P is called probability-normalizable if there exists a probability-normalizing MDP
w.r.t. P. In case Q′ and P′ are subset-maximal, M is then called the MDP
induced by P under probability-normalizing semantics, denoted Mpn[P].
Recall that, according to our definition, MDPs cannot have final states.
Therefore, to obtain the probability-normalizing MDP, it does not suffice
to remove inconsistent states; states that would only lead to inconsistent
states have to be removed as well. As this operation might lead to a removal
of the initial state, not every ontologized program is probability-normalizable.
While for consistency-independent semantics consistency of a program can be
decided by determining reachability of inconsistent states, we have to solve a
complementary task for probability-normalizability. Specifically, we have to
decide whether one can find an unbounded path that never enters an inconsistent
state, for which again polynomial space and an oracle for the reasoner is
sufficient. As a result, deciding probability-normalizability has the same complexity
as deciding consistency under consistency-independent semantics.
Theorem 2. Let L be a DL for which deciding entailment has complexity k.
Then, deciding whether an L-ontologized program is probability-normalizable is
PSpace^k-complete, even for non-stochastic programs.
If we use probability-normalizing semantics, it is possible that the probabilities
specified in the commands change. This is the right choice if the program uses
probabilities to model randomized behavior, as in our running example: the
program migrates a process as soon as this becomes necessary, and it does so by
choosing each possible server with equal probability. If the ontology restricts the
number of possible choices, then consequently the probabilities need to change
as well. However, there are other scenarios where a change of probabilities is not
desired: rather than using stochastic commands to model randomized behavior,
the program component may use stochastic commands to describe probabilistic
outcomes of the simulated system that are based on measured probabilities. For
instance, we might know that at any point in time the server has a probability
of 1% of becoming dysfunctional and needing a restart. This probability should
not be affected by the specifications: there might be a program state in which a
dysfunctional server would necessarily lead to an inconsistency in the future.
This should not mean that the server cannot become dysfunctional, nor that the
current program state should be impossible by definition. To capture such
phenomena in a semantics, we introduce probability-preserving semantics.</p>
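        <p>The local renormalization behind probability-normalizing semantics can be sketched as follows (our own encoding; `None` signals that the command is no longer applicable because every consistent successor has probability zero):</p>

```python
from fractions import Fraction

def normalize(dist, consistent):
    """Restrict a successor distribution to consistent states and rescale."""
    total = sum(p for q, p in dist.items() if consistent(q))
    if total == 0:
        return None   # command not applicable in this state
    return {q: p / total for q, p in dist.items() if consistent(q)}

# Command (2)-style choice: migrate Process 1 to Server 1 or Server 3 with
# probability 1/2 each; suppose running on Server 3 is inconsistent.
dist = {"on_server1": Fraction(1, 2), "on_server3": Fraction(1, 2)}
print(normalize(dist, lambda q: q != "on_server3"))  # {'on_server1': Fraction(1, 1)}
```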
        <p>Definition 5. Let Mci[P] = ⟨Q, Act, P, q_0, Λ, λ⟩ for an ontologized program P.
An MDP M is probability-preserving w.r.t. P if q_0 is consistent and M =
⟨Q′, Act, P′, q_0, Λ, λ′⟩, where
1. Q′ ⊆ Q contains only consistent states,
2. λ′ is obtained from λ by restricting its domain to Q′, and
3. for all P′(q, δ, q′) &gt; 0 we have that P′(q, δ, q′) = P(q, δ, q′).</p>
        <p>P is called probability-preservable if there exists a probability-preserving MDP
w.r.t. P. In case Q′ and P′ are subset-maximal, M is then called the MDP
induced by P under probability-preserving semantics, denoted Mpp[P].
Note that Definition 5 requires M to be an MDP, and hence P′ is a probabilistic
transition function. Thus, Condition 3 implies that any transition that reaches
an inconsistent state with positive probability w.r.t. P has no corresponding
transition in P′.</p>
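        <p>By contrast with normalization, probability-preserving semantics never rescales: a sketch of Condition 3, where a transition survives only if its whole distribution stays within the consistent states (encoding and names are ours):</p>

```python
def preserve(transitions, consistent):
    """Keep (state, action) pairs whose distributions avoid inconsistency."""
    kept = {}
    for (q, alpha), dist in transitions.items():
        if consistent(q) and all(consistent(q2) for q2 in dist):
            kept[(q, alpha)] = dist
    return kept

transitions = {
    ("q0", "a"): {"q1": 0.99, "bad": 0.01},  # may reach an inconsistent state
    ("q0", "b"): {"q1": 1.0},
}
print(preserve(transitions, lambda q: q != "bad"))
```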
        <p>To verify whether a program is probability-preservable, we have to take the
non-deterministic and the probabilistic choices of the program into account
differently. This differs from the other notions of consistency discussed here,
where a single path, traversing non-deterministic and probabilistic choices
indifferently, is sufficient to witness inconsistency or probability-normalizability.
We can characterize probability-preservability as follows: a program is
probability-preservable iff there exists a resolution of the non-deterministic
choices in the program such that for all probabilistic choices an inconsistent
state is never reached. Thus, testing this property requires both existential and
universal explorations of the state space, which is why we can use alternating
Turing machines with polynomial space to solve the corresponding problem.
Theorem 3. Let L be a DL for which deciding entailment has complexity k.
Then, deciding whether an L-ontologized program is probability-preservable is
APSpace^k-complete. For non-stochastic programs, it is PSpace^k-complete.</p>
        <p>
          Recall that APSpace = ExpTime (see for example Corollary 2 to Theorem
16.5 and Corollary 3 to Theorem 2.20 in [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]). However, as an exponential-time Turing
machine may also write exponentially long queries to the oracle, we may not
always have APSpace^k = ExpTime^k.
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>Ontology-mediated PMC</title>
        <p>
          In [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] we presented a technique to apply PMC techniques to ontologized
programs to perform quantitative analysis tasks on knowledge-intensive systems. For
this, we treated inconsistent states as consistent ones, following the
consistency-independent semantics presented in Section 4.1. Inconsistencies in ontologized
programs are not as harmful as in other logic-based formalisms, as stochastic
properties can also be evaluated on MDPs with inconsistent states. In fact, they
can even have a designated meaning, e.g., modeling exceptions the program has
to face. An advantage of ontologized programs is that commands can be used
to detect when a program is in an inconsistent state. To this end, a guard
employing a hook inc that is satisfied in exactly those states that are inconsistent, e.g.,
with cD(inc) = (⊤ ⊑ ⊥), can invoke the actions to be taken when an exception
has occurred. Note that while all inconsistent states agree on the hooks that are
active (it is always the complete set of hooks), the command perspective of states
can still differ. Here, private variables may encode additional information
about the current state that can be used to decide how to react to inconsistencies.
An example for ontology-mediated PMC under consistency-independent semantics
is to decide whether
        </p>
        <p>
          Pmin(□(inc → (X ¬inc ∨ XX ¬inc))) > 95%.
Intuitively, this states that an exception is always successfully handled with
high probability of at least 95%, in the sense that whenever an inconsistent state
is reached, a consistent state is reached again within at most two steps. Such
a stochastic property can be checked using our method in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] and the PMC
tool Prism [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
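<p>The hook mechanism described above can be sketched in a few lines. This is a toy sketch: the entailment test stands in for a DL reasoner, and all names and axiom strings are hypothetical. In an inconsistent state every axiom is entailed, so the hook inc, mapped to ⊤ ⊑ ⊥, is active exactly there.</p>

```python
def active_hooks(ontology_axioms, hooks, entails):
    """A state's active hooks are those whose associated axiom is entailed
    by the state's ontology; in an inconsistent state every axiom is
    entailed, so 'inc' is active in exactly the inconsistent states."""
    return {h for h, axiom in hooks.items() if entails(ontology_axioms, axiom)}
```

<p>A guard such as state = i ∧ inc can then trigger the exception-handling commands in precisely the inconsistent states, while private variables distinguish how to react.</p>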
        <p>Ontology-mediated PMC for probability-normalizing and -preserving
semantics is not as straightforward as under consistency-independent semantics. While
we could conceive a stochastic program that cannot reach inconsistent states
even under consistency-independent semantics, this requires an explicit handling
of inconsistencies and modifications of the program that "provision" the
possible variable evaluations. More specifically, we would then need the possibility to
decide for each update in a command whether it would lead to an inconsistent
state or not. However, while this would work out well in our running example,
we cannot expect such an encoding for every ontologized program. Fortunately,
Definitions 4 and 5 are constructive, i.e., these
consistency-dependent semantics are defined based on the consistency-independent one by
manipulating the state space and probabilistic transition function. Hence, we
can restrict the state space to consistent states and adjust further states and
probabilities on-the-fly during the construction process of the MDP.</p>
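<p>A minimal sketch of such an on-the-fly construction, under the assumption of explicit successor distributions and a stubbed consistency oracle (all names are hypothetical): distributions whose support leaves the consistent fragment are dropped, as under preserving semantics; for normalizing semantics one would instead renormalize over the consistent successors.</p>

```python
from collections import deque

def build_mdp_on_the_fly(initial, enabled_commands, apply_command, consistent):
    """Explore reachable states breadth-first, keeping only consistent
    states; a transition distribution is added only if all of its
    successors are consistent, so the restriction happens during model
    construction rather than afterwards."""
    if not consistent(initial):
        return None
    states, transitions = {initial}, {}
    frontier = deque([initial])
    while frontier:
        q = frontier.popleft()
        for cmd in enabled_commands(q):
            dist = apply_command(q, cmd)  # {successor: probability}
            if all(consistent(s) for s in dist):
                transitions[(q, cmd)] = dist
                for s in dist:
                    if s not in states:
                        states.add(s)
                        frontier.append(s)
    return states, transitions
```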
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Outlook</title>
      <p>
        We are currently implementing a scenario to evaluate our implementation for
ontology-mediated PMC under the different semantics presented in this
paper. Here, we can benefit from the constructive definition of our semantics, which
evaluates inconsistent states beforehand and can hence be directly integrated into
the model-building process, e.g., in the probabilistic model checker Prism [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
There are some theoretical results that are still left open but should be easy
to obtain: the complexity of the actual model-checking tasks, or the size of the
rewritings that our practical method produces. Furthermore, there are some
extensions and modifications worth investigating: our semantics offer two possible
ways of treating probabilities occurring in the program to "restore" consistency.
The choice of which one to select depends on the kind of stochastic phenomena
actually modeled. It is easy to think of scenarios where both kinds of stochastics
occur. This would require a more flexible type of ontologized programs. In such
a program, some distributions may be marked as "fixed", while others could
be subject to normalization. In our framework, modifications of program states
happen on the level of DL axioms and thus affect all models, similarly to [
        <xref ref-type="bibr" rid="ref4 ref8">4,8</xref>
        ].
Another approach is given by DL action-based programs [
        <xref ref-type="bibr" rid="ref1 ref14">1,14</xref>
        ], where modifications of
states happen on the level of interpretations. So, a variant of ontologized programs
where states are associated not with ontologies but with interpretations, similar
to [
        <xref ref-type="bibr" rid="ref1 ref14">1,14</xref>
        ], is certainly an interesting direction for future research.
      </p>
      <sec id="sec-4-1">
        <title>Acknowledgements</title>
        <p>The authors are supported by the DFG through the Collaborative Research
Center TRR 248 (see https://perspicuous-computing.science, project ID
389792660), the Cluster of Excellence EXC 2050/1 (CeTI, project ID 390696704,
as part of Germany's Excellence Strategy), and the Research Training Groups
QuantLA (GRK 1763) and RoSI (GRK 1907).</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Appendix</title>
      <p>We provide here the proofs of our results that were omitted from the main
paper due to space constraints.</p>
      <sec id="sec-5-1">
        <title>Auxiliary Lemmas for the Hardness Results</title>
        <p>We define the size of a concept/axiom/ontology in the usual way as its string
length, that is, the number of symbols needed to write it down, where concept,
role and individual names count as one symbol, as do logical operators, and number
restrictions are encoded using unary encoding.</p>
        <p>The following auxiliary lemma will be helpful for the hardness results, as it
shows how to represent ontologies based on a polynomially bounded enumeration
of DL axioms. We first need a more liberal definition of conservative extensions.
Definition 6. A renaming is a bijective function ρ : NC ∪ NR ∪ NI → NC ∪ NR ∪ NI
s.t. ρ(X) ∈ NC iff X ∈ NC, ρ(X) ∈ NR iff X ∈ NR and ρ(X) ∈ NI iff X ∈ NI.
Given an ontology O and a renaming ρ, we denote by Oρ the result of replacing
every name X in O by ρ(X).</p>
        <p>Let O1 and O2 be two ontologies. Then, O1 is a conservative extension of O2
modulo renaming iff there exists some renaming ρ s.t. for every axiom α using
only names in O2, O1 ⊨ αρ iff O2 ⊨ α.</p>
        <p>Lemma 1. There exists a polynomial function p s.t. for every n ∈ ℕ, there exists
a polynomially computable function γ from indices {0, …, p(n)} to SROIQ
axioms s.t. for every ontology O of size at most n, we can compute in time
polynomial in O a set I ⊆ {0, …, p(n)} s.t. {γ(i) | i ∈ I} is a conservative
extension of O modulo renaming.</p>
        <p>Proof. Let n ∈ ℕ be our bound on ontology sizes. Using standard structural
transformations, we can compute for every SROIQ ontology O in polynomial
time a normalized ontology that is a conservative extension of O and in which
every axiom is of one of the following forms:
1. A1 ⊑ {a1},
2. A1 ⊓ A2 ⊑ A3,
3. A1 ⊑ A2 ⊔ A3,
4. A1 ⊑ ⩾i r1.A2,
5. ⩾i r1.A1 ⊑ A2,
6. A1 ⊑ ∃r1.Self,
7. ∃r1.Self ⊑ A1,
8. A1(a1),
9. r1(a1, a2),
10. r1 ⊑ r2,
11. r1 ∘ r2 ⊑ r3,
12. Disj(r1, r2),
where A1, A2, A3 ∈ NC ∪ {⊤, ⊥}, r1, r2, r3 ∈ NR, i ∈ ℕ and a1, a2 ∈ NI.
Let m be the maximal size of any ontology that is the result of normalizing
an ontology of size n. Note that m is polynomial in n. The number of concept
names, role names and individuals occurring in any normalized ontology O of
size m is bounded by m, and every number occurring in O is bounded by m as
well. This means, we can pick a set</p>
        <p>X = {X1, …, Xm} ⊆ NC ∪ {⊤, ⊥} ∪ NR ∪ NI</p>
        <p>s.t. for every normalized ontology O of size m there exists a renaming ρ s.t.
Oρ uses only names from X. As the resulting ontology is still normalized, every
such ontology uses only axioms from the set</p>
        <p>A = {α | α is normalized and uses only names from X and numbers ≤ m}.
As there are 12 axiom types, and each uses at most 3 names from X and one
number, the number of axioms in A is bounded by p(n) = 12 · |X|³ · m = 12m⁴,
which is polynomial in n because m is polynomial in n.</p>
        <p>The required function γ can then, for example, be defined as follows: for i ∈
{0, …, p(n)}, we define γ(i) = α, where
1. α is of the form (i mod 12) as above,
2. for a ∈ {1, …, 3}, Xj is the a-th name occurring in α, where j = ⌊i / (12 · m^(a−1))⌋
mod m, and
3. if α is of the form (4) or (5), i.e. it contains a number, then this number is
j = ⌊i / (12 · m³)⌋ mod m.</p>
        <p>For every ontology O, we obtain the index set I as required by first
normalizing O, renaming it accordingly, and then assigning the indices. ⊓⊔</p>
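<p>The mixed-radix layout behind the index function γ can be sketched as follows. This is a toy sketch of the arithmetic only, with hypothetical function names: an index i decomposes as i = form + 12·(n1 + m·n2 + m²·n3 + m³·num), matching the three name positions and the optional number of a normalized axiom.</p>

```python
def decode_axiom(i, m):
    """Decode index i into an axiom form (0-11), up to three name
    indices, and a number, mirroring items 1-3 of the definition of γ."""
    form = i % 12
    names = [(i // (12 * m ** (a - 1))) % m for a in (1, 2, 3)]
    number = (i // (12 * m ** 3)) % m
    return form, names, number

def encode_axiom(form, names, number, m):
    """Inverse direction: pack a normalized axiom's components into one index."""
    n1, n2, n3 = names
    return form + 12 * (n1 + m * n2 + m ** 2 * n3 + m ** 3 * number)
```

<p>Since every component is read off with independent floor-divisions and remainders, encoding and decoding are mutually inverse on the relevant ranges.</p>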
        <p>Using an index function γ as in Lemma 1, we can define a special type of
oracle Turing machine called (alternating) Turing machines with DL oracles.
Such a Turing machine uses an oracle tape with tape alphabet {0, 1, b} (b being
the blank symbol), and is specified using an enumeration γ of DL axioms as
above, and some axiom α. Based on the current oracle tape content, we obtain
an index set I that contains the number i iff the oracle tape contains a 1 at
position i. The oracle then answers yes if {γ(i) | i ∈ I} ⊨ α, and no otherwise.
For a given DL L, we may additionally require that the oracle only accepts
inputs for which the corresponding ontology is in L, in which case we call the
Turing machine a Turing machine with L-oracle for γ, α.</p>
        <p>Lemma 2. Let k be a complexity class and L a DL for which deciding entailment
is k-hard. Then, for every Turing machine T with k-oracle and bound n on the
size of the oracle tape, we can construct in time polynomial in n and T a Turing
machine T′ with L-oracle for some axiom enumeration γ and CI α that accepts
the same set of words with at most polynomial overhead.</p>
        <p>Proof. We first argue that it suffices to focus on entailment of axioms of the
form α = A(a), since entailment of CIs can be reduced to it. Specifically, for any
ontology O and CI C ⊑ D, we have O ⊨ C ⊑ D iff O ∪ {C(a), D ⊑ A} ⊨ A(a),
where a and A do not occur in O. Because entailment in L is k-hard, we can
construct for every query to the k-oracle in polynomial time an ontology O that
entails α iff the query should return true. Because the oracle tape is bounded,
we can use Lemma 1 to translate that ontology into a corresponding index set
to serve as input for the L-oracle. T′ thus proceeds as T, but before sending
a query to the oracle, it reduces it to the entailment of the axiom α from an
ontology O, which it then translates to a query to the L-oracle. ⊓⊔</p>
      </sec>
      <sec id="sec-5-2">
        <title>Consistency-Independent Semantics</title>
        <p>Theorem 1. Let L be a DL such that deciding entailment in L is k-complete.
Then, deciding consistency of L-ontologized programs is PSpace^k-complete, even
for non-stochastic programs.</p>
        <p>Proof. Let M[P] be the MDP induced by some ontologized program P as in
Definition 3, where K is expressed in L. For the upper bound, we specify a
non-deterministic PSpace algorithm that, starting from the initial state,
explores all reachable states in M[P] and decides in k whether the current state
is inconsistent. Specifically, for each current variable evaluation in P, we
non-deterministically select a command in CP whose guard is satisfied. Then, we
non-deterministically select an update that has a positive probability in the
distribution of the command and apply this update to the current evaluation. In
each step, we use a k-oracle to determine whether the current state is consistent,
and which hooks are active. As each state can be stored in polynomial space,
the algorithm runs in PSpace^k.</p>
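<p>The upper-bound algorithm can be illustrated deterministically: instead of guessing one path in polynomial space, the sketch below explores all reachable states explicitly (and may therefore use exponential space). It is a toy sketch with hypothetical names; consistent stands in for the k-oracle.</p>

```python
from collections import deque

def is_consistent_program(initial, enabled_commands, updates, consistent):
    """Explore all states reachable via enabled commands and
    positive-probability updates; the program is consistent iff no
    inconsistent state is reachable from the initial evaluation."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        q = frontier.popleft()
        if not consistent(q):
            return False  # witness path to an inconsistent state found
        for cmd in enabled_commands(q):
            for succ, prob in updates(q, cmd).items():
                if prob > 0 and succ not in seen:
                    seen.add(succ)
                    frontier.append(succ)
    return True
```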
        <p>For the lower bound, we provide a polynomial reduction of the acceptance
problem for polynomially space-bounded, deterministic Turing machines T with
k-oracle. Let</p>
        <p>T = ⟨Q, Σ, σ0, Σi, Σo, q0, F, δ, q?, q+, q−⟩
be such a Turing machine, where
– Q are the states,
– Σ = {σ0, …, σm} is the tape alphabet,
– σ0 is the blank symbol,
– Σi is the input alphabet,
– Σo is the oracle tape alphabet,
– q0 is the initial state,
– F ⊆ Q is the set of accepting states,
– δ : (Q \ (F ∪ {q?})) × Σ × Σo → Q × Σ × {−1, 0, +1} × Σo × {−1, 0, +1}
is the transition function (which works on two tapes, the standard tape and
the oracle tape),
– q? ∈ Q \ F is the query state to query the oracle, and
– q+, q− ∈ Q \ F are the query answer states.</p>
        <p>The Turing machine uses two polynomially bounded tapes: one standard tape
and one oracle tape. In every state that is non-final and not the query state,
δ defines the successor state of T. Whenever the current state is q?, the oracle is
invoked, leading to T entering q+ in the next step if the oracle accepts the content
of the oracle tape, and entering q− otherwise. Based on T and the input word
w = w0 … wl, we build an ontologized program PT,w s.t. PT,w is inconsistent iff
T accepts w.</p>
        <p>Since entailment in L is k-hard, we can use Lemma 2 to construct a
polynomially space-bounded Turing machine T′ with L-oracle for γ, A(a) that accepts
w iff so does T. Specifically, we obtain a set Fq of DL axioms, polynomially
bounded in |w|, a mapping γ : {0, …, r(|w|)} → Fq, where r is a polynomial, and</p>
        <p>T′ = ⟨Q′, Σ, σ0, Σi, {0, 1}, q0, F′, δ′, q?, q+, q−⟩,
where Q′ = {q0, …, qn}, and T′ accepts the same language as T, is still
polynomially space-bounded, but uses a different oracle. If T′ is in state q?, and the
oracle tape contains the word σ0 … σr(|w|), the Turing machine moves into the
state q+ iff {γ(i) | 0 ≤ i ≤ r(|w|), σi = 1} ⊨ A(a), and otherwise moves to q−.
Based on this Turing machine, we construct the ontologized program PT,w.</p>
        <p>Let p be a polynomial s.t. T′ uses at most p(|w|) tape cells on input w. The
program uses the following variables:
– state ∈ {0, …, n} stores the state of the Turing machine,
– for 0 ≤ i ≤ p(|w|), dtape_i ∈ {0, …, m} stores the letter on the default tape at
position i,
– for 0 ≤ i ≤ r(|w|), otape_i ∈ {0, 1} stores the letter on the oracle tape at
position i,
– dpos ∈ {0, …, p(|w|)} stores the current position on the default tape,
– opos ∈ {0, …, r(|w|)} stores the current position on the oracle tape.
We use a single hook H = {oracle_yes} standing for the positive oracle answer.
For every</p>
        <p>⟨⟨q_i1, σ_i2, σ_i3⟩, ⟨q_i4, σ_i5, Dir_d, σ_i6, Dir_o⟩⟩ ∈ δ′,
every i ∈ {0, …, p(|w|)} and every j ∈ {0, …, r(|w|)}, we use the following
non-stochastic command:
(state = i1 ∧ dpos = i ∧ dtape_i = i2 ∧ opos = j ∧ otape_j = i3)
↦ (state := i4; dtape_i := i5; dpos := dpos + Dir_d; otape_j := i6; opos := opos + Dir_o)   (3)
To handle the behavior of the oracle, we further add the following commands,
where q? = q_i1, q+ = q_i2 and q− = q_i3:
state = i1 ∧ oracle_yes ↦ state := i2   (4)
state = i1 ∧ ¬oracle_yes ↦ state := i3   (5)</p>
        <p>The initial state of the program is the evaluation state = 0, dtape_i =
wi for i ∈ {0, …, l}, dtape_i = 0 for i ∈ {l + 1, …, p(|w|)}, otape_i = 0 for
i ∈ {0, …, r(|w|)}, dpos = opos = 0. This completes the definition of the
program component P in PT,w.</p>
        <p>As DL axioms, we set the static DL knowledge K = ∅ and the fluents
F = Fq ∪ {⊤ ⊑ ⊥}. It remains to specify the interface functions Dc and cD. For
Dc, we set for every i ∈ {0, …, r(|w|)},</p>
        <p>Dc(γ(i)) = (otape_i = 1)</p>
        <p>to translate the oracle calls, and</p>
        <p>Dc(⊤ ⊑ ⊥) = ⋁_{qi ∈ F} (state = i)</p>
        <p>to encode that the ontologized states that correspond to accepting states in the
Turing machine are inconsistent. Finally, we specify cD by setting
cD(oracle_yes) = {A(a)}.
It is standard to verify that the Turing machine T′ accepts w iff the ontologized
program PT,w is inconsistent, i.e., can reach an inconsistent state. Since T
accepts w iff so does T′, this completes the reduction. ⊓⊔</p>
      </sec>
      <sec id="sec-5-3">
        <title>Probability-Normalizing Semantics</title>
        <p>Theorem 2. Let L be a DL for which deciding entailment has complexity k.
Then, deciding whether an L-ontologized program is probability-normalizable is
PSpace^k-complete, even for non-stochastic programs.</p>
        <p>Proof. For the upper bound, note that the only way for P to be inconsistent
under probability-normalizing semantics is if all probabilistic choices of some
reachable state lead to inconsistent states in M[P]. We can thus use a non-deterministic
PSpace^k algorithm similar to the one described in the proof for Theorem 1 to determine
whether a program is probability-normalizable: instead of testing for
inconsistency of the current state, we here test for consistency. Note that it is sufficient
to find a single execution path that never reaches an inconsistent state for an
ontologized program to be probability-normalizable. This execution path might be
infinite; however, because the state space is bounded, any infinite execution path
would need to visit a state twice. We use a counter to put an exponential bound
on the number of states to be visited, and store a non-deterministically selected state for
later comparison. If this state repeats, we have found an unbounded execution path
that never visits an inconsistent state, and accept. If we reach the bound, such
a loop was not found and we reject.</p>
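<p>Deterministically, the loop search above amounts to looking for a "lasso" that stays within consistent states: since the state space is finite, an infinite consistent execution exists iff some consistent state reachable from the initial state can reach itself again. A toy sketch with hypothetical names (successors enumerates all non-deterministic and probabilistic successors):</p>

```python
def has_consistent_lasso(initial, successors, consistent):
    """True iff there is an execution from `initial` that visits only
    consistent states forever, i.e. a consistent state on a cycle is
    reachable through consistent states."""
    def reach(src):
        # states reachable from src in one or more steps via consistent states
        seen, stack = set(), [src]
        while stack:
            q = stack.pop()
            for s in successors(q):
                if consistent(s) and s not in seen:
                    seen.add(s)
                    stack.append(s)
        return seen
    if not consistent(initial):
        return False
    reachable = reach(initial) | {initial}
    return any(q in reach(q) for q in reachable)
```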
        <p>For the lower bound, we note that the reduction used in the proof for
Theorem 1 can also be used for probability-normalizability. In that reduction, we
reduce the word problem of deterministic polynomially space-bounded Turing
machines with k-oracle to consistency of ontologized programs under
consistency-independent semantics. Inspection of the construction reveals that the
constructed program PT,w is not only non-stochastic, but even deterministic, by
which we mean that in every ontologized state, there is at most one command
that can be executed. Specifically, there is exactly one command (3) in the proof
of Theorem 1 for each assignment to the variables state, dpos, opos, dtape_i
and otape_j, which means that in each ontologized state, at most one of
these commands is executable. Moreover, for the assignment state = i1, where
q_i1 = q? denotes the oracle query state, there is no such command, as there
is no transition in the Turing machine for those states, and exactly one of the
commands (4) and (5) is executable if state = i1, unless the state is
inconsistent. We obtain that, if some path in M[PT,w] leads from the initial state to
an inconsistent state, then all paths lead to an inconsistent state, in which case
the program is not probability-normalizable. Consequently, PT,w is
probability-normalizable iff PT,w is consistent, which is the case iff T does not accept w. Since
non-acceptance of words in polynomially space-bounded Turing machines with
k-oracle is also PSpace^k-complete, we obtain that probability-normalizability is
PSpace^k-hard. ⊓⊔</p>
        <p>A.4 Probability-Preserving Semantics</p>
        <p>Theorem 3. Let L be a DL for which deciding entailment has complexity k.
Then, deciding whether an L-ontologized program is probability-preservable is
APSpace^k-complete. For non-stochastic programs, it is PSpace^k-complete.</p>
        <p>
Proof. Both bounds are via reductions from and to alternating Turing machines.
An alternating Turing machine can switch between ∃-non-deterministic and
∀-non-deterministic transitions. Acceptance is then defined inductively according
to the choices involved: a configuration is accepting if 1) it is in an accepting
state, 2) it is ∃-non-deterministic and there exists a transition to a configuration
that is accepting, or 3) it is ∀-non-deterministic and all transitions lead
to accepting configurations.</p>
        <p>For the upper bounds, we use a modification of the algorithm used in the
proof for Theorem 1 that runs in alternating polynomial space and decides
probability-preservability of ontologized programs. Let P be an ontologized
program as in Definition 1, and Mci[P] = ⟨Q, Act, P, ι, Π, λ, wgt⟩ the MDP induced
by P under consistency-independent semantics. The algorithm guesses the states
q ∈ Q to be included in the probability-preserving MDP one after the other,
starting from ι, where we use alternation to switch between the non-deterministic
and the probabilistic choices of the program. Specifically, we have to guess a set
of states Q′ s.t.: 1) no state in Q′ is inconsistent, 2) for every state in Q′ there is always
some guarded command that can be executed on the current state, and 3) all
successor states into which this command would bring us with a positive
probability are included in Q′. Our algorithm thus proceeds as follows based on the
currently guessed state. If the current state is inconsistent, we reject. Otherwise,
we non-deterministically select a guarded command ⟨g, u⟩ ∈ CP s.t. q ⊨ g
(∃-non-determinism) and continue on all states q′ ∈ Q reached by this command
with positive probability, which are then also guessed to be in Q′
(∀-non-determinism). Each state takes polynomial space, and we use a
k-oracle to determine which hooks are active and whether the state is consistent.
Since this procedure can be implemented by an alternating Turing machine with
k-oracle that uses polynomial space, probability-preservability can be decided
in APSpace^k. If the program is non-stochastic, the ∀-non-determinism is not
needed, and the algorithm runs in PSpace^k.</p>
        <p>For the lower bound, we adapt the reduction used in the proof for Theorem 1.
However, this time, we reduce the word acceptance problem for polynomially
space-bounded alternating Turing machines with k-oracle. Specifically, let</p>
        <p>T = ⟨Q, Σ, σ0, Σi, Σo, q0, F, δ, q?, q+, q−, g⟩
be such a Turing machine, where Q = {q0, …, qn}, Σ = {σ0, …, σn} is the
standard tape alphabet, σ0 ∈ Σi is the blank symbol, Σi is the input
alphabet, Σo is the oracle alphabet, q0 is the initial state, F ⊆ Q∀ ∪ Q∃ is the
set of accepting states,
δ : (Q \ (F ∪ {q?})) × Σ × Σo → ℘(Q × Σ × {−1, +1} × Σo × {−1, +1})
is the transition function, the states q?, q+, q− ∈ Q manage the querying of the
oracle, and g : Q → {∀, ∃, accept, reject} partitions the states Q into four
categories: universally and existentially quantified states, and accepting and rejecting
states, respectively. The definition of acceptance is standard; specifically, the
oracle works as in the proof for Theorem 1, and a configuration with state q is
accepting 1) if g(q) = accept, 2) if g(q) = ∃ and one of the successor configurations
is accepting, or 3) if g(q) = ∀ and all successor configurations are accepting.</p>
        <p>Again, we use Lemma 2 to construct, based on T and the input word w, a
polynomially bounded set Fq of DL axioms, a mapping γ : {0, …, r(|w|)} → Fq,
where r is a polynomial, and an alternating Turing machine</p>
        <p>T′ = ⟨Q′, Σ, σ0, Σi, {0, 1, b}, q0, F′, δ′, q?, q+, q−, g′⟩
with an L-oracle for γ, A(a), that is polynomially space-bounded and accepts
w iff so does T. If T′ is in state q?, and the oracle tape contains the word
σ0 … σr(|w|), the Turing machine moves into the state q+ iff
{γ(i) | 0 ≤ i ≤ r(|w|), σi = 1} ⊨ A(a),
and otherwise moves to q−. Based on this Turing machine, we construct the
ontologized program PT,w in the same way as in the proof for Theorem 1, but
with a different set CP of commands.</p>
        <p>We first model the ∀-transitions. For every
t = ⟨q_i1, σ_i2, σ_i3⟩ ∈ (Q \ (F ∪ {q?, q+, q−})) × Σ × Σo,
where g′(q_i1) = ∀, every ⟨q_i4, σ_i5, Dir_d, σ_i6, Dir_o⟩ ∈ δ′(t), every
i ∈ {0, …, p(|w|)} and every j ∈ {0, …, r(|w|)}, we use the command
(state = i1 ∧ dpos = i ∧ dtape_i = i2 ∧ opos = j ∧ otape_j = i3)
↦ (state := i4; dtape_i := i5; dpos := dpos + Dir_d; otape_j := i6; opos := opos + Dir_o).</p>
        <p>Note that for the program to be not probability-preservable, every command
that we can execute in an ontologized state must lead to an inconsistency, which is
why we model ∀-transitions in this way. For the ∃-transitions, we make use of
the stochastic component. Let
t1 = ⟨q_i1, σ_i2, σ_i3⟩ ∈ (Q \ (F ∪ {q?, q+, q−})) × Σ × Σo,
where g′(q_i1) = ∃. For every i ∈ {0, …, p(|w|)} and j ∈ {0, …, r(|w|)}, we add
the stochastic command
(state = i1 ∧ dpos = i ∧ dtape_i = i2 ∧ opos = j ∧ otape_j = i3) ↦ u,
where u is the stochastic update such that for every
⟨q_i4, σ_i5, Dir_d, σ_i6, Dir_o⟩ ∈ δ′(t1) we have
u(state := i4; dtape_i := i5; dpos := dpos + Dir_d; otape_j := i6; opos := opos + Dir_o) = 1/|δ′(t1)|.
The corresponding transitions are excluded from the probability-preserving MDP
if there is a positive probability of getting into an inconsistent state. This is why
we model ∃-transitions in this way.</p>
        <p>The commands that handle the use of the oracle are as in the proof
for Theorem 1. It is standard to show that the resulting program is
probability-preservable iff w is accepted by the Turing machine, so that we obtain that
probability-preservability is APSpace^k-hard.</p>
        <p>Now assume that for no state q ∈ Q, g′(q) = ∃. T then corresponds to
a coNPSpace^k-machine, and the ontologized program PT,w is non-stochastic.
As a consequence, deciding whether non-stochastic programs are
probability-preservable is coNPSpace^k = PSpace^k-hard. ⊓⊔</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Baader</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zarrieß</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Verification of Golog programs over description logic actions</article-title>
          .
          <source>In: Proceedings of FroCos 2013. LNCS</source>
          , vol.
          <volume>8152</volume>
          , pp.
          <volume>181</volume>
          –
          <fpage>196</fpage>
          . Springer (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baier</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Katoen</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          :
          <article-title>Principles of Model Checking</article-title>
          . MIT Press (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Baier</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dubslaff</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , Kluppelholz, S.:
          <article-title>Trade-off analysis meets probabilistic model checking</article-title>
          .
          <source>In: Proc. of the 23rd Conference on Computer Science Logic and the 29th Symposium on Logic In Computer Science (CSL-LICS)</source>
          . pp.
          <volume>1</volume>
          :
          <issue>1</issue>
          –1:
          <fpage>10</fpage>
          .
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Giacomo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lenzerini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosati</surname>
          </string-name>
          , R.:
          <article-title>Actions and programs over description logic knowledge bases: A functional approach</article-title>
          . In: Knowing,
          <string-name>
            <surname>Reasoning</surname>
          </string-name>
          , and Acting: Essays in Honour of H. J.
          <string-name>
            <surname>Levesque</surname>
          </string-name>
          . College
          <string-name>
            <surname>Publications</surname>
          </string-name>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Dijkstra</surname>
            ,
            <given-names>E.W.:</given-names>
          </string-name>
          <article-title>A Discipline of Programming</article-title>
          . Prentice-Hall (
          <year>1976</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Dubslaff</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koopmann</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turhan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Ontology-mediated probabilistic model checking</article-title>
          . In: Ahrendt,
          <string-name>
            <given-names>W.</given-names>
            ,
            <surname>Tarifa</surname>
          </string-name>
          , S.L.T. (eds.)
          <source>Integrated Formal Methods - 15th International Conference, IFM 2019. Lecture Notes in Computer Science</source>
          , vol.
          <volume>11918</volume>
          , pp.
          <volume>194</volume>
          –
          <fpage>211</fpage>
          . Springer (
          <year>2019</year>
          ). https://doi.org/10.1007/978-3-030-34968-4_11
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Forejt</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kwiatkowska</surname>
            ,
            <given-names>M.Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norman</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parker</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Automated verification techniques for probabilistic systems</article-title>
          .
          <source>In: Proc. of the School on Formal Methods for the Design of Computer</source>
          ,
          <source>Communication and Software Systems, Formal Methods for Eternal Networked Software Systems (SFM'11)</source>
          . LNCS, vol.
          <volume>6659</volume>
          , pp.
          <volume>53</volume>
          –
          <fpage>113</fpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Hariri</surname>
            ,
            <given-names>B.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montali</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Giacomo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Masellis</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Felli</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Description logic knowledge and action bases</article-title>
          .
          <source>J. Artif. Intell. Res</source>
          .
          <volume>46</volume>
          ,
          <fpage>651</fpage>
          –
          <lpage>686</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Jifeng</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seidel</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McIver</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Probabilistic models for the guarded command language</article-title>
          .
          <source>Science of Computer Programming</source>
          <volume>28</volume>
          (
          <issue>2</issue>
          ),
          <fpage>171</fpage>
          –
          <lpage>192</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Kwiatkowska</surname>
            ,
            <given-names>M.Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norman</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parker</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>PRISM 4.0: Verification of probabilistic real-time systems</article-title>
          .
          <source>In: Proc. of the 23rd Intern. Conf. on Computer Aided Verification (CAV'11)</source>
          . LNCS, vol.
          <volume>6806</volume>
          , pp.
          <fpage>585</fpage>
          –
          <lpage>591</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Papadimitriou</surname>
            ,
            <given-names>C.H.</given-names>
          </string-name>
          :
          <article-title>Computational complexity</article-title>
          . Academic Internet Publ. (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Pnueli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The temporal logic of programs</article-title>
          .
          <source>In: 18th Annual Symposium on Foundations of Computer Science</source>
          , Providence, Rhode Island, USA, 31 October - 1 November
          <year>1977</year>
          . pp.
          <fpage>46</fpage>
          –
          <lpage>57</lpage>
          (
          <year>1977</year>
          ). https://doi.org/10.1109/SFCS.1977.32
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Puterman</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Markov Decision Processes: Discrete Stochastic Dynamic Programming</article-title>
          . John Wiley &amp; Sons, Inc., New York, NY (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Zarrieß</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Claßen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Verification of knowledge-based programs over description logic actions</article-title>
          .
          <source>In: IJCAI</source>
          . pp.
          <fpage>3278</fpage>
          –
          <lpage>3284</lpage>
          . AAAI Press (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>