<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Give Inconsistency a Chance: Semantics for Ontology-Mediated Verification</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Clemens</forename><surname>Dubslaff</surname></persName>
							<email>clemens.dubslaff@tu-dresden.de</email>
							<affiliation key="aff0">
								<orgName type="institution">Technische Universität Dresden</orgName>
								<address>
									<settlement>Dresden</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Patrick</forename><surname>Koopmann</surname></persName>
							<email>patrick.koopmann@tu-dresden.de</email>
							<affiliation key="aff0">
								<orgName type="institution">Technische Universität Dresden</orgName>
								<address>
									<settlement>Dresden</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Anni-Yasmin</forename><surname>Turhan</surname></persName>
							<email>anni-yasmin.turhan@tu-dresden.de</email>
							<affiliation key="aff0">
								<orgName type="institution">Technische Universität Dresden</orgName>
								<address>
									<settlement>Dresden</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Give Inconsistency a Chance: Semantics for Ontology-Mediated Verification</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">96F2E866A74D837DD924D3677F37950E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:45+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Description Logics</term>
					<term>Inconsistency</term>
					<term>Probabilistic Model Checking</term>
					<term>Ontology-Mediated Verification</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Recently, we have introduced ontologized programs as a formalism to link description logic ontologies with programs specified in the guarded command language, the de-facto standard input formalism for probabilistic model checking tools such as Prism, to allow for an ontology-mediated verification of stochastic systems. Central to our approach is a complete separation of the two formalisms involved: guarded command language to describe the dynamics of the stochastic system and description logics are used to model background knowledge about the system in an ontology. In ontologized programs, these two languages are loosely coupled by an interface that mediates between these two worlds. Under the original semantics defined for ontologized programs, a program may enter a state that is inconsistent with the ontology, which limits the capabilities of the description logic component. We approach this issue in different ways by introducing consistency notions, and discuss two alternative semantics for ontologized programs. Furthermore, we present complexity results for checking whether a program is consistent under the different semantics.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Probabilistic model checking (PMC, see, e.g., <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b6">7]</ref> for surveys) is an automated technique for the quantitative analysis of dynamic systems. PMC has been successfully applied in many areas, e.g., to ensure that the system meets quality requirements such as low error probabilities or an energy consumption within a given bound. The de-facto standard specification language for dynamic systems in PMC is given by stochastic programs, a probabilistic variant of Dijkstra's guarded command language <ref type="bibr" target="#b4">[5,</ref><ref type="bibr">9]</ref> used within many PMC tools such as Prism <ref type="bibr" target="#b9">[10]</ref>. Usually, the behavior described by a stochastic program is part of a bigger system, or might even be used within a collection of systems that have an impact on the operational behavior as well. There are different ways to model this using stochastic programs. First, one could integrate additional knowledge about the surrounding system directly into the stochastic program. While this approach provides accurate PMC results, it has the disadvantage that the guarded command language is not well-suited for describing complex static knowledge. Alternatively, one can use non-determinism to over-approximate the possible behaviors of the surrounding system. This approach has the disadvantage that a best- and worst-case PMC analysis might lead to many possible results that do not allow for a meaningful interpretation. As a third option, one can use ontology-mediated PMC, an approach to integrate knowledge described by a description logic (DL) ontology into the PMC process, which we recently proposed in <ref type="bibr" target="#b5">[6]</ref>. 
At the core of this approach are ontologized (stochastic) programs, which can be subjected to PMC.</p><p>A central idea of ontologized programs is the complete separation of concerns and formalisms: the operational behavior is described using a program component expressed by guarded commands, while static knowledge about the system is described in an ontology using DL formalisms. An interface mediates between these two worlds by providing mappings from statements in the guarded command language to DL and vice versa. The loose coupling achieved in this way allows both components to be reused and replaced easily, so that the same behavior can be analyzed with different ontologies (to describe different system configurations), and the same ontology can be used with different programs (e.g., to describe different strategies of behavior to be analyzed within the same system). This approach distinguishes our work from other approaches that use DLs for the verification of dynamic systems, in which DL constructs are used directly within the program <ref type="bibr">[1,</ref><ref type="bibr">4,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b13">14]</ref>. To be able to analyze ontologized programs practically, we developed a two-step procedure in <ref type="bibr" target="#b5">[6]</ref>. First, the ontologized program and the property to be analyzed are rewritten into a plain stochastic program and a modified property. Then, these are evaluated using the PMC tool Prism. This way, essentially all model-checking tasks supported by Prism can be performed on ontologized programs, ranging from the verification of standard properties (like the probability or the expected time for reaching a failure state) to more advanced quantitative analyses (cf. <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b2">3]</ref>). 
Our approach was implemented and evaluated in <ref type="bibr" target="#b5">[6]</ref>.</p><p>Under the original semantics of ontologized programs in <ref type="bibr" target="#b5">[6]</ref>, program components are allowed to enter states that are inconsistent with the ontology. This leaves the task of resolving the inconsistency to the program component by implementing a sort of exception handling. However, inconsistent states may also stand for situations that can never arise in practice. In this case, a semantics that avoids inconsistent states when possible is favorable, leading to a consistency notion for ontologized programs. In this paper, we answer the following open questions: 1) how can inconsistent states be handled in a meaningful way, and 2) what does it mean for an ontologized program to be inconsistent? We approach these questions by presenting three possible semantics for ontologized programs: consistency-independent, probability-normalizing, and probability-preserving semantics. Each semantics specifies the behavior of the program in a different way, and additionally comes with its own notion of consistency. For each of these notions, we give tight complexity bounds for deciding consistency of programs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Preliminaries</head><p>We assume that the reader is familiar with the basics of description logics. The results in this paper apply to different DLs, on which we make some basic assumptions: 1) they are fragments of SROIQ, and 2) they use unary encoding for numbers. This was done for convenience in our proofs, and we conjecture that our results can be lifted to a more general class of DLs.</p><p>We briefly address relevant preliminaries on probabilistic operational models and their verification. A probability distribution over S is a function µ : S → [0, 1] ∩ ℚ with ∑_{s∈S} µ(s) = 1; the set of distributions over S is denoted by Distr(S).</p><p>Markov decision processes (MDPs) are tuples M = ⟨Q, Act, P, q_0, Λ, λ⟩ where Q and Act are countable sets of states and actions, respectively, P : Q × Act ⇀ Distr(Q) is a partial probabilistic transition function, q_0 ∈ Q an initial state, and Λ is a set of labels assigned to states via a labeling function λ : Q → ℘(Λ). For convenience, for q_1, q_2 ∈ Q and α ∈ Act, we abbreviate P(q_1, α)(q_2) by P(q_1, α, q_2). An action α ∈ Act is enabled in a state q ∈ Q if P(q, α) is defined. We assume that M does not have final states, i.e., in each state at least one action is enabled. A finite path in M is a sequence π = q_0 α_0 q_1 α_1 … α_{k−1} q_k where p_i = P(q_i, α_i, q_{i+1}) &gt; 0 for all i &lt; k. The probability of π is Pr(π) = p_0 · p_1 · … · p_{k−1}. Intuitively, the behavior of M in state q is to select an enabled action α and move to a successor state q′ with probability P(q, α, q′). Such a selection is done via (randomized) schedulers, i.e., functions S that map finite paths to distributions over actions. For schedulers, we require that for each finite path π in M with last state q, P(q, α) is defined for any α ∈ Act with S(π)(α) &gt; 0. 
Then, a probability measure Pr^S_M over maximal S-paths, i.e., infinite paths that follow S, is defined in the standard way (cf. <ref type="bibr" target="#b12">[13]</ref>).</p><p>Probabilistic model checking (PMC, cf. <ref type="bibr" target="#b1">[2]</ref>) is used in this paper for a quantitative analysis of MDPs. Usually, one is given a temporal logic property φ over the set of labels Λ that defines the set of maximal paths in the MDP that fulfill φ. For instance, linear temporal logic (LTL, <ref type="bibr" target="#b11">[12]</ref>) can be used to specify the set of paths reaching states labeled by an ℓ ∈ Λ, formally φ_1 = ♦ℓ, or where always when such a state is reached, within two time steps a state not labeled by ℓ is reached again, formally φ_2 = □(ℓ → (X¬ℓ ∨ XX¬ℓ)). For a scheduler S, any LTL property constitutes a measurable set of S-paths such that we can reason about the probability Pr^S_M(φ) of M satisfying an LTL property φ w.r.t. S. By ranging over all possible schedulers, this enables a best- and worst-case analysis. A stochastic property is an expression Φ = P_ex(φ) ⊲⊳ τ where φ is an LTL formula, τ ∈ ℚ a threshold, ⊲⊳ ∈ {&lt;, ≤, ≥, &gt;} a relation, and ex ∈ {min, max}. Then, M satisfies Φ, denoted M |= Φ, iff ex = min and inf_S Pr^S_M(φ) ⊲⊳ τ, or ex = max and sup_S Pr^S_M(φ) ⊲⊳ τ. The PMC problem for Φ and M amounts to deciding whether M |= Φ. For more details on MDPs and PMC, we refer to standard textbooks such as <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b6">7]</ref>. Note that in <ref type="bibr" target="#b5">[6]</ref> we considered weighted MDPs for ontology-mediated PMC over expected accumulated costs. For brevity, we consider only plain MDPs in this paper.</p><p>Arithmetic Constraints and Boolean Expressions are defined over a countable set V of variables. An evaluation over V is a function η that assigns to each v ∈ V a value in {−δ(v), −δ(v)+1, …, δ(v)}, where δ : V → ℕ defines the bound of each variable. The set of evaluations is denoted by Eval(V). Let z range over ℤ and v range over V. The set of arithmetic expressions E(V) is defined by the grammar α ::</p><formula xml:id="formula_0">= z | v | (α + α) | (α • α) .</formula><p>Evaluations naturally extend to arithmetic expressions, e.g., η(α_1 + α_2) = η(α_1) + η(α_2). C(V) denotes the set of arithmetic constraints over V, which are terms of the form (α ⊲⊳ z) with α ∈ E(V), ⊲⊳ ∈ {&gt;, =, &lt;}, and z ∈ ℤ. For a given evaluation η ∈ Eval(V) and constraint γ ∈ C(V), we write η |= γ iff γ holds in η. We denote by C(η) the constraints entailed by η, i.e., C(η) = {c ∈ C(V) | η |= c}. B(V) denotes the set of Boolean expressions over C(V), i.e., the set of propositional formulas where arithmetic constraints over V are used as atoms. Entailment of Boolean expressions in B(V) from evaluations is defined in the usual way.</p><p>Stochastic programs are, in the area of PMC, the de-facto standard for concisely describing MDPs. They are essentially a probabilistic variant of Dijkstra's guarded command language <ref type="bibr" target="#b4">[5,</ref><ref type="bibr">9]</ref>. We call a function u : V → E(V) an update, and a distribution σ ∈ Distr(Upd) over a given finite set Upd of updates a stochastic update. The effect of an update u : V → E(V) on an evaluation η ∈ Eval(V) is their composition η • u ∈ Eval(V), i.e., (η • u)(v) = max{−δ(v), min{η(u(v)), δ(v)}} for all v ∈ V. This notion naturally extends to stochastic updates σ ∈ Distr(Upd): for any η, η′ ∈ Eval(V), (η • σ)(η′) is the sum over all σ(u) for which η • u = η′. 
A stochastic guarded command over a finite set of updates Upd, briefly called command, is a pair ⟨g, σ⟩ where g ∈ B(V) is a guard and σ ∈ Distr(Upd) is a stochastic update, which we write in the following form:</p><p>(server_proc1 = 2 ∧ server_proc2 = 2) → 1/2 : server_proc1 := 1 + 1/2 : server_proc1 := 3 (1)</p><p>A stochastic program over V is a tuple P = ⟨V, C, η_0⟩ where C is a finite set of commands and η_0 ∈ Eval(V) is an initial variable evaluation. For brevity, we write Upd(P) for the set of all updates in C. The MDP induced by P is defined as M[P] = ⟨S, Act, P, η_0, Λ, λ⟩ where</p><formula xml:id="formula_2">-S = Eval(V), -Act = Distr(Upd(P)), -Λ = C(V),</formula><p>λ(η) = C(η) for all η ∈ S, and -P(η, σ, η′) = (η • σ)(η′) for η, η′ ∈ S and ⟨g, σ⟩ ∈ C with λ(η) |= g. </p></div>
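To make the MDP notions above concrete, the following snippet sketches a toy MDP as nested dictionaries and computes the probability Pr(π) of a finite path as the product of its transition probabilities. All names here (`P_mdp`, `path_probability`, the toy states) are our illustration and do not appear in the paper.

```python
from math import prod

# A toy MDP as nested dicts: P_mdp[state][action] is a distribution over
# successor states, mirroring the partial function P : Q x Act -> Distr(Q).
P_mdp = {
    "q0": {"a": {"q0": 0.5, "q1": 0.5}},
    "q1": {"a": {"q1": 1.0}, "b": {"q0": 1.0}},
}

def enabled(q):
    """Actions alpha for which P(q, alpha) is defined."""
    return set(P_mdp[q])

def path_probability(path):
    """Pr(pi) = p_0 * p_1 * ... * p_{k-1} for a finite path given as an
    alternating list [q_0, a_0, q_1, a_1, ..., q_k]."""
    probs = [P_mdp[path[i]][path[i + 1]].get(path[i + 2], 0.0)
             for i in range(0, len(path) - 1, 2)]
    return prod(probs)

print(path_probability(["q0", "a", "q1", "b", "q0"]))  # 0.5 * 1.0 = 0.5
```

Note that `enabled` reflects the no-final-state assumption: every state of the toy MDP has at least one enabled action.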
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Ontologized Stochastic Programs</head><p>In general, an ontologized program comprises the following three components <ref type="bibr" target="#b5">[6]</ref>:</p><p>The Program is a specification of the operational behavior.</p><p>The Description Logic Ontology contains static background knowledge that may influence the behavior of the program. The Interface links program and ontology by providing mappings between the languages used for the program and the ontology.</p><p>Ontologized programs combine the two formalisms guarded command language and description logics (DLs). To achieve a maximal separation of concerns, the behaviors and the static knowledge about the context of the program are specified separately and are only loosely coupled by an interface. This allows operational behavior to be specified in the usual way and existing program specifications to be extended and reused, and similarly, established or even standardized ontologies to be easily linked to a program.</p><p>Before we formally define ontologized programs, we explain how global states of ontologized programs are represented. Both formalisms, guarded command language and DL, may represent abstracted versions of global states in different ways. In the guarded command language, a state is described as an evaluation of variables, while in DL a state is described using a set of axioms. Both formalisms require different degrees of abstraction and detail, depending on their dedicated purpose. Consequently, global states of ontologized programs are composed of two components, which we call the command perspective and the DL perspective on the global system state (see Figure <ref type="figure" target="#fig_0">1</ref>). As can be seen in the figure, there are some elements in the two perspectives that correspond to each other, while others have no direct correspondence. 
The correspondence, depicted by arrows in the figure, is defined in the interface component of the ontologized program via a function Dc (DL to command), mapping DL axioms to command expressions. The part of the DL perspective inside the box is not mapped to anything on the command side; this is the static part, which contains the background knowledge about the system that is independent of the current state of the system. In contrast, the mapped axioms in the DL perspective can change, which is why we call them fluents. Note that we also allow the command perspective to use variables that have no correspondence in the DL perspective.</p><p>So far, our description of global states does not depend on any DL reasoning, which is, however, required by the DL component to provide background knowledge for the operational component of the program. To invoke a reasoning task from the command side of ontologized programs, we have introduced the notion of hooks in <ref type="bibr" target="#b5">[6]</ref>. A hook is a propositional variable in the program component that is linked by the interface to a Boolean query on the DL component of the ontologized program state. The result of the query determines the value of the propositional variable using the function cD (command to DL). In our example, we would define for each server i ∈ {1, 2, 3} one hook migrateServeri that stands for the knowledge that processes should be migrated from Server i:</p><formula xml:id="formula_3">cD(migrateServeri) = {OverloadedServer(s_i)}.</formula><p>In the ontologized state shown in Figure <ref type="figure" target="#fig_0">1</ref>, the hook migrateServer2 is active because the ontology in the DL perspective entails OverloadedServer(s_2).</p><p>To enable the actual use of hooks in the guarded command language, we have to slightly extend the definition of commands. 
Specifically, we define abstract stochastic programs as stochastic programs where guards are Boolean expressions over arithmetic expressions and hooks. For example, a command in an abstract stochastic program looks like the following:</p><formula xml:id="formula_4">(migrateServer2 ∧ server_proc1 = 2) → 1/2 : server_proc1 := 1 + 1/2 : server_proc1 := 3 (2)</formula><p>While the intention of this command is similar to that of command (1), this version omits the specification of the condition under which Server 2 needs to migrate. Specifying and testing this condition is now delegated to the DL component. Now we are ready to formally define ontologized programs. Given an abstract stochastic program P, we denote by H_P the hooks used in P. Definition 1. An ontologized program is a tuple P = ⟨P, K, I⟩ where -P = ⟨V_P, C_P, W_P, η_{P,0}⟩ is an abstract stochastic program, -K is a DL ontology called the static DL knowledge, -I = ⟨V_I, H_I, F, Dc, cD⟩ is a tuple describing the interface, where V_I is a set of public variables, H_I is a set of interface hooks, F is a set of DL axioms called fluents, and cD : H_I → ℘(A) and Dc : F → B(V_I) are two mappings, and P is compliant with I, i.e., H_P ⊆ H_I and V_I ⊆ V_P. If for a DL L every axiom in K ∪ F is an L axiom, we call P an L-ontologized program.</p></div>
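The interface of Definition 1 can be sketched in code. The snippet below is a drastically simplified illustration of ours (not the paper's machinery): fluents and axioms are plain strings, the guards in Dc are Python predicates, and a naive membership test stands in for DL entailment, where a real implementation would call a DL reasoner.

```python
# Static DL knowledge K and the two interface mappings of Definition 1.
K = {"Server(s1)", "Server(s2)"}

# Dc: fluent axiom -> guard over the command perspective (a predicate on eta).
Dc = {
    "runs(s1,p1)": lambda eta: eta["server_proc1"] == 1,
    "runs(s2,p1)": lambda eta: eta["server_proc1"] == 2,
}

# cD: hook -> Boolean query (a set of axioms) on the DL perspective.
cD = {"migrateServer2": {"OverloadedServer(s2)"}}

def dl_perspective(eta):
    """K_eta: K plus exactly those fluents alpha with eta |= Dc(alpha)."""
    return K | {axiom for axiom, guard in Dc.items() if guard(eta)}

def hook_active(hook, kb, entails):
    """A hook is active iff the DL perspective entails all its query axioms."""
    return all(entails(kb, ax) for ax in cD[hook])

# Placeholder for a DL reasoner call: "entailment" as plain membership.
naive_entails = lambda kb, ax: ax in kb

eta = {"server_proc1": 2}
print(sorted(dl_perspective(eta)))  # ['Server(s1)', 'Server(s2)', 'runs(s2,p1)']
```

With an actual reasoner, `OverloadedServer(s2)` could be derived from the fluents and the static knowledge; under the naive membership test it is entailed only if stated explicitly.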
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Three Semantics of Ontologized Programs</head><p>Stochastic programs are used as concise representations of MDPs that can be subject to a quantitative analysis. Analogously, ontologized programs are concise representations of ontologized MDPs, where a quantitative analysis on these MDPs can further rely on information provided by the ontology. Since there is an additional logical component in ontologized programs, there are different meaningful ways in which ontologized MDPs can be defined to represent the operational behavior of ontologized programs. We present three semantics for ontologized programs that differ in their treatment of inconsistencies. Such inconsistencies arise when variable evaluations in the stochastic program have an inconsistent meaning in the ontology. Choosing an appropriate semantics is a modeling decision to be made when constructing the ontologized program.</p><p>Therefore, we analyze the impact of this choice on the computational complexity of deciding consistency w.r.t. each of the different semantics. First, we define formally that ontologized states comprise a command perspective and a DL perspective (see Figure <ref type="figure" target="#fig_0">1</ref>).</p><p>Definition 2. Let P = ⟨P, K, I⟩ be an ontologized program as in Definition 1.</p><p>An ontologized state in P is a tuple q = ⟨η_q, K_q⟩, with command perspective η_q and DL perspective K_q, where</p><formula xml:id="formula_5">-K ⊆ K_q, -K_q ⊆ K ∪ F,</formula><p>-η_q ∈ Eval(V_P), and -α ∈ K_q iff η_q |= Dc(α) for every α ∈ F.</p><p>We call an ontologized state q inconsistent if K_q is inconsistent.</p><p>For every evaluation η ∈ Eval(V_P), there is always a unique KB K_η s.t. ⟨η, K_η⟩ is an ontologized state in P. We denote this state by s(P, η). 
To define updates on ontologized states, we exploit the fact that 1) updates can only directly modify the command perspective of ontologized states, and 2) the DL perspective of ontologized states is fully determined by the command perspective. Thus, we can easily lift the effect of updates on evaluations to updates on ontologized states. For an update u ∈ Upd(V_P) and an ontologized state q, we define u(q) = s(P, u(η_q)). Correspondingly, we extend stochastic updates σ : Upd(V_P) → [0, 1] to ontologized states q by setting σ(q, u(q)) = σ(u) for all u ∈ Upd(V_P). To illustrate how commands are executed on ontologized states, we consider the ontologized state q_1 shown in Figure <ref type="figure" target="#fig_0">1</ref> and the abstract command (2). First, we note that the guard of the command is active, since the hook migrateServer2 is active in q_1. The command can thus be executed on the state, and executes each of its updates with a probability of 1/2. For the first update, server_proc1 := 1, we obtain the ontologized state q_2 in which server_proc1 = 2 is replaced by server_proc1 = 1, and runs(s_2, p_2) is replaced by runs(s_1, p_2). Consequently, the hook migrateServer2 becomes inactive in q_2, since cD(migrateServer2) = {OverloadedServer(s_2)} and OverloadedServer(s_2) is not entailed by the DL perspective of q_2.</p></div>
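The lifting u(q) = s(P, u(η_q)) can be sketched directly: apply the update to the command perspective, then recompute the DL perspective from scratch. Everything below is an illustrative stand-in of ours; `dl_perspective` hard-codes the single fluent pair of the running example rather than evaluating a real Dc mapping.

```python
# Lifting an update u from evaluations to ontologized states.

def dl_perspective(eta):
    # Hypothetical Dc: runs(s2,p1) holds iff server_proc1 = 2, else runs(s1,p1).
    fluent = "runs(s2,p1)" if eta["server_proc1"] == 2 else "runs(s1,p1)"
    return frozenset({"Server(s1)", "Server(s2)", fluent})

def s(eta):
    """s(P, eta): the unique ontologized state <eta, K_eta> for eta."""
    return (frozenset(eta.items()), dl_perspective(eta))

def apply_update(u, eta):
    """u(q) = s(P, u(eta_q)): apply u (assumed total on V) to the command
    perspective, then recompute the DL perspective."""
    new_eta = {v: expr(eta) for v, expr in u.items()}
    return s(new_eta)

# The first branch of command (2): server_proc1 := 1.
u = {"server_proc1": lambda eta: 1}
q2 = apply_update(u, {"server_proc1": 2})
print(dict(q2[0]), sorted(q2[1]))
```

Because the DL perspective is fully determined by the command perspective, no extra bookkeeping is needed after an update; this mirrors the uniqueness of K_η stated after Definition 2.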
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Consistency-Independent Semantics</head><p>We first present the semantics for ontologized programs originally introduced in <ref type="bibr" target="#b5">[6]</ref>. This semantics does not a priori give inconsistent ontologized states a special meaning, and is therefore named consistency-independent semantics. The definition of the MDP induced by an ontologized program under consistency-independent semantics closely follows the one for MDPs induced by plain stochastic programs. Definition 3. Let P = ⟨P, K, I⟩ be an ontologized program as in Definition 1. The MDP induced by P under consistency-independent semantics is given by M_ci[P] = ⟨Q, Act, P, q_0, Λ, λ⟩, where</p><formula xml:id="formula_6">-Q = {s(P, η) | η ∈ Eval(V_P)}, -Act = Distr(Upd(P)), -q_0 = s(P, η_0), -Λ = H_P ∪ C(V_P), -λ(q) = C(η_q) ∪ {h ∈ H_P | K_q |= cD(h)}</formula><p>for every q ∈ Q, and -P(q, σ) = σ(q) for all ⟨g, σ⟩ ∈ C with λ(q) |= g.</p><p>P is inconsistent if there exists an inconsistent q ∈ Q reachable from q_0.</p><p>We can test consistency of ontologized programs using a reachability analysis on the state space. For this, we generate states one after the other, using DL reasoner calls for each state to determine their active hooks and their consistency. As each state requires polynomial space, this can be done in polynomial space with a k-oracle, where k is the complexity of entailment in the DL. A corresponding lower bound can be obtained by a reduction from the word problem for polynomially space-bounded Turing machines with k-oracle. The construction used here shows a close relationship between Turing machines with k-oracle and L-ontologized programs, provided that L is k-hard. This relationship is also used for later complexity results presented in this paper.</p><p>Theorem 1. Let L be a DL such that deciding entailment in L is k-complete. 
Then, deciding consistency of L-ontologized programs is PSpace^k-complete, even for non-stochastic programs.</p></div>
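The reachability analysis described before Theorem 1 can be sketched as a naive explicit-state search. In this sketch of ours, `successors` (command execution) and the oracle `is_inconsistent` (a DL reasoner call per state) are placeholders; the actual decision procedure enumerates states in polynomial space rather than storing the visited set.

```python
from collections import deque

def program_is_consistent(q0, successors, is_inconsistent):
    """Return False iff some inconsistent state is reachable from q0
    (Definition 3's notion of an inconsistent program)."""
    seen, frontier = {q0}, deque([q0])
    while frontier:
        q = frontier.popleft()
        if is_inconsistent(q):  # stands in for a DL consistency check
            return False
        for nxt in successors(q):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Toy instance: state 3 is "inconsistent" and reachable from state 0.
succ = {0: [1], 1: [2], 2: [3], 3: []}
print(program_is_consistent(0, lambda q: succ[q], lambda q: q == 3))  # False
```

The oracle calls dominate the cost, which is where the k-oracle in the PSpace^k bound enters.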
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Probability-Normalizing Semantics</head><p>If a stochastic program is inconsistent, consistency-independent semantics relies on the program specification to deal with inconsistent states. On the one hand, this can offer flexibility. On the other hand, it can be desirable to let the semantics handle the situation by definition. Since inconsistent states have no correspondence to states of the modeled system, they could stand for undesired states that should not occur in the MDP at all. The idea of probability-normalizing semantics is to remove all inconsistent states, as well as states from which inconsistent states cannot be avoided, and to locally normalize the probabilistic choices accordingly. Under this semantics, the ontology serves an additional purpose: not only does it assign meaning to the hooks, it also specifies which program states are possible, and thus restricts the transitions from a given state. This allows for a further separation of concerns, and thus for reusability of the behavioral and the DL component of the program: we may use the same behavioral component with different ontologies that put different constraints on the state space. In the running server example, we may want to analyze a system in which the servers have different software and/or hardware configurations, such that some processes cannot run on all servers. For instance, we might use a different ontology that specifies that Process 1 cannot run on Server 3. This restricts the possible outcomes of applying the command in (2) on the state in Figure <ref type="figure" target="#fig_0">1</ref> (with the modified ontology). It can now only move Process 1 to Server 1, so that this migration is performed with 100% probability.</p><p>To achieve this behavior under consistency-independent semantics, we would have to extend the commands of the program with a case distinction over all possible successors leading to inconsistent states, which is clearly undesirable in the above example. 
Instead, we capture the restriction to possible states of the MDP by defining probability-normalizing semantics for ontologized programs. Intuitively, under probability-normalizing semantics, the induced MDP can be obtained from the one under consistency-independent semantics by removing inconsistent states and then normalizing the resulting probabilities. It is possible that for some state q and command ⟨g, σ⟩ with λ(q) |= g such a normalization is not possible, since all successor states that should have a positive probability lead to an inconsistent state. In this case, the command cannot be applied on q. If no command can be applied anymore on a state, this state is removed from the MDP as well. Definition 4. Let M_ci[P] = ⟨Q, Act, P, q_0, Λ, λ⟩ for an ontologized program P. An MDP M is probability-normalizing w.r.t. P if q_0 is consistent and M = ⟨Q′, Act, P′, q_0, Λ, λ′⟩, where 1. Q′ ⊆ Q contains only consistent states, 2. λ′ is obtained from λ by restricting its domain to Q′, 3. for all q ∈ Q′ and σ ∈ Act s.t. P′(q, σ) is defined, we have for all q′ ∈ Q′: P′(q, σ, q′) = P(q, σ, q′) / ∑_{q″∈Q′} P(q, σ, q″). P is called probability-normalizable if there exists a probability-normalizing MDP w.r.t. P. In case Q′ and P′ are subset-maximal, M is then called the MDP induced by P under probability-normalizing semantics, denoted M_pn[P].</p><p>Recall that according to our definition, MDPs cannot have final states. Therefore, to obtain the probability-normalizing MDP, it does not suffice to remove inconsistent states; states from which inconsistent states cannot be avoided have to be removed as well. As this operation might lead to a removal of the initial state, not every ontologized program is probability-normalizable. While for consistency-independent semantics consistency of a program can be decided by determining reachability of inconsistent states, we have to solve a complementary task for probability-normalizability. 
Specifically, we have to decide whether one can find an unbounded path that never enters an inconsistent state, for which again polynomial space and an oracle for the reasoner suffice. As a result, deciding probability-normalizability has the same complexity as deciding consistency under consistency-independent semantics. Theorem 2. Let L be a DL for which deciding entailment has complexity k. Then, deciding whether an L-ontologized program is probability-normalizable is PSpace^k-complete, even for non-stochastic programs.</p></div>
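Condition 3 of Definition 4 can be illustrated with a small sketch of ours: renormalize a transition distribution after dropping inconsistent successors, and signal (here via `None`) when all probability mass is lost, i.e. when the command is not applicable in this state under probability-normalizing semantics.

```python
def normalize(dist, consistent):
    """dist: successor -> probability; consistent: the states kept in Q'.
    Returns the renormalized distribution, or None if no consistent
    successor has positive probability."""
    kept = {q: p for q, p in dist.items() if q in consistent}
    total = sum(kept.values())
    if total == 0:
        return None  # the command cannot be applied in this state
    return {q: p / total for q, p in kept.items()}

# Command (2) offers Server 1 and Server 3 with probability 1/2 each;
# with an ontology forbidding Process 1 on Server 3, only one target remains.
print(normalize({"proc_on_s1": 0.5, "proc_on_s3": 0.5}, {"proc_on_s1"}))
# {'proc_on_s1': 1.0}
```

This reproduces the example from the text: after the modified ontology rules out Server 3, the migration to Server 1 happens with probability 1.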
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Probability-Preserving Semantics</head><p>If we use probability-normalizing semantics, it is possible that the probabilities specified in the commands change. This is the right choice if the program uses probabilities to model randomized behavior as in our running example: the program migrates a process as soon as this becomes necessary, and it does so by choosing each possible server with equal probability. If the ontology restricts the number of possible choices, then consequently the probabilities need to change as well. However, there are other scenarios where a change of probabilities is not desired: rather than using stochastic commands to model randomized behavior, the program component may use stochastic commands to describe probabilistic outcomes of the simulated system that are based on measured probabilities. For instance, we might know that at any point in time the server has a probability of 1% of becoming dysfunctional and needing a restart. This probability should not be affected by the specifications: there might be a program state in which a dysfunctional server would necessarily lead to an inconsistency in the future. This should not mean that the server cannot become dysfunctional; rather, the current program state should be impossible by definition. To capture such phenomena in a semantics, we introduce probability-preserving semantics. Definition 5. Let M_ci[P] = ⟨Q, Act, P, q_0, Λ, λ⟩ for an ontologized program P. An MDP M is probability-preserving w.r.t. P if q_0 is consistent and M = ⟨Q′, Act, P′, q_0, Λ, λ′⟩ where 1. Q′ ⊆ Q contains only consistent states, 2. λ′ is obtained from λ by restricting its domain to Q′, and 3. for all P′(q, σ, q′) &gt; 0 we have that P′(q, σ, q′) = P(q, σ, q′). P is called probability-preservable if there exists a probability-preserving MDP w.r.t. P. 
In case Q′ and P′ are subset-maximal, M′ is then called the MDP induced by P under probability-preserving semantics, denoted M pp [P].</p><p>Note that Definition 5 requires M′ to be an MDP and hence P′ to be a probability transition function. Hence, Condition 3 implies that any transition that reaches an inconsistent state with positive probability w.r.t. P has no corresponding transition in P′.</p><p>To verify whether a program is probability-preservable, we have to take the non-deterministic and the probabilistic choices of the program into account differently. This differs from the other notions of consistency discussed here, for which a single path, traversing non-deterministic and probabilistic choices indifferently, is sufficient to witness inconsistency or probability-normalizability. We can characterize probability-preservability as follows: a program is probability-preservable iff there exists a resolution of the non-deterministic choices in the program such that for all probabilistic choices an inconsistent state is never reached. Thus, testing this property requires both existential and universal explorations of the state space, which is why we can use alternating Turing machines with polynomial space to solve the corresponding problem.</p><p>Theorem 3. Let L be a DL for which deciding entailment has complexity k. Then, deciding whether an L-ontologized program is probability-preservable is APSpace k -complete. For non-stochastic programs, it is PSpace k -complete.</p><p>Recall that APSpace = ExpTime (see for example Corollary 2 to Theorem 16.5 and Corollary 3 to Theorem 2.20 in <ref type="bibr" target="#b10">[11]</ref>). However, as an exponential Turing machine may also write exponentially long queries to the oracle, we may not always have APSpace k = ExpTime k .</p></div>
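The subset-maximal state set of Definition 5 can be viewed as a greatest fixpoint over the consistent states: repeatedly discard every state that has no enabled command all of whose positive-probability successors survive. A small illustrative sketch under our own encoding (`actions(q)` returns the commands whose guards hold in q, `succ(q, a)` their positive-probability successors); the program is then probability-preservable iff the initial state survives.

```python
def probability_preserving_states(states, actions, succ, is_consistent):
    """Sketch of the subset-maximal Q' of Definition 5: the greatest
    fixpoint of  Q' = {q consistent | some enabled action's
    positive-probability successors all lie in Q'}."""
    good = {q for q in states if is_consistent(q)}
    changed = True
    while changed:
        changed = False
        for q in list(good):
            # q survives only if some enabled action stays inside `good`
            if not any(all(s in good for s in succ(q, a))
                       for a in actions(q)):
                good.discard(q)
                changed = True
    return good  # preservable iff the initial state is in this set
```

This mirrors standard "safe set" computations on MDPs; the alternating polynomial-space algorithm behind Theorem 3 decides membership of the initial state in this set without materializing it.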
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">Ontology-mediated PMC</head><p>In <ref type="bibr" target="#b5">[6]</ref> we presented a technique to apply PMC techniques to ontologized programs to perform quantitative analysis tasks on knowledge-intensive systems. For this, we treated inconsistent states as consistent ones, following the consistency-independent semantics presented in Section 4.1. Inconsistencies in ontologized programs are not as harmful as in other logic-based formalisms, as stochastic properties can also be evaluated on MDPs with inconsistent states. In fact, they can even have a designated meaning, e.g., modeling exceptions the program has to face. An advantage of ontologized programs is that commands can be used to detect when a program is in an inconsistent state. To this end, a guard employing a hook inc that is satisfied in exactly those states that are inconsistent, e.g., with cD(inc) = {⊤ ⊑ ⊥}, can invoke the actions to undertake when an exception has occurred. Note that while all inconsistent states agree on the hooks that are active (it is always the complete set of hooks), the command perspective of states can still differ. Here, private variables may encode additional information about the current state that can be used to decide how to react to inconsistencies. An example for ontology-mediated PMC on consistency-independent semantics is to decide whether</p><formula xml:id="formula_7">P min (inc → (X¬inc ∨ XX¬inc)) &gt; 95%</formula><p>Intuitively, this states that an exception is always successfully handled with probability at least 95%, in the sense that whenever an inconsistent state is reached, a consistent state is reached again within at most two steps. 
Such a stochastic property can be checked using our method in <ref type="bibr" target="#b5">[6]</ref> and the PMC tool Prism <ref type="bibr" target="#b9">[10]</ref>.</p><p>Ontology-mediated PMC for probability-normalizing and -preserving semantics is not as straightforward as under consistency-independent semantics. While we could obtain a stochastic program that prevents reaching inconsistent states also under consistency-independent semantics, this requires an explicit handling of inconsistencies and modifications in the program that "provision" the possible variable evaluations. More specifically, we would then need the possibility to decide for each update in a command whether it would lead to an inconsistent state or not. While this would work out well in our running example, we cannot expect such an encoding for arbitrary ontologized programs. Fortunately, Definitions 4 and 5 are formulated in a constructive way, i.e., these consistency-dependent semantics are defined based on the consistency-independent one by manipulating the state space and the probabilistic transition function. Hence, we can restrict the state space to consistent states and adjust states and probabilities on-the-fly during the construction process of the MDP.</p></div>
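Such an on-the-fly construction under probability-normalizing semantics can be sketched as follows, in our own simplified encoding where each command is a dictionary mapping successor states to probabilities: during the reachability analysis, inconsistent successors are dropped and the remaining probability mass of each command is rescaled to 1.

```python
from collections import deque

def build_normalized_mdp(q0, commands, is_consistent):
    """Sketch of on-the-fly MDP construction under probability-normalizing
    semantics.  `commands(q)` yields the distributions of the commands
    enabled in q as dicts {successor: probability}; `is_consistent(q)`
    stands in for the DL reasoner oracle."""
    if not is_consistent(q0):
        return None
    mdp, frontier = {}, deque([q0])
    while frontier:
        q = frontier.popleft()
        if q in mdp:
            continue
        mdp[q] = []
        for dist in commands(q):
            kept = {s: p for s, p in dist.items() if is_consistent(s)}
            total = sum(kept.values())
            if total == 0:
                continue  # command blocked: all outcomes inconsistent
            # renormalize over the consistent successors
            mdp[q].append({s: p / total for s, p in kept.items()})
            frontier.extend(kept)
    return mdp
```

The probability-preserving variant would instead discard the whole command whenever `total < 1`, leaving the surviving distributions untouched.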
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Outlook</head><p>We are currently implementing a scenario to evaluate our implementation for ontology-mediated PMC under the different semantics presented in this paper. Here, we can benefit from the constructive definition of our semantics, which evaluates inconsistent states beforehand and can hence be directly integrated into the model-building process, e.g., in the probabilistic model checker Prism <ref type="bibr" target="#b9">[10]</ref>. There are some theoretical results that are still left open but should be easy to obtain: the complexity of the actual model-checking tasks, or the size of the rewritings that our practical method produces. Furthermore, there are some extensions and modifications worth investigating: our semantics offer two possible ways of treating probabilities occurring in the program to "restore" consistency. The choice of which one to select depends on the kind of stochastic phenomena actually modeled. It is easy to think of scenarios where both kinds of stochastics occur. This would require a more flexible type of ontologized programs, in which some distributions may be marked as "fixed", while others could be subject to normalization. In our framework, modifications of program states happen on the level of DL axioms and thus affect all models, similarly to <ref type="bibr">[4,</ref><ref type="bibr" target="#b7">8]</ref>. Another approach is that of DL action-based programs <ref type="bibr">[1,</ref><ref type="bibr" target="#b13">14]</ref>, where modifications of states happen on the level of interpretations. So, a variant of ontologized programs where states are associated not with ontologies but with interpretations, similar to <ref type="bibr">[1,</ref><ref type="bibr" target="#b13">14]</ref>, is certainly an interesting direction for future research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A Appendix</head><p>We provide here the proofs of our results that have been omitted due to space constraints in the main paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.1 Auxiliary Lemmas for the Hardness Results</head><p>We define the size of a concept/axiom/ontology in the usual way as its string length, that is, the number of symbols needed to write it down, where concept, role and individual names count as one symbol, as do logical operators, and number restrictions are encoded in unary.</p><p>The following auxiliary lemma will be helpful for the hardness results, as it shows how to represent ontologies based on a polynomially bounded enumeration of DL axioms. We first need a more liberal definition of conservative extensions. Definition 6. A renaming is a bijective function σ :</p><formula xml:id="formula_8">N C ∪N R ∪N I → N C ∪N R ∪N I s.t. σ(X) ∈ N C iff X ∈ N C , σ(X) ∈ N R iff X ∈ N R and σ(X) ∈ N I iff X ∈ N I .</formula><p>Given an ontology O and a renaming σ, we denote by Oσ the result of replacing every name X in O by σ(X).</p><p>Let O 1 and O 2 be two ontologies. Then, O 1 is a conservative extension of O 2 modulo renaming iff there exists some renaming σ s.t. for every axiom α using only names in O 2 :</p><formula xml:id="formula_9">O 2 , O 1 σ |= α iff O 2 |= α.</formula><p>Proof. Let n ∈ N be our bound on ontology sizes. Using standard structural transformations, we can compute for every SROIQ ontology O in polynomial time a normalized ontology that is a conservative extension of O and in which every axiom is of one of 12 normal forms. Let m, polynomial in n, bound the size of the normalized ontologies, and fix a set X of m names s.t. for any normalized ontology O of size m there exists a renaming σ s.t. Oσ uses only names from X. As the resulting ontology is still normalized, every such ontology uses only axioms from the set A = {α | α is normalized and uses only names from X and numbers ≤ m}. As there are 12 axiom types, and each uses at most 3 names from X and one number, the number of axioms in A is bounded by</p><formula xml:id="formula_10">p(n) = 12 · |X|³ · m = 12m⁴,</formula><p>which is polynomial in n because m is polynomial in n. The required function π can then, for example, be defined as follows: for i ∈ {0, . . . , p(n)}, we define π(i) = α, where 1. α is of the form (i mod 12) as above, 2. for a ∈ {1, 2, 3}, X j is the a-th name occurring in α, where j = ⌊i/(12m^(a−1))⌋ mod m, and 3. if α is of the form (4) or (5), i.e. it contains a number, then this number is</p><formula xml:id="formula_11">j = ⌊i/(12m³)⌋ mod m.</formula><p>For every ontology O, we obtain the index set I as required by first normalizing O, renaming it accordingly, and then assigning the indices.</p><p>Using an index function π as in Lemma 1, we can define a special type of oracle Turing machine called an (alternating) Turing machine with DL oracle. Such a Turing machine uses an oracle tape with tape alphabet {0, 1, ␣} (␣ being the blank symbol), and is specified using an enumeration π of DL axioms as above and some axiom α. Based on the current oracle tape content, we obtain an index set I that contains the number i iff the oracle tape contains a 1 at position i. The oracle then answers yes if {π(i) | i ∈ I} |= α, and no otherwise. For a given DL L, we may additionally require that the oracle only accepts inputs for which the corresponding ontology is in L, in which case we call the Turing machine a Turing machine with L-oracle for π, α.</p><p>Lemma 2. Let k be a complexity class and L a DL for which deciding entailment is k-hard. 
Then, for every Turing machine T with k-oracle and bound n on the size of the oracle tape, we can construct in time polynomial in n and T a Turing machine T′ with L-oracle for some axiom enumeration π and CI α that accepts the same set of words with at most polynomial overhead.</p><p>Proof. We first argue that it suffices to focus on entailment of axioms of the form α = A(a), since entailment of CIs can be reduced to it. Specifically, for any ontology O and CI C ⊑ D, we have</p><formula xml:id="formula_12">O |= C ⊑ D iff O ∪ {C(a), D ⊑ A} |= A(a),</formula><p>where a and A do not occur in O. Because entailment in L is k-hard, we can construct for every query to the k-oracle in polynomial time an ontology O that entails α iff the query should return true. Because the oracle tape is bounded, we can use Lemma 1 to translate that ontology into a corresponding index set to serve as input for the L-oracle. T′ thus proceeds as T, but before sending a query to the oracle, it reduces it to the entailment of the axiom α from an ontology O, which it then translates to a query to the L-oracle.</p><p>A.2 Consistency-Independent Semantics Theorem 1. Let L be a DL such that deciding entailment in L is k-complete. Then, deciding consistency of L-ontologized programs is PSpace k -complete, even for non-stochastic programs.</p><p>Proof. Let M[P] be the MDP induced by some ontologized program P as in Definition 3, where K is expressed in L. For the upper bound, we specify a non-deterministic PSpace algorithm that, starting from the initial state, explores all reachable states in M[P] and decides in k whether the current state is inconsistent. Specifically, for each current variable evaluation in P, we non-deterministically select a command in C P whose guard is satisfied. Then, we non-deterministically select an update that has a positive probability in the distribution of the command and apply this update to the current evaluation. 
In each step, we use a k-oracle to determine whether the current state is consistent and which hooks are active. As each state can be stored in polynomial space, the algorithm runs in PSpace k .</p><p>For the lower bound, we provide a polynomial reduction of the acceptance problem for polynomially space-bounded, deterministic Turing machines T with k-oracle. Let T = Q, Γ, γ 0 , Γ i , Γ o , q 0 , F, δ, q ? , q + , q − be such a Turing machine, where</p><formula xml:id="formula_13">-Q are the states, -Γ = {γ 0 , . . . , γ m } is the tape alphabet, -γ 0 is the blank symbol, -Γ i ⊆ Γ is the input alphabet, -Γ o is the oracle tape alphabet, -q 0 is the initial state, -F ⊆ Q is the set of accepting states, -δ : (Q \ (F ∪ {q ? })) × Γ × Γ o → Q × Γ × {−1, 0, +1} × Γ o × {−1, 0, +1}</formula><p>is the transition function (which works on two tapes: the standard tape and the oracle tape), q ? ∈ Q \ F is the query state to query the oracle, and q + , q − ∈ Q \ F are the query answer states.</p><p>The Turing machine uses two polynomially bounded tapes: one standard tape and one oracle tape. In every state that is non-final and not the query state, δ defines the successor state of T . Whenever the current state is q ? , the oracle is invoked, leading to T entering q + in the next step if the oracle accepts the content of the oracle tape, and entering q − otherwise. Based on T and the input word w = σ 0 . . . σ l , we build an ontologized program P T,w s.t. P T,w is inconsistent iff T accepts w.</p><p>Since entailment in L is k-hard, we can use Lemma 2 to construct a polynomially space-bounded Turing machine T′ with L-oracle for π, A(a) that accepts w iff so does T . Specifically, we obtain a set F q of DL axioms, polynomially bounded in |w|, a mapping π : {0, . . . , r(|w|)} → F q , where r is a polynomial, and T′ = Q′, Γ, γ 0 , Γ i , {0, 1}, q 0 , F′, δ′, q ? , q + , q − , where Q′ = {q 0 , . . . 
, q n }, and T′ accepts the same language as T , is still polynomially space-bounded, but uses a different oracle. If T′ is in state q ? , and the oracle tape contains the word σ 0 . . . σ r(|w|) , the Turing machine moves into the state q + iff {π(i) | 0 ≤ i ≤ r(|w|), σ i = 1} |= A(a), and otherwise moves to q − . Based on this Turing machine, we construct the ontologized program P T,w .</p><p>Let p be a polynomial s.t. T′ uses at most p(|w|) tape cells on input w. The program uses the following variables:</p><p>state ∈ {0, . . . , n} stores the state of the Turing machine, for 0 ≤ i ≤ p(|w|), dtape i ∈ {0, . . . , m} stores the letter on the default tape at position i, for 0 ≤ i ≤ r(|w|), otape i ∈ {0, 1} stores the letter on the oracle tape at position i, dpos ∈ {0, . . . , p(|w|)} stores the current position on the default tape, opos ∈ {0, . . . , r(|w|)} stores the current position on the oracle tape.</p><p>We use a single hook H = {oracle yes} standing for the positive oracle answer. For every q i1 , γ i2 , γ i3 , q i4 , γ i5 , Dir </p><p>To handle the behavior of the oracle, we further add the following commands, where q ? = q i1 , q + = q i2 and q − = q i3 : state = i 1 ∧ oracle yes → state := i 2 (4)</p><formula xml:id="formula_15">state = i 1 ∧ ¬oracle yes → state := i 3<label>(5)</label></formula><p>The initial state η P,0 of the program is the evaluation state = 0, dtape i = w i for i ∈ {0, . . . , l}, dtape i = 0 for i ∈ {l + 1, . . . , p(|w|)}, otape i = 0 for i ∈ {0, . . . , r(|w|)}, and dpos = opos = 0. This completes the definition of the program component P in P T,w .</p><p>As DL axioms, we set K = ∅ for the static DL knowledge and F = F q ∪ {⊤ ⊑ ⊥} for the fluents. It remains to specify the interface functions Dc and cD. For Dc, we set for every i ∈ {0, . . . 
, r(|w|)}, Dc(π(i)) = (otape i = 1) to translate the oracle calls, and</p><formula xml:id="formula_16">Dc(⊤ ⊑ ⊥) = ⋁ qi∈F′ (state = i),</formula><p>to encode that the ontologized states that correspond to accepting states of the Turing machine are inconsistent. Finally, we specify cD by setting cD(oracle yes) = {A(a)}.</p><p>It is standard to verify that the Turing machine T′ accepts w iff the ontologized program P T,w is inconsistent, i.e., can reach an inconsistent state. Since T′ accepts w iff so does T , this completes the reduction.</p><p>A.3 Probability-Normalizing Semantics Theorem 2. Let L be a DL for which deciding entailment has complexity k. Then, deciding whether an L-ontologized program is probability-normalizable is PSpace k -complete, even for non-stochastic programs.</p><p>Proof. For the upper bound, note that the only way for P to be inconsistent under probability-normalizing semantics is if all probabilistic choices would lead to an inconsistent state in M[P]. We can thus use a non-deterministic PSpace k -algorithm similar to the one described in the proof of Theorem 1 to determine whether a program is probability-normalizable: instead of testing for inconsistency of the current state, we here test for consistency. Note that it is sufficient to find a single execution path that never reaches an inconsistent state for an ontologized program to be probability-normalizable. This execution path might be infinite; however, because the state space is bounded, any infinite execution path would need to visit a state twice. We use a counter to put an exponential bound on the number of states to be visited, and store a non-deterministically selected state for later comparison. If this state repeats, we have found an unbounded execution path that never visits an inconsistent state, and accept. 
If we reach the bound without finding such a loop, we reject.</p><p>For the lower bound, we note that the reduction used in the proof of Theorem 1 can also be used for probability-normalizability. In that reduction, we reduce the word problem of deterministic polynomially space-bounded Turing machines with k-oracle to consistency of ontologized programs under consistency-independent semantics. Inspection of the construction reveals that the constructed program P T,w is not only non-stochastic, but even deterministic, by which we mean that in every ontologized state, there is at most one command that can be executed. Specifically, there is exactly one Command (3) in the proof of Theorem 1 for each assignment to the variables state, dpos, opos, dtape i and otape j , which means that in each ontologized state, at most one of these commands is executable. Moreover, for the assignment state = i 1 , where q i1 = q ? denotes the oracle query state, there is no such command, as there is no transition in the Turing machine for those states, and exactly one of the commands (4) and ( <ref type="formula" target="#formula_15">5</ref>) is executable if state = i 1 , unless the state is inconsistent. We obtain that, if some path in M[P T,w ] leads from the initial state to an inconsistent state, then all paths lead to an inconsistent state, in which case the program is not probability-normalizable. Consequently, P T,w is probability-normalizable iff P T,w is consistent, which is the case iff T does not accept w. Since non-acceptance of words in polynomially space-bounded Turing machines with k-oracle is also PSpace k -complete, we obtain that probability-normalizability is PSpace k -hard.</p><p>A.4 Probability-Preserving Semantics Theorem 3. Let L be a DL for which deciding entailment has complexity k. Then, deciding whether an L-ontologized program is probability-preservable is APSpace k -complete. For non-stochastic programs, it is PSpace k -complete.</p><p>Proof. 
Both bounds are via reductions from and to alternating Turing machines. An alternating Turing machine can switch between ∃-non-deterministic and ∀-non-deterministic transitions. Acceptance is then defined inductively according to the choices involved: a configuration is accepting if 1) it is in an accepting state, 2) there exists an ∃-non-deterministic transition to a configuration that is accepting, or 3) all ∀-non-deterministic transitions lead to an accepting configuration.</p><p>For the upper bounds, we use a modification of the algorithm used in the proof of Theorem 1 that runs in alternating polynomial space and decides probability-preservability of ontologized programs. Let P be an ontologized program as in Definition 1, and M ci [P] = Q, Act, P, ι, Λ, λ, wgt the MDP induced by P under consistency-independent semantics. The algorithm guesses the states q ∈ Q to be included in the probability-preserving MDP one after the other, starting from ι, where we use alternation to switch between the non-deterministic and the probabilistic choices of the program. Specifically, we have to guess a set of states Q′ s.t.: 1) no state in Q′ is inconsistent, 2) for every state in Q′ there is some guarded command that can be executed on it, and 3) all successor states into which this command would bring us with positive probability are included in Q′. Our algorithm thus proceeds as follows based on the currently guessed state q. If the current state is inconsistent, we reject. Otherwise, we non-deterministically select a guarded command g, σ ∈ C P s.t. q |= g (∃-non-determinism) and continue on all states q′ ∈ Q such that P(q, σ, q′) &gt; 0 (∀-non-determinism). Each state takes polynomial space, and we use a k-oracle to determine which hooks are active and whether the state is consistent. 
Since this procedure can be implemented by an alternating Turing machine with k-oracle that uses polynomial space, probability-preservability can be decided in APSpace k . If the program is non-stochastic, the ∀-non-determinism is not needed, and the algorithm runs in PSpace k .</p><p>For the lower bound, we adapt the reduction used in the proof of Theorem 1. However, this time, we reduce the word acceptance problem for polynomially space-bounded alternating Turing machines with k-oracle. Specifically, let T = Q, Γ, γ 0 , Γ i , Γ o , q 0 , F, δ, q ? , q + , q − , g be such a Turing machine, where Q = {q 0 , . . . , q n }, Γ = {γ 0 , . . . , γ n } is the standard tape alphabet, γ 0 ∈ Γ i is the blank symbol, Γ i ⊆ Γ is the input alphabet, Γ o is the oracle alphabet, q 0 is the initial state, F ⊆ Q is the set of accepting states,</p><formula xml:id="formula_17">δ : (Q \ (F ∪ {q ? })) × Γ × Γ o → ℘(Q × Γ × {−1, 0, +1} × Γ o × {−1, 0, +1})</formula><p>is the transition function, the states q ? , q + , q − ∈ Q manage the querying of the oracle, and g : Q → {∀, ∃, accept, reject} partitions the states Q into universally and existentially quantified states, and accepting and rejecting states, respectively. The definition of acceptance is standard; specifically, the oracle works as in the proof of Theorem 1, and a configuration with state q is accepting if 1) g(q) = accept, 2) g(q) = ∃ and one of the successor configurations is accepting, or 3) g(q) = ∀ and all successor configurations are accepting.</p><p>Again, we use Lemma 2 to construct, based on T and the input word w, a polynomially bounded set F q of DL axioms, a mapping π : {0, . . . , r(|w|)} → F q , where r is a polynomial, and an alternating Turing machine T′ = Q′, Γ, γ 0 , Γ i , {0, 1, ␣}, q 0 , F′, δ′, q ? , q + , q − , g′ with an L-oracle for π, A(a), that is polynomially space-bounded and accepts w iff so does T . If T′ is in state q ? , and the oracle tape contains the word σ 0 . . . 
σ r(|w|) , the Turing machine moves into the state q + iff {π(i) | 0 ≤ i ≤ r(|w|), σ i = 1} |= A(a), and otherwise moves to q − . Based on this Turing machine, we construct the ontologized program P T,w in the same way as in the proof of Theorem 1, but with a different set C P of commands.</p><p>We first model the ∀-transitions. For every</p><formula xml:id="formula_18">t = q i1 , γ i2 , γ i3 ∈ (Q \ (F ∪ {q ? , q + , q − })) × Γ × Γ o ,</formula><p>where g′(q i1 ) = ∀, every q i4 , γ i5 , Dir Note that for the program not to be probability-preservable, every command that we can execute in an ontologized state must lead to an inconsistency, which is why we model ∀-transitions in this way. For the ∃-transitions, we make use of the stochastic component. Let t 1 = q i1 , γ i2 , γ i3 ∈ (Q \ (F ∪ {q ? , q + , q − })) × Γ × Γ o , where g′(q i1 ) = ∃. For every i ∈ {0, . . . , p(|w|)} and j ∈ {0, . . . , r(|w|)}, we add the stochastic command</p><formula xml:id="formula_19">  state = i 1 ∧ dpos = i ∧ dtape i = i 2 ∧ opos = j ∧ otape j = i 3   → σ</formula><p>where σ is the stochastic update such that for every The corresponding transitions are excluded from the probability-preserving MDP if there is a positive probability of reaching an inconsistent state. This is why we model ∃-transitions in this way.</p><p>The stochastic commands that handle the use of the oracle are as in the proof of Theorem 1. It is standard to show that the resulting program is probability-preservable iff w is accepted by the Turing machine, so we obtain that probability-preservability is APSpace k -hard. Now assume that for no state q ∈ Q′, g′(q) = ∃. T′ then corresponds to a coNPSpace k -machine, and the ontologized program P T,w is non-stochastic. 
As a consequence, deciding whether non-stochastic programs are probability-preservable is coNPSpace k = PSpace k -hard.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Lemma 1 .</head><label>1</label><figDesc>There exists a polynomial function p s.t. for every n ∈ N, there exists a polynomially computable function π from indices {0, . . . , p(n)} to SROIQ axioms s.t. for every ontology O of size at most n, we can compute in time polynomial in O a set I ⊆ {0, . . . , p(n)} s.t. {π(i) | i ∈ I} is a conservative extension of O modulo renaming.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>i 4</head><label>4</label><figDesc>d , γ i6 , Dir o ∈ δ , every i ∈ {0, . . . , p(|w|)} and every j ∈ {0, . . . , r(|w|)}, we use the following non-stochastic command: dtape i := i 5 , dpos := dpos + Dir d , otape j := i 6 , opos := opos + Dir o  </figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>state = i 1 ∧</head><label>1</label><figDesc>d , γ i6 , Dir o ∈ δ (t), every i ∈ {0, . . . , p(|w|)} and every j ∈ {0, . . . , r(|w|)}, we use the command   dpos = i ∧ dtape i = i 2 ∧ opos = j ∧ otape j = i 3 i 4 dtape i := i 5 , dpos := dpos + Dir d , otape j := i 6 , opos := opos + Dir o  </figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>t 2 =</head><label>2</label><figDesc>q i4 , γ i5 , Dir d , γ i6 , Dir o ∈ δ (t 1 ), we have σ   state := i 4 dtape i := i 5 , dpos := dpos + Dir d , otape j := i 6 , opos := opos + Dir o   = 1 |δ (t 1 )| .</figDesc></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>The authors are supported by the DFG through the Collaborative Research Center TRR 248 (see https://perspicuous-computing.science, project ID 389792660), the Cluster of Excellence EXC 2050/1 (CeTI, project ID 390696704, as part of Germany's Excellence Strategy), and the Research Training Groups QuantLA (GRK 1763) and RoSI (GRK 1907).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Verification of Golog programs over description logic actions</title>
		<author>
			<persName><forename type="first">F</forename><surname>Baader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zarrieß</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of FroCos 2013. LNCS</title>
				<meeting>FroCos 2013. LNCS</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">8152</biblScope>
			<biblScope unit="page" from="181" to="196" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Principles of Model Checking</title>
		<author>
			<persName><forename type="first">C</forename><surname>Baier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Katoen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Trade-off analysis meets probabilistic model checking</title>
		<author>
			<persName><forename type="first">C</forename><surname>Baier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dubslaff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Klüppelholz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 23rd Conference on Computer Science Logic and the 29th Symposium on Logic In Computer Science (CSL-LICS)</title>
				<meeting>of the 23rd Conference on Computer Science Logic and the 29th Symposium on Logic In Computer Science (CSL-LICS)</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">10</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Actions and programs over description logic knowledge bases: A functional approach</title>
		<author>
			<persName><forename type="first">D</forename><surname>Calvanese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>De Giacomo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lenzerini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rosati</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Knowing, Reasoning, and Acting: Essays in Honour of H</title>
				<editor>
			<persName><forename type="middle">J</forename><surname>Levesque</surname></persName>
		</editor>
		<imprint>
			<publisher>College Publications</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">A Discipline of Programming</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">W</forename><surname>Dijkstra</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1976">1976</date>
			<publisher>Prentice-Hall</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Ontology-mediated probabilistic model checking</title>
		<author>
			<persName><forename type="first">C</forename><surname>Dubslaff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Koopmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Turhan</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-34968-4_11</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-34968-4_11" />
	</analytic>
	<monogr>
		<title level="m">Integrated Formal Methods -15th International Conference, IFM 2019</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">W</forename><surname>Ahrendt</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">L T</forename><surname>Tarifa</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">11918</biblScope>
			<biblScope unit="page" from="194" to="211" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Automated verification techniques for probabilistic systems</title>
		<author>
			<persName><forename type="first">V</forename><surname>Forejt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Z</forename><surname>Kwiatkowska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Norman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Parker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the School on Formal Methods for the Design of Computer, Communication and Software Systems, Formal Methods for Eternal Networked Software Systems (SFM&apos;11)</title>
				<meeting>of the School on Formal Methods for the Design of Computer, Communication and Software Systems, Formal Methods for Eternal Networked Software Systems (SFM&apos;11)</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="volume">6659</biblScope>
			<biblScope unit="page" from="53" to="113" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Description logic knowledge and action bases</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">B</forename><surname>Hariri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Calvanese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Montali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>De Giacomo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>De Masellis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Felli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Artif. Intell. Res</title>
		<imprint>
			<biblScope unit="volume">46</biblScope>
			<biblScope unit="page" from="651" to="686" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Probabilistic models for the guarded command language</title>
		<author>
			<persName><forename type="first">H</forename><surname>Jifeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Seidel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>McIver</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science of Computer Programming</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="171" to="192" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">PRISM 4.0: Verification of probabilistic real-time systems</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Z</forename><surname>Kwiatkowska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Norman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Parker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 23rd Intern. Conf. on Computer Aided Verification (CAV&apos;11). LNCS</title>
				<meeting>of the 23rd Intern. Conf. on Computer Aided Verification (CAV&apos;11). LNCS</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="volume">6806</biblScope>
			<biblScope unit="page" from="585" to="591" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Computational Complexity</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">H</forename><surname>Papadimitriou</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>Academic Internet Publ</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The temporal logic of programs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pnueli</surname></persName>
		</author>
		<idno type="DOI">10.1109/SFCS.1977.32</idno>
		<ptr target="https://doi.org/10.1109/SFCS.1977.32" />
	</analytic>
	<monogr>
		<title level="m">18th Annual Symposium on Foundations of Computer Science</title>
				<meeting><address><addrLine>Providence, Rhode Island, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1977-11-01">31 October - 1 November 1977</date>
			<biblScope unit="page" from="46" to="57" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Markov Decision Processes: Discrete Stochastic Dynamic Programming</title>
		<author>
			<persName><forename type="first">M</forename><surname>Puterman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1994">1994</date>
			<publisher>John Wiley &amp; Sons, Inc</publisher>
			<pubPlace>New York, NY</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Verification of knowledge-based programs over description logic actions</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zarrieß</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Claßen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 24th Intern. Joint Conf. on Artificial Intelligence (IJCAI&apos;15)</title>
				<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="3278" to="3284" />
		</imprint>
	</monogr>
</biblStruct>


				</listBibl>
			</div>
		</back>
	</text>
</TEI>
