<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">On the Complexity of Finding Good Proofs for Description Logic Entailments</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Christian</forename><surname>Alrabbaa</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Theoretical Computer Science</orgName>
								<orgName type="institution">TU Dresden</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Franz</forename><surname>Baader</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Theoretical Computer Science</orgName>
								<orgName type="institution">TU Dresden</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stefan</forename><surname>Borgwardt</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Theoretical Computer Science</orgName>
								<orgName type="institution">TU Dresden</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Patrick</forename><surname>Koopmann</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Theoretical Computer Science</orgName>
								<orgName type="institution">TU Dresden</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alisa</forename><surname>Kovtunova</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Theoretical Computer Science</orgName>
								<orgName type="institution">TU Dresden</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">On the Complexity of Finding Good Proofs for Description Logic Entailments</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">1B3879F210EB3BB2897573455A4FFC53</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:44+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>If a Description Logic (DL) system derives a consequence, then one can in principle explain such an entailment by presenting a proof of the consequence in an appropriate calculus. Intuitively, good proofs are ones that are simple enough to be comprehensible to a user of the system. In recent work, we have introduced a general framework in which proofs are represented as labeled, directed hypergraphs, and have investigated the complexity of finding small proofs. However, large size is not the only reason that can make a proof complex. In the present paper, we introduce other measures for the complexity of a proof, and analyze the computational complexity of deciding whether a given consequence has a proof of complexity at most q. This problem can be NP-complete even for EL, but we also identify measures where it can be decided in polynomial time.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The first work on explaining DL entailments is probably the PhD thesis of McGuinness <ref type="bibr" target="#b17">[18]</ref>, where the results obtained by the structural subsumption algorithm of the CLASSIC system <ref type="bibr" target="#b7">[8]</ref> are translated into proofs in a formal calculus. The thesis also investigates how to create shorter, more understandable proofs by pruning away unimportant parts. In <ref type="bibr" target="#b5">[6]</ref>, proofs of subsumptions generated by a tableau-based system are translated into sequent proofs. The authors then investigate how to make the sequent proofs shorter. More recent work on explaining DL entailment was often focused on computing so-called justifications, i.e., minimal subsets of the knowledge base (KB) from which the consequence in question follows (see, e.g., <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b23">24]</ref>). The basic assumption here is that, whereas KBs may be very large and have many consequences, a single consequence often follows from a small subset of the KB by an easy derivation. While this is true in certain applications <ref type="bibr" target="#b4">[5]</ref>, in general it may be quite hard for a user to see without help why a consequence follows from a given justification <ref type="bibr" target="#b13">[14]</ref>. On the one hand, this has triggered research into assessing the complexity of a justification, i.e., how hard it is to derive the given consequence from the justification <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b18">19]</ref>. On the other hand, it has rekindled the interest in generating proofs appropriate for human consumption. 
For example, the explanation plugin for the ontology editor Protégé described in <ref type="bibr" target="#b14">[15]</ref> can not only produce justifications but also display proofs, provided that proofs of an appropriate form are returned by the employed reasoner, an assumption that is, e.g., satisfied by the reasoner ELK <ref type="bibr" target="#b15">[16]</ref>. While these proofs are represented in the formal syntax employed by Protégé, the work reported in <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b22">23]</ref> uses ontology verbalization techniques to translate proofs into natural language text.</p><p>Recently, we have investigated the complexity of deciding whether proofs of a certain size exist, where size is measured by the number of steps involved in the proof <ref type="bibr" target="#b0">[1]</ref>. Instead of concentrating on a specific DL and proof calculus or reasoner for this DL, we used a general framework in which proofs are represented as labeled, directed, acyclic hypergraphs whose vertices are labeled with axioms and whose hyperedges correspond to sound derivation steps. Note that such proofs need not be tree-shaped since a single axiom can be reused in multiple inference steps, thereby reducing the overall size of the proof. To abstract away from implementation details, we assume that a reasoner (called deriver to distinguish it from an actual implemented system) generates a so-called derivation structure, which consists of possible proof steps, from which actual proofs can be constructed. For example, if we consider the consequence-based reasoning approaches for the DLs EL and ELI described in <ref type="bibr" target="#b2">[3]</ref>, then we could obtain a derivation structure for a given KB and consequence by collecting all (finitely many) instantiated classification rules. 
The EL reasoner ELK actually returns such a derivation structure, which however only contains the rule instances that have actually been used during the reasoning process.</p><p>In this paper, we extend the results of <ref type="bibr" target="#b0">[1]</ref> by considering more measures of proof complexity (other than size), with the ultimate goal of finding proofs that are easy to understand for users. In a recent user study <ref type="bibr" target="#b6">[7]</ref>, for example, it was observed that proof understandability can be influenced by the structure (e.g. linear vs. several independent reasoning threads), the complexity of the individual reasoning steps (e.g. whether a combination of quantifiers is involved), the representation of the domain (e.g. abstract concept and role names vs. long domain-specific names), and the overall format (e.g. graphical vs. textual).</p><p>In this paper, we consider a few possible approaches to defining proof complexity, where we focus on the overall structure of the proof. For this reason, we are not interested in the complexity/weight of individual axioms or inference steps, which has already been investigated in the literature <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b19">20]</ref>. For most of the paper, we use the size of an axiom (or group of axioms) as a proxy for their complexity, but it is easy to substitute other measures from the literature.</p><p>We now list a few approaches for defining proof complexity, which we discuss throughout this paper.</p><p>4. The hardness of a proof corresponds to its (weighted) tree size, i.e., the number (weighted sum) of vertices in its tree unraveling <ref type="bibr" target="#b0">[1]</ref>. 5. A proof is as hard to understand as the hardest inference step. 6. The hardness of a proof is the sum (product, average, . . . ) of inference step weights. 7. 
Proof complexity is determined by the number of TBox axioms that are used, i.e., the size of a corresponding justification. 8. A proof is more complex if there are more inference steps between the TBox axioms and the conclusion. We could compute the maximal distance (depth), or the maximal sum of weights along such a path. 9. A linear proof is easier to follow, hence we could measure the pathwidth of a proof (when viewing a proof as an undirected graph obtained by replacing each directed hyperedge with several undirected edges between the conclusion and each premise). 10. We could consider a similarly adapted notion of (undirected) treewidth as a measure for the complexity of a proof, which reflects its degree of connectivity. 11. Multiple adjacent reasoning steps using the same rule schema are easier to understand than when these steps are interspersed with other types of inference rules. 12. For the presentation to a user, it may also be relevant whether a proof is planar, i.e., whether it can be drawn without intersecting (hyper)edges.</p><p>There are of course many more ways in which "proof complexity" could be measured. This paper takes a few steps towards classifying the complexity of finding "best" proofs w.r.t. any such measure. Before we can introduce this problem formally, we recall the basic definitions from <ref type="bibr" target="#b0">[1]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Preliminaries</head><p>We assume a basic familiarity with description logics <ref type="bibr" target="#b2">[3]</ref>. Most of our theoretical discussions apply to an arbitrary (description) logic; from now on, we fix one such logic L, where we focus on the terminological part, i.e., the TBox. Following the notation from <ref type="bibr" target="#b0">[1]</ref>, S L denotes all TBox axioms that can be formulated in L. We assume that the size |α| of such an axiom α is defined in some way, e.g. by the number of symbols in α. Given a TBox T and axiom α, a justification for α in T is a minimal subset J ⊆ T such that J |= α. Already for EL-ontologies, for which finding a single justification is possible in polynomial time, there may still be exponentially many justifications, and finding a justification of size ≤ n is NP-complete <ref type="bibr" target="#b3">[4]</ref>.</p><p>While justifications can pinpoint the reasons for an entailment in a TBox, we are interested in more detailed proofs such as, for example, the one in Figure <ref type="figure">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Hypergraphs</head><p>As in <ref type="bibr" target="#b0">[1]</ref>, we view proofs as directed hypergraphs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 1 (Hypergraph).</head><p>A (finite, directed, labeled) hypergraph <ref type="bibr" target="#b20">[21]</ref> is a triple H = (V, E, ℓ), where -V is a finite set of vertices, -E is a set of hyperedges (S, d) with source vertices S ⊆ V and target vertex d ∈ V , and -ℓ : V → S L is a labeling function that assigns axioms to vertices.</p><p>The size of H, denoted |H|, is measured by the size of the labels of its hyperedges:</p><formula xml:id="formula_0">|H| := Σ_(S,d)∈E |(S, d)|, where |(S, d)| := |ℓ(d)| + Σ_v∈S |ℓ(v)|.</formula><p>A vertex v ∈ V is called a leaf if it has no incoming hyperedges, i.e., there is no (S, v) ∈ E, and v is a sink if it has no outgoing hyperedges, i.e., there is no (S, d) ∈ E such that v ∈ S. We denote the set of all leaves and the set of all sinks in H as leaf (H) and sink(H), respectively.</p><p>A hypergraph</p><formula xml:id="formula_1">H′ = (V′, E′, ℓ′) is called a subgraph of H = (V, E, ℓ) if V′ ⊆ V , E′ ⊆ E and ℓ′ = ℓ|_V′ .</formula><p>In this case, we also say that H contains H′ and write</p><formula xml:id="formula_2">H′ ⊆ H. Given two hypergraphs H 1 = (V 1 , E 1 , ℓ 1 ) and H 2 = (V 2 , E 2 , ℓ 2 ) s.t. ℓ 1 (v) = ℓ 2 (v) for every v ∈ V 1 ∩ V 2 , their union is defined as H 1 ∪ H 2 := (V 1 ∪ V 2 , E 1 ∪ E 2 , ℓ 1 ∪ ℓ 2 ).</formula><p>Definition 2 (Cycle, Tree). Given a hypergraph H = (V, E, ℓ) and s, t ∈ V , a path P of length q &gt; 0 in H from s to t is a sequence of vertices and hyperedges</p><formula xml:id="formula_3">P = (d 0 , (S 1 , d 1 ), d 1 , (S 2 , d 2 ), . . . , d q−1 , (S q , d q ), d q ),</formula><p>where d 0 = s, d q = t, and d j−1 ∈ S j for all j, 1 ≤ j ≤ q. If there is such a path in H, we say that t is reachable from s in H. If t = s, then P is called a cycle.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The hypergraph H is acyclic if it does not contain a cycle. The hypergraph H is connected if every vertex is connected to every other vertex by a series of paths and reverse paths.</p><p>A tree H = (V, E, ℓ) with root t ∈ V is a hypergraph in which t is a sink and is reachable from every vertex v ∈ V \ {t} by exactly one path.</p><p>In particular, the root is the only sink in a tree, and all trees are acyclic and connected. </p></div>
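The notions of leaves, sinks, and acyclicity above are small enough to sketch in code. The following Python model is purely illustrative and not part of the paper's framework; all names are our own, and axiom labels are plain strings:

```python
from itertools import chain

class Hypergraph:
    """Finite, directed, labeled hypergraph in the sense of Definition 1:
    edges are pairs (S, d) with S a frozenset of sources and d the target."""

    def __init__(self, vertices, edges, label):
        self.vertices = set(vertices)
        self.edges = {(frozenset(s), d) for s, d in edges}
        self.label = dict(label)

    def leaves(self):
        """Vertices with no incoming hyperedge (S, v)."""
        return self.vertices - {d for _, d in self.edges}

    def sinks(self):
        """Vertices that occur in no source set S."""
        used = set(chain.from_iterable(s for s, _ in self.edges))
        return self.vertices - used

    def is_acyclic(self):
        """A path step goes from some u in S to d, so the hypergraph is
        acyclic iff the digraph {u -> d | (S, d) in E, u in S} is acyclic
        (checked here with Kahn's algorithm)."""
        succ = {v: set() for v in self.vertices}
        indeg = {v: 0 for v in self.vertices}
        for s, d in self.edges:
            for u in s:
                if d not in succ[u]:
                    succ[u].add(d)
                    indeg[d] += 1
        queue = [v for v in self.vertices if indeg[v] == 0]
        seen = 0
        while queue:
            v = queue.pop()
            seen += 1
            for w in succ[v]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        return seen == len(self.vertices)

# A small proof from T = {A⊑B, B⊑∃r.A} (cf. Figures 2 and 3).
proof = Hypergraph(
    vertices={1, 2, 3, 4},
    edges=[({1, 2}, 3), ({1, 3}, 4)],
    label={1: "A⊑B", 2: "B⊑∃r.A", 3: "A⊑∃r.A", 4: "A⊑B⊓∃r.A"},
)
```

Here the TBox axioms (vertices 1 and 2) are exactly the leaves, the goal axiom is the unique sink, and the structure is acyclic, as Definition 5 will require.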
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 3 (Homomorphism).</head><p>Let H = (V, E, ℓ) and H′ = (V′, E′, ℓ′) be two hypergraphs. A homomorphism from H to H′, denoted h : H → H′, is a mapping h : V → V′ s.t. for all (S, d) ∈ E, one has h(S, d) := ({h(v) | v ∈ S}, h(d)) ∈ E′ and, for all v ∈ V , it holds that ℓ′(h(v)) = ℓ(v). Such an h is an isomorphism if it is a bijection and its inverse h⁻¹ : H′ → H is also a homomorphism.</p></div>
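Checking whether a given vertex map is a homomorphism is a direct translation of Definition 3. The following Python sketch is illustrative only (our representation: hypergraphs as triples (V, E, label) with edges (frozenset(sources), target)); it maps a tree proof with a duplicated axiom vertex onto the corresponding proof without label repetition:

```python
def is_homomorphism(h, H1, H2):
    """Check Definition 3: h maps V1 into V2, preserves every hyperedge
    (S, d) as ({h(v) | v in S}, h(d)), and preserves vertex labels."""
    V1, E1, l1 = H1
    V2, E2, l2 = H2
    if not all(h[v] in V2 for v in V1):
        return False
    if not all((frozenset(h[v] for v in s), h[d]) in E2 for s, d in E1):
        return False
    return all(l2[h[v]] == l1[v] for v in V1)

# Tree-shaped proof: the axiom A⊑B appears twice (vertices a1 and a2).
H_tree = ({"a1", "a2", "b", "c", "d"},
          {(frozenset({"a1", "b"}), "c"), (frozenset({"a2", "c"}), "d")},
          {"a1": "A⊑B", "a2": "A⊑B", "b": "B⊑∃r.A",
           "c": "A⊑∃r.A", "d": "A⊑B⊓∃r.A"})
# The same proof without label repetition: a1 and a2 are merged into a.
H_dag = ({"a", "b", "c", "d"},
         {(frozenset({"a", "b"}), "c"), (frozenset({"a", "c"}), "d")},
         {"a": "A⊑B", "b": "B⊑∃r.A", "c": "A⊑∃r.A", "d": "A⊑B⊓∃r.A"})

h = {"a1": "a", "a2": "a", "b": "b", "c": "c", "d": "d"}
ident = {v: v for v in H_dag[0]}
```

The map h witnesses that the tree proof is admissible relative to the merged structure; merging like-labeled vertices is exactly what homomorphic images do in Lemma 10 below.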
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Proofs</head><p>The following definition formalizes basic requirements for hyperedges to be considered valid inference steps from a given TBox.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 4 (Derivation Structure).</head><p>A derivation structure D = (V, E, ℓ) over a TBox T is a hypergraph that is -grounded, i.e., every leaf v in D is labeled by ℓ(v) ∈ T ; and -sound, i.e., for every (S, d) ∈ E, the entailment {ℓ(s) | s ∈ S} |= ℓ(d) holds.</p><p>We now define proofs as special derivation structures that derive a goal axiom.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 5 (Proof).</head><p>Given an axiom η and a TBox T , a proof for T |= η is a derivation structure P = (V, E, ) over T that contains exactly one sink v η ∈ V , which is labeled by η, and is acyclic.</p><p>A tree proof is a proof that is a tree. A subproof of a hypergraph H is a subgraph of H that is a proof.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figures 2 and 3 both prove A B ∃r.A from T = {A B, B ∃r.A}.</head><p>The first proof is presented in hypergraph notation (cf. Figure <ref type="figure">1</ref>) as a tree and the second one as a hypergraph without axiom label repetition, where the TBox axioms are marked with a thick border. Both of them can be seen as proofs in the sense of Definition 5 and they use the same inference steps, but have different numbers of vertices.</p><p>Two useful operations are the removal and replacement of subproofs above a vertex. Given a proof P = (V, E, ℓ) and a vertex v ∈ V , the subproof of P with sink v is the largest subgraph P v = (V v , E v , ℓ v ) ⊆ P, where V v contains all vertices in V that have a path to v in P. Dually, we denote by</p><formula xml:id="formula_6">P −v = (V −v , E −v , ℓ −v ) the largest subgraph of P such that -leaf (P −v ) = leaf (P) ∪ {v} and -sink(P −v ) = sink(P).</formula><p>Intuitively, we obtain P −v by removing every vertex from P from which every path to the sink goes through v. P −v need not be a proof since v is now a leaf, but ℓ(v) may not be an axiom from T . Finally, for a proof P′ = (V′, E′, ℓ′) with sink v such that V ∩ V′ = {v} and ℓ′(v) = ℓ(v), we define the proof P[v → P′] as P −v ∪ P′. The intuitive idea here is that v, like every vertex in V , is labeled with an axiom that has its own proof above it, and P −v is obtained by removing the proof above v; vertices are kept only if they are used to prove other vertices in P. Then, P[v → P′] is the result of "replacing" the proof above v by P′. Moreover, since P and P′ have only one vertex in common, it is guaranteed that the resulting hypergraph P[v → P′] is acyclic. Additionally, unraveling P into a tree can be seen as the result of recursively applying P[v → P v ] to every vertex v in P.</p></div>
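The operations P_v, P_{-v}, and P[v → P′] can be read off the definitions directly. The following Python functions are an illustrative sketch under our own representation (proofs as triples (V, E, label) with hyperedges (frozenset(premises), conclusion)), not code from the paper:

```python
def subproof_at(proof, v):
    """P_v: the largest subgraph containing exactly the vertices
    that have a path to v, with all hyperedges among them."""
    V, E, label = proof
    keep, stack = {v}, [v]
    while stack:
        d = stack.pop()
        for s, t in E:
            if t == d:
                for u in s - keep:
                    keep.add(u)
                    stack.append(u)
    edges = {(s, d) for s, d in E if d in keep}
    return keep, edges, {u: label[u] for u in keep}

def remove_above(proof, v, sink):
    """P_{-v}: drop every vertex from which all paths to the sink
    go through v; v itself is kept but becomes a leaf."""
    V, E, label = proof
    keep, stack = {sink}, [sink]
    while stack:
        d = stack.pop()
        if d == v:              # v turns into a leaf: skip its premises
            continue
        for s, t in E:
            if t == d:
                for u in s - keep:
                    keep.add(u)
                    stack.append(u)
    edges = {(s, d) for s, d in E if d in keep and d != v}
    return keep, edges, {u: label[u] for u in keep}

def replace_above(proof, v, sink, alt):
    """P[v -> P']: glue an alternative proof alt of label(v) onto P_{-v}.
    Requires that alt and the remainder share exactly the vertex v."""
    V1, E1, l1 = remove_above(proof, v, sink)
    V2, E2, l2 = alt
    assert V1 & V2 == {v} and l1[v] == l2[v]
    return V1 | V2, E1 | E2, {**l1, **l2}

# Example: c is derived from {a, b}, and the sink f from {c, e}.
P = ({"a", "b", "c", "e", "f"},
     {(frozenset({"a", "b"}), "c"), (frozenset({"c", "e"}), "f")},
     {"a": "ax1", "b": "ax2", "c": "lem", "e": "ax3", "f": "goal"})
# Alternative one-step proof of c's label from a fresh axiom x.
alt = ({"x", "c"}, {(frozenset({"x"}), "c")}, {"x": "ax4", "c": "lem"})
```

In the example, replace_above(P, "c", "f", alt) discards the step deriving c from {a, b} and derives c from the new axiom x instead, exactly as in the textual description of P[v → P′].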
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Derivers</head><p>As in <ref type="bibr" target="#b0">[1]</ref>, we rely on a deriver (imagine a DL reasoner) that, given a TBox T and a goal axiom η, produces a derivation structure D that contains "all possible proofs". This structure describes all instances of inference rules that are relevant for constructing proofs for η, without us having to know how inferences are performed. The quest for a good proof thus becomes a search for a good proof expressible by the inference steps in D. Since reasoners may not be complete for proving arbitrary axioms of L, we restrict η to a subset C L ⊆ S L of supported consequences.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 6 (Deriver).</head><p>A deriver D is given by a set C L ⊆ S L and a function that assigns derivation structures to pairs (T , η) of TBoxes T ⊆ S L and axioms η ∈ C L , such that T |= η iff D(T , η) contains a proof for T |= η. A proof P for T |= η is called admissible w.r.t. D(T , η) if there is a homomorphism h : P → D(T , η). We call D a polynomial deriver if there exists a polynomial p(x) such that the size of D(T , η) is bounded by p(|T | + |η|).</p><p>We assume w.l.o.g. that D(T , η) does not contain two vertices with the same label; this does not affect our results.</p><p>Elk <ref type="bibr" target="#b15">[16]</ref> can be seen as a polynomial deriver for EL in this sense, because it instantiates the inference rules with only a polynomial number of EL-concepts, namely the subconcepts appearing in T or η. The derivation structure Elk(T , η) contains all allowed instances of these rules, where each axiom is represented by a unique vertex. Since the number of premises in each rule is bounded by 2, the size of this structure is polynomial. Elk is complete only for goal axioms of the form C D, where D is a concept name and C is a subconcept from T . To prove other kinds of entailments T |= η, one first has to adapt T and η, thus leading to a different derivation structure Elk(T , η).</p><p>There are consequence-based derivers for other DLs, but they may not be polynomial <ref type="bibr" target="#b8">[9]</ref>. In <ref type="bibr" target="#b0">[1]</ref>, we described a polynomial deriver for expressive description logics that is based on forgetting <ref type="bibr" target="#b16">[17]</ref> and may take double-exponential time to compute the polynomial derivation structure D(T , η). In <ref type="bibr" target="#b0">[1]</ref>, we also investigated exponential derivers, but here we focus on the case of polynomial derivers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Measuring Proof Complexity</head><p>Taking into consideration the variety of examples in the introduction, we define a general class of complexity measures for proofs. Our goal is to find proofs that minimize these measures, i.e., lower numbers are better. Definition 7. A (complexity) measure is a function m : P L → Q ≥0 , where P L is the set of all proofs over L and Q ≥0 is the set of non-negative rational numbers.</p><p>We call m a Ψ-measure if, for every P ∈ P L ,</p><p>[P] m(P) is computable in polynomial time in the size of P,</p><p>[SI] every subproof of a homomorphic image of P weighs no more than P, i.e., m(P′) ≤ m(P) for any homomorphism h : P → P′′ and P′ ⊆ h(P) such that P′ ∈ P L .</p><p>Intuitively, a Ψ-measure m does not increase when the proof gets smaller, either when parts of the proof are removed (i.e., for a subproof) or when parts are merged (i.e., for a homomorphic image). We cannot replace the property [SI] by two separate properties for each of these cases since the image of a proof P may not be acyclic, but has relevant acyclic subgraphs that should also not have larger weight than P. Due to [SI], there always exists a proof P for T |= η that is minimal w.r.t. m as well as non-redundant, i.e., there is no subproof P′ ⊂ P for T |= η. Moreover, two isomorphic proofs have the same weight w.r.t. m.</p><p>In Table <ref type="table" target="#tab_0">1</ref>, we provide examples of how the complexity measures from the introduction can be defined formally, where by P′ ∈ P we denote that P′ is a path in P, and by |P′| the length of P′. Since P is acyclic, there is always at least one such path. In this table, we use the size of an axiom or edge as a proxy for its complexity; other values from Q ≥0 could be used instead. The table covers all measures from the introduction except for Items 4 and 9-12. 
Tree size (Item 4) is defined recursively; we discuss this measure in the context of a more general family of "local" measures later in Section 4. The remaining items (9-12) are not Ψ-measures: the problems behind Items 9 and 10, i.e., determining the pathwidth or treewidth of the graph underlying a proof, are NP-hard <ref type="bibr" target="#b1">[2]</ref>. Moreover, note that treewidth is not interesting in our context, since we can always obtain a tree-shaped proof (with minimal treewidth 1) by unraveling. A similar argument applies to Item 12 (planarity) since every tree is planar. We will discuss Item 11 separately in Section 5.</p></div>
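Several of the Ψ-measures discussed here (hardest axiom, justification size, and depth as the maximal number of steps from a leaf to the sink) are simple dynamic programs over the proof DAG. The following rough Python sketch uses string length as a stand-in for axiom size, as the text suggests; the representation and names are ours, not the paper's:

```python
from functools import lru_cache

# A proof is (V, E, label); E contains pairs (frozenset(premises), conclusion).

def hardest_axiom(proof):
    """Weight of the hardest axiom, with |axiom| approximated by len()."""
    V, E, label = proof
    return max(len(label[v]) for v in V)

def justification_size(proof):
    """Number of distinct TBox axioms used, i.e., distinct leaf labels."""
    V, E, label = proof
    leaves = V - {d for _, d in E}
    return len({label[v] for v in leaves})

def depth(proof, sink):
    """Maximal number of hyperedges on a path from a leaf to the sink."""
    V, E, label = proof
    incoming = {}
    for s, d in E:
        incoming.setdefault(d, []).append(s)

    @lru_cache(maxsize=None)
    def dep(v):
        return max((1 + dep(u) for s in incoming.get(v, []) for u in s),
                   default=0)

    return dep(sink)

# The proof from Figures 2/3 in flattened syntax; a is reused twice.
P = ({"a", "b", "c", "f"},
     {(frozenset({"a", "b"}), "c"), (frozenset({"c", "a"}), "f")},
     {"a": "A⊑B", "b": "B⊑∃r.A", "c": "A⊑∃r.A", "f": "A⊑B⊓∃r.A"})
```

Memoization makes all three computations polynomial in the size of the proof, which is the content of property [P] for these measures.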
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lemma 8. Tree size and all measures in Table <ref type="table" target="#tab_0">1</ref> except for depth and worst path are Ψ-measures.</head><p>We now formally define our main problem: finding optimal admissible proofs w.r.t. a measure m and D(T , η). Definition 9 (Optimal Proof). Let D be a deriver and m be a measure. Given a TBox T and an axiom η ∈ C L , an admissible proof P w.r.t. D(T , η) is called optimal w.r.t. m if m(P) is minimal among all such proofs. The associated decision problem, denoted OP(D, m), is to decide, given T and η as above and q ∈ Q, whether there is an admissible proof P w.r.t. D(T , η) with m(P) ≤ q.</p><p>If D is a polynomial deriver, we add the superscript poly to OP. If m is a Ψ-measure, we add a subscript Ψ, e.g. OP poly Ψ (D, m). For the complexity analysis, we assume the deriver to be given in the form of a method that allows accessing elements of D(T , η). The complexity of this method is not our concern here; we simply assume in the following that elements of D(T , η) can be accessed in constant time. Since all of our complexity bounds are at least P, our results remain the same for derivers whose derivation structure can be constructed in polynomial time even if we take the complexity of computing the derivation structure into account.</p><p>We can show that if P is optimal w.r.t. a Ψ-measure m and D(T , η), then the homomorphic image of P in D(T , η) is also a proof. Thus, to decide OP Ψ (D, m) we can focus on proofs that are subgraphs of D(T , η). This is shown by the following lemma, which generalizes a result from <ref type="bibr" target="#b0">[1]</ref>. The full proof can be found in the appendix. Lemma 10. For any Ψ-measure m, if there is an admissible proof P w.r.t. D(T , η) with m(P) ≤ q for some q ∈ Q, then there exists a subproof Q of D(T , η) for T |= η with m(Q) ≤ q.</p><p>Proof (Sketch). Let P be any such proof. Then there is a homomorphism h : P → D(T , η). If h(P) is a proof, then the claim immediately follows from [SI]. 
Otherwise, h(P) contains cycles, which must be due to different vertices in P with the same label that are connected by a path. We can iteratively remove such paths to obtain another proof P * that can be homomorphically mapped to a subproof Q of h(P) ⊆ D(T , η), and therefore m(Q) ≤ m(P) ≤ q by [SI].</p><p>In particular, this shows that an optimal proof always exists (recall that derivation structures are always finite). We can now show the following generic upper bound for our decision problem.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Theorem 12.</head><p>OP poly Ψ (D, m) is in NP.</p><p>Proof. We start by guessing a subgraph P of D(T , η). This can be done in non-deterministic polynomial time in the size of D(T , η), which is polynomial in the size of T and η. This subgraph P is necessarily sound, and checking the remaining properties of Definitions 4 and 5 and that m(P) ≤ q can be done in polynomial time in the size of P, in particular due to [P] in Definition 7.</p><p>Note that one can similarly show an upper bound of NP for pathwidth, since checking m(P) ≤ q is in NP <ref type="bibr" target="#b1">[2]</ref>. (First guessing P and then verifying m(P) ≤ q can be performed by a polynomially bounded sequence of non-deterministic steps.) Matching lower bounds hold for OP poly Ψ (D, m) with a fixed D and m being vertex size <ref type="bibr" target="#b0">[1]</ref> or justification size <ref type="bibr" target="#b3">[4]</ref>. As we already noted, all the complexity measures from Table <ref type="table" target="#tab_0">1</ref>, including vertex size and justification size, are Ψ-measures.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Theorem 13 ([1, 4]).</head><p>For m being vertex size or justification size, there is a polynomial deriver D such that OP poly Ψ (D, m) is NP-complete.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Local Measures</head><p>In this section, we consider a property that allows us to reduce the upper bound from NP to P. This also generalizes a result from <ref type="bibr" target="#b0">[1]</ref>. Recall that intuitively, P[v → P′] denotes the result of replacing in P the proof above v by P′.</p><p>Definition 14. Let P = (V, E, ℓ) be a proof and v ∈ V . We say that a Ψ-measure m is local for v in P if for every proof</p><formula xml:id="formula_7">P′ = (V′, E′, ℓ′) with sink v such that V ∩ V′ = {v} and ℓ′(v) = ℓ(v),</formula><p>Intuitively, if the measure is local, then we can replace subproofs by easier or harder alternative proofs, and the weight of the full proof will change accordingly. Measures that can be defined in a monotone, recursive way on the graph structure usually have this property: the measures hardest axiom and hardest inference step from Table <ref type="table" target="#tab_0">1</ref>; tree size, defined for an acyclic</p><formula xml:id="formula_8">H = (V, E, ℓ) by m(H) := Σ_v∈sink(H) m(H, v),</formula><p>where m(H, v) := 1 + Σ_(S,v)∈E Σ_w∈S m(H, w); and weighted tree size, defined similarly by m(H)</p><formula xml:id="formula_9">:= Σ_v∈sink(H) m(H, v) with m(H, v) := |ℓ(v)| + Σ_(S,v)∈E Σ_w∈S m(H, w).</formula><p>As the names of the last two measures indicate, local measures cannot distinguish between hypergraphs and their unravelings into trees, since the unraveling of a proof P can be seen as recursively computing P[v → P v ] for all vertices v in P. The key fact to show is that whenever v is chosen in Line 9, then P(v) is a minimal proof for ℓ(v) in D(T , η). This step uses an inductive argument that is based on the locality property of the measure m. Finally, by Lemma 10 we do not need to consider proofs that are not contained in D(T , η).</p><p>Since we can actually compute an optimal proof in polynomial time, the following complexity result follows.</p><p>Theorem 16. 
For any local Ψ-measure m, OP poly Ψ (D, m) is in P.</p><p>A matching lower bound was shown in <ref type="bibr" target="#b0">[1]</ref> for the problem of finding an admissible tree proof of minimal vertex size. This is equivalent to finding an arbitrary proof of minimal tree size, since a tree proof (with the same tree size) can always be obtained by unraveling, and for tree proofs the measures tree size and vertex size coincide. Since tree size is a local Ψ-measure, we obtain the following.</p></div>
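Tree size can be computed without materializing the exponentially large unraveling: memoizing the recursion m(H, v) = 1 + Σ_(S,v)∈E Σ_w∈S m(H, w) over the DAG yields the size of the tree unraveling in polynomial time. A minimal illustrative sketch (our names, not the paper's):

```python
from functools import lru_cache

def tree_size(edges, sink):
    """Tree size of the unraveling at the sink, computed on the DAG:
    m(v) = 1 + sum over incoming edges (S, v) of sum_{w in S} m(w)."""
    incoming = {}
    for s, d in edges:
        incoming.setdefault(d, []).append(s)

    @lru_cache(maxsize=None)  # each vertex is evaluated only once
    def m(v):
        return 1 + sum(m(w) for s in incoming.get(v, ()) for w in s)

    return m(sink)

# Diamond-shaped proof: the axiom a is reused by both b and c.
E = {(frozenset({"a"}), "b"), (frozenset({"a"}), "c"),
     (frozenset({"b", "c"}), "d")}
```

Here the proof has 4 vertices but tree size 5, since unraveling duplicates the shared vertex a; this is exactly the gap between vertex size and tree size discussed above.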
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Theorem 17 ([1]</head><p>). For m being tree size, there is a polynomial deriver D such that OP poly Ψ (D, m) is P-complete.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Chunks</head><p>In this section, we discuss Item 11 from the introduction, which is part of ongoing work. To define such a measure, we need to extend the definition of hypergraphs by a function ρ that assigns to each hyperedge (S, d) a label, the idea being that ρ(S, d) is the name of the inference rule that was instantiated to obtain the inference step with premises {ℓ(s) | s ∈ S} and conclusion ℓ(d).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Based on this, we define the notion of chunks.</p><p>Definition 18. Given a proof P, a chunk of P is a maximal connected subgraph H′ ⊆ P such that ρ(e) = ρ(e′) for all hyperedges e, e′ in H′.</p><p>Every proof has a unique decomposition into chunks that do not share hyperedges.</p><p>A proof can thus be measured by the chunk count, which is the number of its chunks. This measure follows the idea that proofs with fewer alternations between inference rules are easier to understand. Consider for example the proofs P 1 and P 2 depicted in Figure <ref type="figure">4</ref>. Both are proofs for the same entailment A ∃r.F , and P 1 has three chunks, while P 2 has only one (all inference steps follow the same rule schema), hence P 2 is easier to understand. Of course, one can also consider more complicated measures that take the size of the chunks into account.</p><p>Clearly, the measure satisfies [P]. However, we need to extend the notion of homomorphism to take into account the edge labeling function ρ. Nevertheless, we conjecture that we can extend the NP upper bound in Section 3 to cover this measure by carefully adapting Condition [SI]. Unfortunately, the chunk count is not local, because it is not invariant under unraveling (which may copy a single chunk into multiple ones).</p><p>Fig. <ref type="figure">4</ref>. Two proofs P1 (on the left) and P2 of the same entailment A ∃r.F .</p></div>
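The chunk count of Definition 18 can be computed by a union-find over hyperedges: two edges belong to the same chunk when they carry the same rule label ρ and share a vertex. A rough sketch under our own representation (counting only chunks that contain at least one hyperedge; names are hypothetical):

```python
def chunk_count(edges, rho):
    """Number of chunks: connected components of hyperedges, where two
    edges are adjacent iff they share a vertex and have the same ρ-label."""
    edges = list(edges)
    parent = list(range(len(edges)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def touched(e):
        s, d = e
        return s | {d}

    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if rho[edges[i]] == rho[edges[j]] and \
                    touched(edges[i]) & touched(edges[j]):
                parent[find(i)] = find(j)

    return len({find(i) for i in range(len(edges))})

# A linear proof a -> b -> c -> d: the first two steps use rule R1,
# the last one rule R2, giving two chunks.
e1 = (frozenset({"a"}), "b")
e2 = (frozenset({"b"}), "c")
e3 = (frozenset({"c"}), "d")
rho = {e1: "R1", e2: "R1", e3: "R2"}
rho_same = {e1: "R1", e2: "R1", e3: "R1"}
```

With rho the proof decomposes into two chunks, while with rho_same (all steps by the same rule, as in P 2 of Figure 4) there is a single chunk.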
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion</head><p>We have investigated the complexity of finding optimal proofs for description logic entailments w.r.t. a variety of measures. Obviously, size is not enough to evaluate the difficulties users face when trying to understand proofs, and existing measures for justification complexity (e.g. <ref type="bibr" target="#b12">[13]</ref>) cannot capture the structural aspects of this problem. We identified two classes of structural measures, Ψ-measures and local Ψ-measures, and showed that finding optimal proofs w.r.t. such measures is in NP and in P, respectively, for polynomial derivers. In the future, we will investigate more measures, try to find more tractable subclasses, and identify properties of measures that make them P-hard or NP-hard. Most importantly, we will evaluate which measures are most relevant for practical purposes, to actually help users understand description logic proofs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A Proofs</head><p>Lemma 8. Tree size and all measures in Table <ref type="table" target="#tab_0">1</ref> except for depth and worst path are Ψ-measures. Indeed, depth, worst path and treewidth do not satisfy</p><formula xml:id="formula_11">[SI].</formula><p>Proof. It should be easy to see that, given a proof P, all the measures in Table <ref type="table" target="#tab_0">1</ref> can be computed according to the formulas in polynomial time in the size of P.</p><p>Tree size also satisfies the property [P], see <ref type="bibr" target="#b0">[1]</ref>. Since a homomorphism preserves edges and vertex labels, weights corresponding to the hardest, sum and size measures of subproofs of a homomorphic image is no greater than the weight of the proof.</p><p>In order to see why [SI] does not hold for depth and worst path, we use the following example in Figure <ref type="figure">5</ref>. For simplicity we omit vertex labeling; we assume only (v) = (v ). Clearly, the graph on the left has depth 4, while its homomorphic image and a subproof obtained by omitting the cycle on the right have 5. A similar argument can be applied to worst path since depth is its special case (when the weight of an edge is always equal to 1).</p><p>The intuition behind why [SI] does not hold for treewidth is as follows. A connected graph G has treewidth 1 if and only if it is a tree. However, its homomorphic image of a tree is not necessary a tree. One can construct a tree proof and a homomorphism s.t. there is a subproof in the homomorphic image which is not a tree and, thus, has a treewidth greater than 1.</p><p>Despite the strong connection between treewidth and pathwidth, i.e., for any graph, its pathwidth is no smaller than its treewidth, we cannot conjecture a similar result for pathwidth. Cycles (if any) created by a homomorphism which also could increase pathwidth are not part of subrpoofs for [SI]. Lemma 10. 
For any Ψ-measure m, if there is an admissible proof P w.r.t. D(T , η) with m(P) ≤ q for some q ∈ Q, then there exists a subproof Q of D(T , η) for T |= η with m(Q) ≤ q.</p><p>Fig. <ref type="figure">5</ref>. A homomorphism that increases depth and worst path.</p><p>Proof. Let P be such a proof with associated homomorphism h : P → D(T , η). If h(P) is acyclic, then by [SI] in Definition 7 we have m(h(P)) ≤ m(P) ≤ q. Since P has a unique sink vη, it must be mapped to a unique sink h(vη) in h(P), and thus h(P) is the desired subproof of D(T , η).</p><p>If h(P) is not acyclic, our goal is to find another admissible proof P* w.r.t. D(T , η) that uses a subset of the vertices of P such that h(P*) ⊆ h(P) is acyclic, and therefore satisfies the requirements of the lemma by [SI] in Definition 7. For this purpose, consider an arbitrary cycle in h(P), which must be due to two vertices v, v′ in P such that h(v) = h(v′) and there is a path between v and v′ (or due to multiple such pairs of vertices). Since P is acyclic, we can assume that there is a path from v to v′, but no path from v′ to v. We now consider the two subproofs Pv and Pv′. As there is a path from v to v′, we have Pv ⊂ Pv′. Since h(v) = h(v′), both vertices are labeled with the same axiom. The idea of the following construction is to remove Pv′ from P and replace it with Pv, which effectively removes all paths from v to v′.</p><p>More formally, we first consider the hypergraph H = P−v′ ∪ Pv and then, in the hyperedges (S, d) in H that still contain v′ ∈ S, we replace v′ by v, effectively merging the two vertices, remove v′ from the set of vertices, and thus obtain a hypergraph P′. If there was no such hyperedge, then v′ was the sink of P, i.e., ℓ(v′) = η, and v will now be the new sink in P′ with ℓ(v) = ℓ(v′) = η. We now show that P′ is also an admissible proof w.r.t. D(T , η). Our construction does not produce new leaves, and hence P′ is still grounded. 
Clearly, all remaining edges are sound, since they were already sound in P. Moreover, P′ is acyclic, since all cycles in P′ can be traced back to paths in P that involve both v and v′; but we have assumed that there are no paths from v′ to v, and we have destroyed all paths from v to v′. As argued above, we have also kept the property that there is exactly one sink, which is labeled with η. Observe that h is also a homomorphism from P′ to D(T , η) (when restricted to the vertex set of P′), because h(v) = h(v′), and moreover h(P′) ⊆ h(P).</p><p>The resulting proof P′ has fewer vertices than P, since we have removed at least v′. This means that, after finitely many such operations, we can obtain from P the desired proof P* such that h(P*) ⊆ h(P) is acyclic. Since h(P*) also has a unique sink labeled by η, it is a subproof of D(T , η), and m(h(P*)) ≤ m(P) ≤ q by [SI] in Definition 7.</p><p>Corollary 11. If T |= η, then, for every deriver D and Ψ-measure m, there is an optimal proof for T |= η w.r.t. D and m.</p><p>Proof. By Definition 6, the derivation structure D(T , η) contains at least one proof for T |= η. Since D(T , η) is finite, there are only finitely many proofs for T |= η contained in D(T , η). The finite set of the weights of these proofs always has a minimum. Finally, if there were an admissible proof with weight less than this minimum, it would contradict Lemma 10.</p></div>
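The vertex-merging step in the proof of Lemma 10 can be illustrated by a small sketch (our own simplified toy, operating directly on hyperedges; unlike the full construction, it only redirects the edges below v′ and drops the edge deriving v′):

```python
def redirect(edges, v, v2):
    """Replace v2 by v in premise sets and drop edges deriving v2 (toy of Lemma 10)."""
    new = []
    for srcs, tgt in edges:
        if tgt == v2:
            continue                      # the final step of the subproof above v2 is removed
        new.append((tuple(v if s == v2 else s for s in srcs), tgt))
    return new

def is_acyclic(edges):
    # Kahn's algorithm on the vertex graph induced by the hyperedges
    verts = {x for s, t in edges for x in (*s, t)}
    succ = {x: set() for x in verts}
    for s, t in edges:
        for x in s:
            succ[x].add(t)
    indeg = {x: 0 for x in verts}
    for x in verts:
        for y in succ[x]:
            indeg[y] += 1
    queue = [x for x in verts if indeg[x] == 0]
    seen = 0
    while queue:
        x = queue.pop()
        seen += 1
        for y in succ[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                queue.append(y)
    return seen == len(verts)

# v and v2 carry the same axiom, and there is a path v -> m -> v2.
proof = [(("a",), "v"), (("v",), "m"), (("m",), "v2"), (("v2",), "goal")]
merged = redirect(proof, "v", "v2")
assert merged == [(("a",), "v"), (("v",), "m"), (("v",), "goal")]
assert is_acyclic(merged)   # no cycle is introduced by the merge
```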
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lemma 15.</head><p>For any local Ψ-measure m and polynomial deriver D, Algorithm 1 computes an optimal proof in time polynomial in the size of T and η.</p><p>Proof. We can show the following facts about this algorithm.</p><p>(I) Whenever P(v) is defined, then it is a proof for (v) contained in D(T , η).</p><p>We prove this by induction on the order in which the hypergraphs P(v) are constructed by Algorithm Item (II) implies that each vertex v ∈ V can be removed from Q at most once: in order for v to be added again to Q in Line 17, there would need to exist a proof other than P(v) with the same sink v but a smaller weight, but according to (II), after choosing v in Line 9, the algorithm does not construct any proofs with a weight smaller than m(P(v)) (for any sink). Therefore, in Line 14, during the complete run of the algorithm, each edge (S, d) ∈ E will be used at most once. Moreover, all primitive operations in the algorithm can be done in polynomial time, such as checking acyclicity of hypergraphs in Line 15 or finding the minimal value m(P(v)) in Line 9. It follows that Algorithm 1 terminates in time polynomial in the size of D(T , η), which is polynomial in the size of T and η. (IV) Every vertex v ∈ V that is the sink of a proof P contained in D(T , η) is added to Q at some point.</p><p>We prove this by induction on the structure of P. If P contains only v, then either (v) ∈ T , and hence v is added to Q in Line 4, or otherwise there is an edge (∅, v) in P (and hence in E), in which case v is added to Q in Line 6.</p><p>If P has more than one vertex, then it must contain at least one edge e = (S, v) ∈ E, where each s ∈ S is the sink of a subproof of P in D(T , η). By induction we know that each s ∈ S is added to Q at some point during the algorithm. By Item (III), they must also be removed from Q at some point afterwards, and hence eventually k(e) reaches |S| in Line 13. 
If P(v) was already defined at this point, then v had already been added to Q earlier. Otherwise, v is now added to Q in Line 17.</p><p>(V) When the algorithm terminates and P(v) is defined, then m(P(v)) is minimal among all proofs for ℓ(v) contained in D(T , η).</p><p>By (I), P(v) is a proof of this form. Assume to the contrary that there is a proof P for ℓ(v) contained in D(T , η) such that m(P) &lt; m(P(v)).</p><p>Then P and P(v) must both have the sink v, because we assume that D(T , η) contains no two vertices with the same label. Assume moreover that i) P is an optimal proof for ℓ(v), that is, m(P) ≤ m(P′) for every other proof P′ for ℓ(v) in D(T , η) (cf. Corollary 11), and ii) among all other vertices v′ ∈ V for which there exists a proof P′ for ℓ(v′) in D(T , η) such that m(P′) &lt; m(P(v′)), we also have m(P) ≤ m(P′), and whenever m(P) = m(P′), then |P| ≤ |P′|. Since P is optimal, we know that it has a unique last inference step (S, v). We show that, for every vertex w ∈ S, an optimal proof was assigned to P(w) before v was chosen in Line 9. We first show that, for every w ∈ S, the subproof Pw must be similarly minimal. If this were not the case, then there would be a proof P′w for ℓ(w) in D(T , η) such that m(P′w) &lt; m(Pw). But then, by locality, we have m(P[w → P″w]) &lt; m(P), where P″w is constructed from P′w by renaming all vertices except w. Since P[w → P″w] is an admissible proof for ℓ(v) w.r.t. D(T , η), by Lemma 10 we know that D(T , η) contains a proof P″ for ℓ(v) with m(P″) ≤ m(P[w → P″w]) &lt; m(P), contradicting the optimality of P. Hence, Pw must be optimal among all proofs for ℓ(w) in D(T , η). Moreover, by [SI], we have m(Pw) ≤ m(P) and |Pw| &lt; |P|. By Assumption ii) and (IV), this means that P(w) is also optimal w.r.t. m among all proofs for ℓ(w) in D(T , η). Since both Pw and P(w) are optimal in this sense, we obtain that m(P(w)) = m(Pw) ≤ m(P) &lt; m(P(v)), which by (II) means that w must have been chosen (in Line 9) before v. 
To summarize, for every vertex w ∈ S, we know that an optimal proof for ℓ(w) with weight m(Pw) has already been assigned to P(w) before v is chosen in Line 9, and moreover each w ∈ S was chosen before v. But then, for one of these vertices w (the last one to be processed), a proof P′ is constructed from the subproofs P(w) and the edge (S, v) in Line 14. If we consider the variant P″ of P′ in which each of the subproofs P(w) is renamed such that it shares only the vertex w with the rest of the proof, then by locality we have m(P″) = m(P′). Moreover, since m(P(w)) = m(Pw) holds for all w ∈ S, locality also yields that m(P″) = m(P). Since P′ was constructed as a candidate for P(v) by the algorithm, we know that m(P(v)) ≤ m(P′) = m(P), which contradicts our assumption that m(P) &lt; m(P(v)). We obtain that P(v) must be optimal.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1. Fig. 2. Fig. 3.</head><label>123</label><figDesc>Fig. 1. A proof in the classical representation</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Corollary 11 .</head><label>11</label><figDesc>If T |= η, then, for every deriver D and Ψ-measure m, there is an optimal proof for T |= η w.r.t. D and m.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>) and m(P v ) m(P v ), we have m(P[v → P v ]) m(P), for any choice of relation ∈ {&lt;, ≤, =, ≥, &gt;}.A Ψ-measure m is local if it is local for every vertex in every proof.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>except for depth and worst path are Ψ-measures. Indeed, depth, worst path and treewidth do not satisfy [</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>SI].</head><label></label><figDesc></figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1 .</head><label>1</label><figDesc>Formal definitions of some measures m for proofs P = (V, E, ).</figDesc><table><row><cell>Name</cell><cell>m(P)</cell></row><row><cell>Hardest axiom</cell><cell>maxv∈V | (v)|</cell></row><row><cell>Hardest inference step</cell><cell>max (S,d)∈E |(S, d)|</cell></row><row><cell>Sum of axiom complexities</cell><cell>v∈V | (v)|</cell></row><row><cell>Sum of edge complexities</cell><cell>(S,d)∈E |(S, d)|</cell></row><row><cell>Vertex size</cell><cell>|V |</cell></row><row><cell>Justification size</cell><cell>|{v ∈ V | (v) ∈ T }|</cell></row><row><cell>Depth</cell><cell>maxP ∈P |P |</cell></row><row><cell>Worst Path</cell><cell>maxP ∈P (S,d)∈P |(S, d)|</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head></head><label></label><figDesc>1. The ones in Line 4 consist of a single leaf v, which is labeled by a TBox axiom, and hence are sound, grounded, acyclic, and have the single sink v. Similarly, P(v) in Line 6 is always a proof since it consists of a single edge from D(T , η), has no leafs, and has v as the only sink. Consider now the hypergraph P constructed in Line 14 as a possible candidate for P(v) (where v = d). At this point, all P(s), s ∈ S, are already defined since the counter k(e) can only reach |S| if each s ∈ S has already been chosen in Line 9, and thus P(s) must have been defined. Hence, by induction, each P(s) is a proof for (s) contained in D(T , η), and, because we assume that D(T , η) contains no two vertices with the same label, must have s as sink. This shows that the hypergraph P constructed in Line 14 is sound, grounded, and has a single sink, namely v. Finally, P(v) is only updated to P in Line 17 if P is acyclic and therefore it is a proof. (II) If vertex v is chosen before vertex w in Line 9, then m(P(v)) ≤ m(P(w)).</figDesc><table /><note>We show that after choosing v in Line 9 the algorithm cannot produce a new proof P in Line 14 with m(P) &lt; m(P(v)), and thus the smallest weight min{m(P(w)) | w ∈ Q} can never decrease. Consider the proof P from Line 15. Since v ∈ S, we have P(v) ⊂ P, and therefore m(P(v)) ≤ m(P) by [SI]. (III) Algorithm 1 terminates in polynomial time.</note></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgements This work was supported by the DFG in grant 389792660 as part of TRR 248 (https://perspicuous-computing.science), and QuantLA, GRK 1763 (https://lat.inf.tu-dresden.de/quantla). We also want to thank the anonymous reviewers for their many helpful comments.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 1: A Dijkstra-like algorithm</head><p>Input: A derivation structure D(T , η) = (V, E, ), a local Ψ-measure m Output: An optimal proof of T |= η w.r.t. D(T , η) and m</p><p>// P(v) is optimal for (v) the reason why they are especially well-suited for cases where proofs are presented to users in the form of trees. This is for example the case for the Elk-proof plugin for Protégé <ref type="bibr" target="#b14">[15]</ref>.</p><p>Algorithm 1 describes a Dijkstra-like approach that is inspired by the algorithm in <ref type="bibr" target="#b9">[10]</ref> for finding minimal hyperpaths w.r.t. so-called additive weighting functions, which represent a subclass of local Ψ-measures. It progressively defines proofs P(v) for (v) that are contained in D(T , η). If it reaches a new vertex v in this process, this vertex is added to the set Q. In each step, a vertex with minimal weight m(P(v)) is chosen and removed from Q. Whenever all source vertices of a hyperedge (S, d) have been processed in this way, a new proof for (d) is constructed by joining the proofs for the source vertices using the new hyperedge, and this is compared to the best previously known proof for (d). All proofs constructed in this way are tree proofs, but this restriction does not matter since m is local. For Line 18, recall that we assumed D(T , η) to contain no two vertices with the same label, and hence it contains a unique vertex v η with label η. Lemma 15. For any local Ψ-measure m and polynomial deriver D, Algorithm 1 computes an optimal proof in time polynomial in the size of T and η.</p><p>Proof (Sketch). The main argument is that the algorithm is monotone in the sense that the smallest weight min{m(P(w)) | w ∈ Q} can never decrease. Hence, easier proofs (according to m) will be constructed before harder ones. This shows that each vertex v will be removed at most once from Q in Line 17, and hence the algorithm terminates in polynomial time. 
Moreover, one can use the above</p></div>			</div>
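The Dijkstra-like scheme of Algorithm 1 can be sketched for one concrete local measure, tree size, which is additive over subproofs. The encoding below is our own simplification: it tracks only the weights m(P(v)) and the premise counters k(e), omitting the explicit proof reconstruction and the acyclicity re-check of the full algorithm:

```python
# Hedged sketch of a Dijkstra-like search over a derivation structure,
# instantiated for tree size (additive, hence local).  Leaves in `tbox`
# start with weight 1; hyperedges are (premises, conclusion) pairs.
import heapq

def best_weight(edges, tbox, goal):
    weight = {v: 1 for v in tbox}                  # m(P(v)) for single-leaf proofs
    counter = {i: 0 for i in range(len(edges))}    # k(e): premises processed so far
    uses = {}                                      # vertex -> incident hyperedges
    for i, (srcs, _) in enumerate(edges):
        for s in srcs:
            uses.setdefault(s, []).append(i)
    queue = [(w, v) for v, w in weight.items()]
    heapq.heapify(queue)
    done = set()
    while queue:
        w, v = heapq.heappop(queue)                # vertex with minimal m(P(v)) first
        if v in done:
            continue
        done.add(v)
        for i in uses.get(v, []):
            srcs, tgt = edges[i]
            counter[i] += 1
            if counter[i] == len(srcs):            # all premises have proofs
                cand = 1 + sum(weight[s] for s in srcs)
                if cand < weight.get(tgt, float("inf")):
                    weight[tgt] = cand             # better proof found for tgt
                    heapq.heappush(queue, (cand, tgt))
    return weight.get(goal)

edges = [(("A⊑B", "B⊑C"), "A⊑C"), (("A⊑C", "C⊑D"), "A⊑D")]
print(best_weight(edges, {"A⊑B", "B⊑C", "C⊑D"}, "A⊑D"))  # 5
```

The returned weight 5 is the tree size of the unique proof here: three leaves plus two derived conclusions.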
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Finding small proofs for description logic entailments: Theory and practice</title>
		<author>
			<persName><forename type="first">Christian</forename><surname>Alrabbaa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Franz</forename><surname>Baader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Borgwardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><surname>Koopmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alisa</forename><surname>Kovtunova</surname></persName>
		</author>
		<idno type="DOI">10.29007/nhpp</idno>
	</analytic>
	<monogr>
		<title level="m">LPAR-23: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning</title>
		<title level="s">EPiC Series in Computing</title>
		<editor>
			<persName><forename type="first">Elvira</forename><surname>Albert</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Laura</forename><surname>Kovacs</surname></persName>
		</editor>
		<imprint>
			<publisher>EasyChair</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">73</biblScope>
			<biblScope unit="page" from="32" to="67" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Complexity of finding embeddings in a k-tree</title>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Arnborg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Derek</forename><forename type="middle">G</forename><surname>Corneil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrzej</forename><surname>Proskurowski</surname></persName>
		</author>
		<idno type="DOI">10.1137/0608024</idno>
	</analytic>
	<monogr>
		<title level="j">SIAM J. Algebraic Discrete Methods</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="277" to="284" />
			<date type="published" when="1987-04">April 1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">An Introduction to Description Logic</title>
		<author>
			<persName><forename type="first">Franz</forename><surname>Baader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ian</forename><surname>Horrocks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Carsten</forename><surname>Lutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Uli</forename><surname>Sattler</surname></persName>
		</author>
		<idno type="DOI">10.1017/9781139025355</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>Cambridge University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Pinpointing in the description logic EL +</title>
		<author>
			<persName><forename type="first">Franz</forename><surname>Baader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rafael</forename><surname>Peñaloza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Boontawee</forename><surname>Suntisrivaraporn</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-540-74565-5_7</idno>
	</analytic>
	<monogr>
		<title level="m">Advances in Artificial Intelligence, 30th Annual German Conference on AI, KI 2007</title>
				<meeting><address><addrLine>Osnabrück, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007-09-10">2007. September 10-13, 2007. 2007</date>
			<biblScope unit="page" from="52" to="67" />
		</imprint>
	</monogr>
	<note>Proceedings</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Debugging SNOMED CT using axiom pinpointing in the description logic EL +</title>
		<author>
			<persName><forename type="first">Franz</forename><surname>Baader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Boontawee</forename><surname>Suntisrivaraporn</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-410/Paper01.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proc. of the 3rd Conference on Knowledge Representation in Medicine (KR-MED&apos;08): Representing and Sharing Knowledge Using SNOMED</title>
				<meeting>of the 3rd Conference on Knowledge Representation in Medicine (KR-MED&apos;08): Representing and Sharing Knowledge Using SNOMED</meeting>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">410</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Explaining ALC subsumption</title>
		<author>
			<persName><forename type="first">Alexander</forename><surname>Borgida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Enrico</forename><surname>Franconi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ian</forename><surname>Horrocks</surname></persName>
		</author>
		<ptr target="http://www.frontiersinai.com/ecai/ecai2000/pdf/p0209.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th European Conference on Artificial Intelligence</title>
				<meeting>the 14th European Conference on Artificial Intelligence<address><addrLine>Berlin, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2000">August 20-25, 2000. 2000</date>
			<biblScope unit="page" from="209" to="213" />
		</imprint>
	</monogr>
	<note>ECAI 2000</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">In the eye of the beholder: Which proofs are best?</title>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Borgwardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anke</forename><surname>Hirsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alisa</forename><surname>Kovtunova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Frederik</forename><surname>Wiehr</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 33th Int. Workshop on Description Logics (DL&apos;20</title>
				<meeting>of the 33rd Int. Workshop on Description Logics (DL&apos;20)</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Living with CLASSIC: When and how to use a KL-ONE-like language</title>
		<author>
			<persName><forename type="first">Ronald</forename><forename type="middle">J</forename><surname>Brachman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Deborah</forename><forename type="middle">L</forename><surname>Mcguinness</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><forename type="middle">F</forename><surname>Patel-Schneider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lori</forename><forename type="middle">Alperin</forename><surname>Resnick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><surname>Borgida</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Principles of Semantic Networks</title>
				<editor>
			<persName><forename type="first">John</forename><forename type="middle">F</forename><surname>Sowa</surname></persName>
		</editor>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="1991">1991</date>
			<biblScope unit="page" from="401" to="456" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">15 years of consequence-based reasoning</title>
		<author>
			<persName><forename type="first">David</forename><surname>Tena Cucala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bernardo</forename><forename type="middle">Cuenca</forename><surname>Grau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ian</forename><surname>Horrocks</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-22102-7_27</idno>
	</analytic>
	<monogr>
		<title level="m">Description Logic, Theory Combination, and All That -Essays Dedicated to Franz Baader on the Occasion of His 60th Birthday</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="573" to="587" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Directed hypergraphs and applications</title>
		<author>
			<persName><forename type="first">Giorgio</forename><surname>Gallo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giustino</forename><surname>Longo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefano</forename><surname>Pallottino</surname></persName>
		</author>
		<idno type="DOI">10.1016/0166-218X(93)90045-P</idno>
	</analytic>
	<monogr>
		<title level="j">Discrete Applied Mathematics</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="177" to="201" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Width parameters beyond tree-width and their applications</title>
		<author>
			<persName><forename type="first">Petr</forename><surname>Hlinený</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sang-Il</forename><surname>Oum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Detlef</forename><surname>Seese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Georg</forename><surname>Gottlob</surname></persName>
		</author>
		<idno type="DOI">10.1093/comjnl/bxm052</idno>
		<ptr target="https://doi.org/10.1093/comjnl/bxm052,doi:10.1093/comjnl/bxm052" />
	</analytic>
	<monogr>
		<title level="j">Comput. J</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="326" to="362" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Justification Based Explanation in Ontologies</title>
		<author>
			<persName><forename type="first">Matthew</forename><surname>Horridge</surname></persName>
		</author>
		<ptr target="https://www.research.manchester.ac.uk/portal/files/54511395/FULL_TEXT.PDF" />
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
		<respStmt>
			<orgName>University of Manchester, UK</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Toward cognitive support for OWL justifications</title>
		<author>
			<persName><forename type="first">Matthew</forename><surname>Horridge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Samantha</forename><surname>Bail</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bijan</forename><surname>Parsia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Uli</forename><surname>Sattler</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.knosys.2013.08.021</idno>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">53</biblScope>
			<biblScope unit="page" from="66" to="79" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Justification oriented proofs in OWL</title>
		<author>
			<persName><forename type="first">Matthew</forename><surname>Horridge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bijan</forename><surname>Parsia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ulrike</forename><surname>Sattler</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-17746-0_23</idno>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web -ISWC 2010 -9th International Semantic Web Conference, ISWC 2010</title>
				<meeting><address><addrLine>Shanghai, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2010">November 7-11, 2010. 2010</date>
			<biblScope unit="page" from="354" to="369" />
		</imprint>
	</monogr>
	<note>, Revised Selected Papers, Part I</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Towards reusable explanation services in protege</title>
		<author>
			<persName><forename type="first">Yevgeny</forename><surname>Kazakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pavel</forename><surname>Klinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><surname>Stupnikov</surname></persName>
		</author>
		<ptr target="http://www.ceur-ws.org/Vol-1879/paper31.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proc. of the 30th Int. Workshop on Description Logics (DL&apos;17)</title>
				<editor>
			<persName><forename type="first">Alessandro</forename><surname>Artale</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Birte</forename><surname>Glimm</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Roman</forename><surname>Kontchakov</surname></persName>
		</editor>
		<meeting>of the 30th Int. Workshop on Description Logics (DL&apos;17)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">1879</biblScope>
		</imprint>
	</monogr>
	<note>CEUR Workshop Proceedings</note>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">The incredible ELKfrom polynomial procedures to efficient reasoning with EL ontologies</title>
		<author>
			<persName><forename type="first">Yevgeny</forename><surname>Kazakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Markus</forename><surname>Krötzsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Frantisek</forename><surname>Simancik</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10817-013-9296-3</idno>
	</analytic>
	<monogr>
		<title level="j">J. Autom. Reasoning</title>
		<imprint>
			<biblScope unit="volume">53</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="61" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Foundations for uniform interpolation and forgetting in expressive description logics</title>
		<author>
			<persName><forename type="first">Carsten</forename><surname>Lutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Frank</forename><surname>Wolter</surname></persName>
		</author>
		<idno type="DOI">10.5591/978-1-57735-516-8/IJCAI11-170</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IJCAI 2011</title>
				<meeting>IJCAI 2011</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="989" to="995" />
		</imprint>
	</monogr>
	<note>IJCAI/AAAI</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Explaining Reasoning in Description Logics</title>
		<author>
			<persName><forename type="first">Deborah</forename><forename type="middle">L</forename><surname>Mcguinness</surname></persName>
		</author>
		<idno type="DOI">10.7282/t3-q0c6-5305</idno>
		<imprint>
			<date type="published" when="1996">1996</date>
			<pubPlace>NJ, USA</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Rutgers University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Measuring the understandability of deduction rules for OWL</title>
		<author>
			<persName><forename type="first">Tu Anh Thi</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Richard</forename><surname>Power</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paul</forename><surname>Piwek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sandra</forename><surname>Williams</surname></persName>
		</author>
		<ptr target="http://www.ida.liu.se/~patla/conferences/WoDOOM12/papers/paper4.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the First International Workshop on Debugging Ontologies and Ontology Mappings, WoDOOM 2012</title>
				<meeting>the First International Workshop on Debugging Ontologies and Ontology Mappings, WoDOOM 2012<address><addrLine>Galway, Ireland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012-10-08">October 8, 2012</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Predicting the understandability of OWL inferences</title>
		<author>
			<persName><forename type="first">Tu Anh Thi</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Richard</forename><surname>Power</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paul</forename><surname>Piwek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sandra</forename><surname>Williams</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-38288-8_8</idno>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web: Semantics and Big Data, 10th International Conference, ESWC 2013</title>
				<meeting><address><addrLine>Montpellier, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">May 26-30, 2013</date>
			<biblScope unit="page" from="109" to="123" />
		</imprint>
	</monogr>
	<note>Proceedings</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Finding the K shortest hyperpaths</title>
		<author>
			<persName><forename type="first">Lars</forename><surname>Relund Nielsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kim</forename><surname>Allan Andersen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daniele</forename><surname>Pretolani</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cor.2003.11.014</idno>
	</analytic>
	<monogr>
		<title level="j">Computers &amp; Operations Research</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="1477" to="1497" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Towards explicative inference for OWL</title>
		<author>
			<persName><forename type="first">Marvin</forename><forename type="middle">R G</forename><surname>Schiller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Birte</forename><surname>Glimm</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-1014/paper_36.pdf" />
	</analytic>
	<monogr>
		<title level="m">Informal Proceedings of the 26th International Workshop on Description Logics</title>
				<meeting><address><addrLine>Ulm, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">July 23-26, 2013</date>
			<biblScope unit="page" from="930" to="941" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Testing the adequacy of automated explanations of EL subsumptions</title>
		<author>
			<persName><forename type="first">Marvin</forename><forename type="middle">R G</forename><surname>Schiller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Florian</forename><surname>Schiller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Birte</forename><surname>Glimm</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-1879/paper43.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 30th International Workshop on Description Logics</title>
				<meeting>the 30th International Workshop on Description Logics<address><addrLine>Montpellier, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">July 18-21, 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Non-standard reasoning services for the debugging of description logic terminologies</title>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Schlobach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ronald</forename><surname>Cornet</surname></persName>
		</author>
		<ptr target="http://ijcai.org/Proceedings/03/Papers/053.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proc. of the 18th Int. Joint Conf. on Artificial Intelligence (IJCAI 2003)</title>
				<editor>
			<persName><forename type="first">Georg</forename><surname>Gottlob</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Toby</forename><surname>Walsh</surname></persName>
		</editor>
		<meeting>of the 18th Int. Joint Conf. on Artificial Intelligence (IJCAI 2003)<address><addrLine>Acapulco, Mexico</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="355" to="362" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
