<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Semantic Queries Explaining Opaque Machine Learning Classifiers</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Jason</forename><surname>Liartis</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">Artificial Intelligence and Learning Systems Laboratory</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Edmund</forename><surname>Dervakos</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">Artificial Intelligence and Learning Systems Laboratory</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Orfeas</forename><surname>Menis -Mastromichalakis</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">Artificial Intelligence and Learning Systems Laboratory</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alexandros</forename><surname>Chortaras</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">Artificial Intelligence and Learning Systems Laboratory</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giorgos</forename><surname>Stamou</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">Artificial Intelligence and Learning Systems Laboratory</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Semantic Queries Explaining Opaque Machine Learning Classifiers</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">23250D000313108063F5802BF01E6F3D</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T17:29+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>XAI</term>
					<term>Explainability</term>
					<term>Knowledge Graphs</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The success of deep learning on solving a multitude of different problems has highlighted the importance of explainability. In many critical applications, such as in medicine, deep learning cannot be ethically, or lawfully utilized due to its intrinsic opacity. On the other hand, description logics and knowledge representation technologies provide a transparent and interepretable framework for describing and classifying data. In this paper we present a methodology for utilizing description logics for explaining black-box classifiers. Specifically, given a dataset, a knowledge which describes it and a black-box classifier, we attempt to mimic the black-box with conjunctive queries over the knowledge. These queries are then presented as global explanations of the black-box classifier.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Over the past decade, Artificial Intelligence and in particular Deep Neural Networks have accomplished impressive achievements in various tasks such as Image Recognition and Natural Language Processing in numerous domains like medicine, finance, arts, and many more. In order to tackle challenging problems, the models become deeper and deeper and their structure complexity is constantly rising, making them inherently opaque. While traditional machine learning models, such as decision trees, are interpretable by design, modern deep neural networks are hard to explain due to their complex architecture and operation. This opacity raises ethical and legal concerns regarding the real-life use of such models and has lead to the emergence of eXplainable Artificial Intelligence (XAI), to make the operation of opaque AI systems more comprehensible to humans <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4]</ref>. XAI is already a major topic in many events of the AI community and is constantly engaging more researchers due to its immediate and important effect on the application of AI systems.</p><p>Knowledge Graphs (KG) and Description Logics (DL) offer an advanced, adaptable and interpretable framework of high expressiveness that can be employed to explain complex neural networks exploiting years of research and improvement on the area. Knowledge graphs <ref type="bibr" target="#b4">[5]</ref>, as a scalable common understandable structured representation of a domain based on the way humans mentally perceive the world, have emerged as a promising complement or extension to machine learning approaches for achieving explainability <ref type="bibr" target="#b3">[4]</ref>.</p><p>In this paper, we introduce a novel explainability framework for representing model-agnostic knowledge graph-based explanations, as conjunctive queries (resulting to explanations like "animals in images with household items are classified as domestic animals"), approaching the task as a Query Reverse Engineering (QRE) problem, deriving the explanation-queries from the respective outputs of the model. We discuss the creation of appropriate datasets for our methods, and we propose algorithms to compute such explanations. Additionally, we investigate evaluation methods and metrics, and finally, we evaluate the methodology and algorithms experimenting with data from the CLEVR-Hans3 <ref type="bibr" target="#b5">[6]</ref> dataset, extracting explanations for deep learning models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>Approaches to explainability vary with regard to data domain (images, text, tabular), form of explanations (rule-based, counterfactual, feature importance etc.), and scope (global, local) <ref type="bibr" target="#b2">[3]</ref>. Closely related to this work are global rule-based explanation methods. These attempt to extract rules based on the predictions of a black-box classifier on a dataset. Many of these methods are based on statistics <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref> or extracting rules from decision trees <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>, while others are based on logics <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>. Extensionally related to this work are also global prototype explanation methods <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref> which present specific representative samples from a dataset as explanations of black-box classifiers.</p><p>A key feature of the algorithms proposed in Section 5 is the computation of the Least Common Subsumer (LCS). This problem has been extensively studied for various decription logic expressivities <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18]</ref>. Furthermore, this work makes use of the strong theoretical and practical results in the area of semantic query answering <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr" target="#b20">21,</ref><ref type="bibr" target="#b21">22]</ref> for evaluating and applying the proposed algorithms in a practical setting.</p><p>Numerous works have addressed the Query Reverse Engineering problem, both for knowledge bases and traditional databases <ref type="bibr" target="#b22">[23,</ref><ref type="bibr" target="#b23">24,</ref><ref type="bibr" target="#b24">25]</ref>, and tools have been developed to offer an easy and user friendly way to perform QRE on knowledge bases <ref type="bibr" target="#b25">[26]</ref>. The key difference between these works and ours is that we are able to find an approximate solution in cases where there is no exact answer, thanks to the heuristic approach of our algorithms (for more details see Section 5).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Background</head><p>Let 𝒱 = ⟨CN, RN, IN⟩ be a vocabulary, with CN, RN, IN mutually disjoint finite sets of concept, role and individual names, respectively. Let also 𝒯 and 𝒜 be a terminology (TBox) and an assertional database (ABox), respectively, over 𝒱 using a Description Logics (DL) dialect ℒ, i.e. a set of axioms and assertions that use elements of 𝒱 and constructors of ℒ. The pair ⟨𝒱, ℒ⟩ is a DL-language, and 𝒦 = ⟨𝒯 , 𝒜⟩ is a knowledge base (KB) over this language. The semantics of KBs are defined in the standard way using interpretations <ref type="bibr" target="#b26">[27]</ref>. Given a domain ∆, an interpretation ℐ = (∆ ℐ , • ℐ ) assigns a set 𝐶 ℐ ⊆ ∆ ℐ to concept 𝐶, a set 𝑟 ℐ ⊆ ∆ ℐ × ∆ ℐ to role 𝑟, and an object 𝑎 ℐ ∈ ∆ to individual 𝑎. ℐ is a model of an ABox 𝒜 iff 𝑎 ℐ ∈ 𝐶 ℐ for all 𝐶(𝑎) ∈ 𝒜, and (𝑎 ℐ , 𝑏 ℐ ) ∈ 𝑟 ℐ for all 𝑟(𝑎, 𝑏) ∈ 𝒜. ℐ is a model of a TBox 𝒯 if it satisfies all axioms in 𝒯 .</p><p>Given a vocabulary 𝒱, a conjunctive query (simply, a query) 𝑞 is an expression { ⟨𝑥 1 , . . . 𝑥 𝑘 ⟩ | ∃𝑦 1 . . . ∃𝑦 𝑙 .(𝑐 1 ∧ . . . ∧ 𝑐 𝑛 ) }, where 𝑘, 𝑙 ≥ 0, 𝑛 ≥ 1, 𝑥 𝑖 , 𝑦 𝑖 are variable names, each 𝑐 𝑖 is an atom 𝐶(𝑢) or 𝑟(𝑢, 𝑣), where 𝐶 ∈ CN, 𝑟 ∈ RN, 𝑢, 𝑣 ∈ IN ∪ {𝑥 𝑖 } 𝑘 𝑖=1 ∪ {𝑦 𝑖 } 𝑙 𝑖=1 and all 𝑥 𝑖 , 𝑦 𝑖 appear in at least one atom. The vector ⟨𝑥 1 , . . . 𝑥 𝑘 ⟩ is the head of 𝑞, its elements are the answer variables, and {𝑐 1 , . . . , 𝑐 𝑛 } is the body of 𝑞. If 𝑞 has no answer variables it is boolean, and if its body is a singleton, it is atomic. In this paper we focus on non-boolean queries having one answer variable and in which all arguments or all 𝑐 𝑖 are variables, which are called instance queries. For simplicity we write instance queries as 𝑞 = {𝑐 1 , ..., 𝑐 𝑛 }, considering always 𝑥 as the answer variable. Given a KB 𝒦, a query 𝑞 and an interpretation ℐ of 𝒦, an individual 𝑎 ∈ IN is an answer to 𝑞 in ℐ if it there is a successful variable substitution, substituting 𝑎 ℐ for 𝑥. This is formally defined as a match, a mapping 𝜋 : VN(𝑞) → ∆ ℐ such that 𝜋(𝑢) ∈ 𝐶 ℐ for all 𝐶(𝑢) ∈ 𝑞, and (𝜋(𝑢), 𝜋(𝑣)) ∈ 𝑟 ℐ for all 𝑟(𝑢, 𝑣) ∈ 𝑞. Furthermore, it is a certain answer iff it is an answer under all interpretations that are models of 𝒦. The set of certain answers to 𝑞 is cert(𝑞, 𝒦). In some DL dialects, determining the certain answers to a query can be reduced to determining the answers of a special model, called the canonical model <ref type="bibr" target="#b27">[28]</ref> [29] 𝒞 𝒯 ,𝒜 of 𝒦 = ⟨𝒯 , 𝒜⟩. In the dialect we use for our experiments, called Horn 𝒮ℋℐ𝒬, this process is even simpler since we can guarantee that the canonical model is finite.</p><p>Consider a vocabulary 𝒱 = ⟨CN, RN, IN⟩, a knowledge base 𝒦 = ⟨𝒯 , 𝒜⟩, where 𝒯 is the TBox and 𝒜 the ABox of 𝒦 using a DL dialect ℒ on 𝒱, and the set 𝒬 of all (conjunctive) queries over 𝒱. Let 𝑄 ⊆ 𝒬 be a set of queries over 𝒦. With a slight abuse of notation, we write cert(𝑄, 𝒦) to denote the set of all answers to the queries of 𝑄, i.e. cert(𝑄, 𝒦) = ∪ 𝑞∈𝑄 cert(𝑞, 𝒦). We can partially order 𝒬 using query subsumption: A query 𝑞 2 subsumes a query 𝑞 1 (we write 𝑞 1 ≤ 𝑆 𝑞 2 ) iff there is a substitution 𝜃 s.t. 𝑞 2 𝜃 ⊆ 𝑞 1 . If 𝑞 1 , 𝑞 2 are mutually subsumed, they are syntactically equivalent (𝑞 1 ≡ 𝑆 𝑞 2 ).</p><p>Let be 𝑞 an instance query and 𝑞 ′ ⊆ 𝑞. If 𝑞 ′ is a minimal subset of 𝑞 s.t. 𝑞 ′ ≤ 𝑆 𝑞, then 𝑞 ′ is a condensation of 𝑞 (cond(𝑞)). If that minimal 𝑞 ′ is the same set as 𝑞, then 𝑞 is condensed <ref type="bibr" target="#b29">[30]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Explaining Opaque ML Classifiers</head><p>We are now ready to define and explain the operation of machine learning (ML) classifiers. In this section we only consider boolean classifiers in our definitions, that classify individuals to two classes, but the generalization from boolean to multi-class classifiers is considered trivial. Definition 1. Given a set 𝒟 ⊆ IN, a classifier F defined on 𝒟 is a mapping from 𝒟 to {0, 1}, i.e. F : 𝒟 → {0, 1}. We call 𝒟 the domain of the classifier and with F(𝒟) we denote the set of all elements of 𝒟 that are classified to 1, i.e.</p><formula xml:id="formula_0">F(𝒟) = {𝑎 ∈ 𝒟 | F(𝑎) = 1}.</formula><p>Note that deep neural networks typically classify objects, taking as input other sources of information (images, etc) not included in their semantic description in 𝒦. On the other hand, knowledge graphs usually contain this information in the form of datatype properties of objects. Throughout the theoretical analysis we omit this information from 𝒦, for simplicity, however for the evaluation we compute F(𝒟) from images of objects (see Section 6 for details). Definition 2. Let 𝒟 ⊆ IN and F be a classifier on 𝒟, 𝒦 a KB over a vocabulary 𝒱 = ⟨CN, RN, IN⟩. A query 𝑞 over 𝒱, is a FO-explanation (first-order explanation), or simply an explanation, of F over 𝒦, iff cert(𝑞, 𝒦) = F(𝒟).</p><p>Explanations of ML classifiers do not always exist. Moreover, they are not always unique or helpful in practice. In the following, we introduce the notion of approximate explanations , in order to relax the demand to exactly match the output of the classifier. A query 𝑞, is an approximate FO-explanation, or simply an approximate explanation of F, over 𝒦, iff cert(𝑞, 𝒦) ∩ F(𝒟) ̸ = ∅.</p><p>Of special interest are approximate explanations that guarantee that all their answers are classified by F as positive. A query 𝑞, is an under-FO-explanation, or simply an under-explanation, of F over 𝒦, iff it is an approximate explanation of F and cert(𝑞, 𝒦) ⊆ F(𝒟).</p><p>Of interest are also approximate explanations for which all individuals classified by F as positive are in their answer set. A query 𝑞, is an over-FO-explanation, or simply an over-explanation, of F over 𝒦, iff it is an approximate explanation of F and F(𝒟) ⊆ cert(𝑞, 𝒦). In most cases multiple approximate explanations exist, so we need to find a way to evaluate them. In the following paragraph we propose a set of evaluation metrics for such explanations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Explanation Evaluation Metrics</head><p>We can define similarity measures that take into account the syntactical form of the queries, their certain answers over any ABox, or their certain answers over a specific knowledge base. For the syntactic similarity of two queries, there is a lot of work in the area (see for example the work on query refinement and graph distance). For answer-based similarity for any ABox, it is not obvious how to define similarity measures, since it is difficult to consider all ABoxes or to capture all the TBox information by checking the syntax. The case of answer-based similarity for a specific knowledge base is more intuitive and can be addressed using either the well-known Jaccard similarity coefficient defined on sets, or other popular metrics from the area of machine learning like precision and recall. So, we define the following three similarity measures on 𝒬 for a query-explanation 𝑞 of a classifier F over a specific knowledge base 𝒦 and a set 𝒟 ⊆ IN as follows:</p><p>Degree A Jaccard distance-based similarity measure, it's obvious that the degree of an (exact) explanation is equal to 1. deg(𝑞, F) = |cert(𝑞,𝒦)∩F(𝒟)| |cert(𝑞,𝒦)∪F(𝒟)| Precision A similarity measure inspired by the precision of ML classifiers, it's obvious that the precision of an under-explanation is equal to 1. pre(𝑞, F) = |cert(𝑞,𝒦)∩F(𝒟)|</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>|cert(𝑞,𝒦)|</head><p>Recall A similarity measure inspired by the recall of ML classifiers, it's obvious that the recall of an over-explanation is equal to 1. rec(𝑞, F) = |cert(𝑞,𝒦)∩F(𝒟)| |F(𝒟)|</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Computing Explanations</head><p>In this section we describe in detail the proposed algorithms, and introduce the concept of an Explanation Dataset, which plays a key role for effectively utilizing the proposed algorithms in a practical setting.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Algorithms</head><p>In order to unify explanations we use the notion of the least common subsumer (LCS). Given two queries 𝑞 1 , 𝑞 2 , their least common subsumer LCS(𝑞 1 , 𝑞 2 ) is defined as the query 𝑞 for which 𝑞 1 , 𝑞 2 ≤ 𝑆 𝑞 and ∀𝑞 ′ : 𝑞 1 , 𝑞 2 ≤ 𝑆 𝑞 ′ ⇔ 𝑞 ≤ 𝑆 𝑞 ′ . The least common subsumer is the most specific generalization of 𝑞 1 , 𝑞 2 .</p><p>In order to manipulate queries and compute explanations, we use the representation of interpretations and queries as labeled graphs. For interpretations, each element 𝑎 ℐ ∈ ∆ ℐ is represented by a node with a label which contains every concept 𝐶 ∈ CN for which 𝑎 ℐ ∈ 𝐶 ℐ . Two nodes 𝑎 ℐ , 𝑏 ℐ are connected with an edge with label 𝑟 if (𝑎 ℐ , 𝑏 ℐ ) ∈ 𝑟 ℐ . This graph can be described by a triple (𝑉, 𝐸, 𝐿) where 𝑉 = ∆ ℐ are the nodes 𝑎 ℐ , 𝐸 ⊆ ∆ ℐ × RN × ∆ ℐ are the edges (𝑎 ℐ , 𝑟, 𝑏 ℐ ), and 𝐿 : 𝑉 → 2 CN is the labeling function 𝐿(𝑎 ℐ ) = {𝐶, 𝐷, . . . } of the nodes. The graph representation for queries is defined in the same way, but with nodes corresponding to variables and labels and edges to the conjuncts containing those variables.</p><p>These representations are useful because a lot of the concepts we have presented so far can be rephrased in terms of homomorphisms between labeled graphs. Given two labeled graphs</p><formula xml:id="formula_1">𝐺 1 = (𝑉 1 , 𝐸 1 , 𝐿 1 ), 𝐺 2 = (𝑉 2 , 𝐸 2 , 𝐿 2 ), a mapping ℎ : 𝑉 1 → 𝑉 2 is called a homomorphism iff it respects the structure of 𝐺 1 , or more formally: (i) ∀𝑣 ∈ 𝑉 1 , 𝐿 1 (𝑣) ⊆ 𝐿 2 (ℎ(𝑣)) and (ii) ∀𝑢, 𝑣 ∈ 𝑉 1 , (𝑢, 𝑟, 𝑣) ∈ 𝐸 1 ⇒ (ℎ(𝑢), 𝑟, ℎ(𝑣)) ∈ 𝐸 2 .</formula><p>If such a mapping exists we say that 𝐺 1 is homomorphic to 𝐺 2 and we write 𝐺 1 → 𝐺 2 .</p><p>A match 𝜋 can now be rephrased as a homomorphism from the query graph of 𝑞 to the interpretation graph of 𝐼, and 𝑎 is an answer to 𝑞 if there is such a homomorphism with 𝜋(𝑥) = 𝑎 ℐ and a certain answer if there is a homomorphism to the canonical model with 𝜋(𝑥) = 𝑎 𝒞 𝒯 ,𝒜 . Subsumption can also be described in terms of homomorphisms since its definition implies that</p><formula xml:id="formula_2">𝐺 𝑞 1 → 𝐺 𝑞 2 ⇔ 𝑞 2 ≤ 𝑆 𝑞 1</formula><p>, where 𝐺 𝑞 is the graph of query 𝑞.</p><p>In order to calculate the LCS we use an extension of the Kronecker product of graphs to labeled graphs. Given two labeled graphs 𝐺 1 = (𝑉 1 , 𝐸 1 , 𝐿 1 ), 𝐺 2 = (𝑉 2 , 𝐸 2 , 𝐿 2 ) the Kronecker product 𝐺 = 𝐺 1 × 𝐺 2 of those graphs is: 𝐺 = (𝑉, 𝐸, 𝐿), 𝑉 = 𝑉 1 × 𝑉 2 (Cartesian product of sets),</p><formula xml:id="formula_3">((𝑢 1 , 𝑟, 𝑣 1 ) ∈ 𝐸 1 , (𝑢 2 , 𝑟, 𝑣 2 ) ∈ 𝐸 2 ) ⇔ ((𝑢 1 , 𝑢 2 ), 𝑟, (𝑣 1 , 𝑣 2 )) ∈ 𝐸, 𝐿((𝑣 1 , 𝑣 2 )) = 𝐿 1 (𝑣 1 ) ∩ 𝐿 2 (𝑣 2 )</formula><p>As with the Kronecker product of unlabeled graphs, it holds that 𝐻 → 𝐺 1 , 𝐺 2 ⇔ 𝐻 → 𝐺 1 × 𝐺 2 , this implies that the graph of LCS(𝑞 1 , 𝑞 2 ) is 𝐺 𝑞 1 × 𝐺 𝑞 2 , with the node (𝑥, 𝑥) becoming the new answer variable and the other nodes of 𝐺 𝑞 1 × 𝐺 𝑞 2 are renamed arbitrarily. Calculating the LCS using the Kronecker product involves 𝑂(𝑛 2 𝑚 2 ) operations, where</p><formula xml:id="formula_4">𝑛 = |𝑉 1 |, 𝑚 = |𝑉 2 |.</formula><p>Even though the Kronecker product between two queries computes the LCS, the query it produces is not minimal with respect to the number of variables. Since these queries are intended to be shown to humans as explanations, the minimization of the number of variables is imperative. However, condensing a query is coNP-complete <ref type="bibr" target="#b29">[30]</ref>. For this reason, we utilize Algorithm 1 which removes redundant conjuncts and variables, however without a guarantee of producing a fully condensed query.</p><p>For each pair of variables present in a query, Algorithm 1 checks if unifying the two variables is equivalent to deleting one of the two, in which case the variable and conjuncts are deleted. Verifying the correctness of Algorithm 1 amounts to verifying the claim of line 6. By unifying variable 𝑣 𝑗 with 𝑣 𝑖 , all conjuncts of the form 𝐶(𝑣 𝑗 ) become 𝐶(𝑣 𝑖 ) and all conjuncts of the forms 𝑟(𝑣 𝑗 , 𝑣 𝑘 ), 𝑟(𝑣 𝑘 , 𝑣 𝑗 ), 𝑟(𝑣 𝑗 , 𝑣 𝑗 ) become respectively 𝑟(𝑣 𝑖 , 𝑣 𝑘 ), 𝑟(𝑣 𝑘 , 𝑣 𝑖 ), 𝑟(𝑣 𝑖 , 𝑣 𝑖 ). Line 6 checks that those conjuncts are already present in the query and therefore that unifying 𝑣 𝑗 with 𝑣 𝑖 is equivalent to deleting 𝑣 𝑗 . Unifying two variables of 𝑞 produces a query that is subsumed by 𝑞, while deleting a variable produces a query that subsumes 𝑞, therefore Algorithm 1 produces syntactically equivalent queries. For the complexity of 1 refer to the appendix. We can also use the graph representation of the canonical model to introduce the notion of the most specific query (MSQ) of an individual 𝑎. That is the query that contains as much information as possible about 𝑎, it is the least query in terms of subsumption that has that 𝑎 as a certain answer and it can be constructed by converting the connected component of the graph of 𝒞 𝒯 ,𝒜 that contains 𝑎 into a query graph with the node of 𝑎 𝒞 𝒯 ,𝒜 becoming the answer variable. Every query that has 𝑎 as a certain answer must be homomorphic to that connected component and therefore to the MSQ as well.</p><p>For generating explanations of black-box classifiers, we propose Algorithm 2, which generates candidate approximate explanations for sets of individuals, by utilizing a heuristic for disjointness which is discussed later in this section. For generating explanations, Algorithm 2 first populates a list with the MSQ of each individual. Then it removes the two least disjoint (according to the heuristic) queries and replaces them with their LCS (computed as the Kronecker product) only if it has fewer variables than a pre-set threshold, after it is minimized with Algorithm 1. The LCS of the two least disjoint queries is also added to the set of candidate explanations to be returned. This process is repeated until there are fewer than two queries left to check for disjointness.</p><p>For heuristically approximating disjointness between two queries we use Algorithm </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Explanation Dataset</head><p>The connecting component between our work and machine learning classifiers which drives this explanation framework is the explanation dataset. It is essentially a semantically enriched dataset compatible with the classifier under investigation. It consists of data that can be fed to the classifier (e.g. images for image classifiers) along with semantic descriptions of this data expressed with description logics. The additional information of this enriched dataset allows us to produce explanations for the classifiers exploiting the algorithms mentioned in Section 5.1.</p><p>There isn't a standard procedure for the semantic enrichment of data, so we need to find a way to create these explanation datasets. Thankfully, enriched datasets already exist within the premises of machine learning like the Visual Genome <ref type="bibr" target="#b30">[31]</ref> or the CLEVR <ref type="bibr" target="#b31">[32]</ref> datasets, which contain images along with metadata for each image, such as annotations or sets of questionsanswers. It is usually easy to express these metadata in a description logic, in most cases mapping them to individuals, concepts, roles and axioms. Such an example is the creation of an explanation dataset from the CLEVR-Hans3 dataset <ref type="bibr" target="#b5">[6]</ref> shown in 6.1.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Experiments and Discussion</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.">Explanation Dataset Creation</head><p>CLEVR-Hans3 <ref type="bibr" target="#b5">[6]</ref> is a confounded image classification dataset, designed to evaluate algorithms that detect and fix biases of classifiers. It consists of CLEVR <ref type="bibr" target="#b31">[32]</ref> images divided into three Algorithm 3: DISJ Input: A pair of queries 𝑞 1 = (𝑉 1 , 𝐸 1 , 𝐿 1 ), 𝑞 2 = (𝑉 2 , 𝐸 2 , 𝐿 2 ) for which to calculate a very rough estimate of how disjoint they are. Output: The estimate of the disjointness. 1 disj ← 0 2 For every variable in 𝑞 1 find how much it differs in terms of labels and number of edges from its closest match in 𝑞 2 :</p><formula xml:id="formula_5">3 foreach 𝑣 1 ∈ 𝑉 1 do 4 min_diff ← +∞ 5 foreach 𝑣 2 ∈ 𝑉 2 do 6 diff ← |𝐿 1 (𝑣 1 ) ∖ 𝐿 2 (𝑣 2 )| 7 for 𝑟 ∈ RN do 8 diff ← diff + max {|{(𝑣 1 , 𝑟, 𝑢 1 ) ∈ 𝐸 1 , 𝑢 1 ∈ 𝑉 1 }| − |{(𝑣 2 , 𝑟, 𝑢 2 ) ∈ 𝐸 2 , 𝑢 2 ∈ 𝑉 2 }|, 0} 9 diff ← diff + max {|{(𝑢 1 , 𝑟, 𝑣 1 ) ∈ 𝐸 1 , 𝑢 1 ∈ 𝑉 1 }| − |{(𝑢 2 , 𝑟, 𝑣 2 ) ∈ 𝐸 2 , 𝑢 2 ∈ 𝑉 2 }|, 0} end min_diff ← min(min_diff, diff) end disj ← disj + min_diff end</formula><p>Repeat the above but with 𝑞 1 and 𝑞 2 reversed. return disj classes, of which two are confounded. The membership of a class is based on combinations of objects' attributes and relations. Thus, within the dataset, consisting of train, validation, and test splits, all train, and validation images of confounded classes will be confounded with a specific attribute. The rules that the classes follow are the following, with the confounding factors in parentheses: (i) Large (Gray) Cube and Large Cylinder, (ii) Small Metal Cube and Small (Metal) Sphere, (iii) Large Blue Sphere and Small Yellow Sphere.</p><p>We created our explanation dataset using the test set of CLEVR-Hans3, consisting of 750 images for each class. Exploiting the description of the images provided in json files, we constructed a vocabulary ⟨CN, RN, IN⟩, with all the images and the objects therein as individuals (IN), the concepts defining the size, color, shape, and material of each object, as well as two indicative concepts Image and Object as concept names (CN), and the role "contains(Image, Object)" indicating the existence of an object in a specific image as the only role name (RN). We then created a knowledge base over this vocabulary, with the ABox containing the semantic description of all images and the respective objects, and the TBox containing certain rather trivial inclusion axioms. The sets CN, RN and the Tbox of our knowledge base and the respective vocabulary are the following: CN = {Image, Object, Cube, Cylinder, Sphere, Metal, Rubber, Blue,Brown, Cyan, Gray, Green, Purple, Red, Yellow, Large, Small}, RN = {contains(Image, Object)}, 𝒯 = {x ⊑ Object} (where 𝑥 ̸ ∈ {Image, Object}). </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.">Setting</head><p>For CLEVR-Hans3 we used the same classifier and training procedure as the one used by the creators of the dataset in <ref type="bibr" target="#b32">[33]</ref>. The classifier is a deep CNN, specifically ResNet34 <ref type="bibr" target="#b33">[34]</ref>. The performance of the classifier is shown in Table <ref type="table" target="#tab_2">1</ref>. After training the classifier we acquired its predictions on the test set and generated explanations for each class with Algorithm 2. We also loaded the explanation dataset in GraphDB<ref type="foot" target="#foot_0">1</ref> for getting certain answers of queries, and evaluating explanations. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3.">Results</head><p>The explanations generated for the classifier on the test set of CLEVR-Hans3 are shown in Table <ref type="table" target="#tab_3">2</ref>, where we show the query, the value of each metric and the numbers of positive and negative individuals. The term positive individuals refers to the certain answers of the query that are classified to the respective class, while the term negative individuals refers to the rest of the certain answers. In our representation of the queries in Table <ref type="table" target="#tab_3">2</ref> we have omitted the answer variable x, along with all conjuncts of the form x contains y and conjuncts of the form Object(y), for brevity. The algorithm found an under-explanation (precision=1) and an over-explanation (recall=1) for each class, with the best explanation degree achieved for class 3, which lacks a confounding factor. Over-explanations and under-explanations are of particular interest since they can be translated into global rules which the classifier follows on the particular dataset. For instance the under-explanation for class 1 is translated into the rule "If the image contains a Large Gray Cube, a Large Cylinder and a Large Metal Object then it is classified to class 1.", while the over-explanation for the same class is translated into the rule "If the image does not contain a Large Cube then it is not classified to class 1". We can see that over-explanations tend to be more general, in order to include all the predictions of the classifier, while on the other side, under-explanations tend to be much more specific. That's why we find approximate explanations to be very useful for the general description of a classifier. Both over-and under-explanations tend to have low degree, while in general explanations with high degree provide us with a more accurate approximation of what the classifier has learned. So it is also useful to consider approximate explanations of high degree, even though they cannot be translated to rules like overand under-explanations.</p><p>It is interesting to note that the over-explanation produced for class 1 contains a Large Cube but not a Large Cylinder. This gives us the information that in the training process the classifier learned to pay more attention to the presence of cubes rather than the presence of cylinders. The elements of the under-explanation that differ from the true rule of class 1 can be a great starting point for a closer inspection of the classifier. We expected the presence of a Gray Cube from the confounding factor introduced in the training and validation sets, but in a real world scenario similar insights can be reached by inspecting the queries. In our case, we further inquired the role that the Gray Cube and the Large Metal Object play in the under-explanation by removing either of them from the query and examining its new performance. In Table <ref type="table" target="#tab_4">3</ref> we can see that the gray color was essential for the under-explanation while the Large Metal Object was not, and in fact its removal improved the under-explanation and returned almost the entire class.</p><p>Another result that piqued our attention is the approximate explanation for class 3 which is the actual rule that describes this class. This explanation returns two negative individuals which we can also see in the confusion matrix of the classifier and we were interested to examine what sets these two individuals apart. We found that both of these individuals are answers to the query "y1 is Large, Gray, Cube". This shows us once again the large effect the confounding factor of class 1 had on the classifier.</p><p>Our overall results show that the classifier tends to emphasize low level information such as color and shape and ignores high level information such as texture and the combined presence of multiple objects. This is the reason why the confounding factor of class 1 had an important effect to the way images are classified, while the confounding factor of class 2 seems to have had </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusion</head><p>We introduced a theoretical framework for representing global explanations for ML classifiers in the form of conjunctive queries. We also developed algorithms for computing explanations for limited knowledge bases, and investigated some of their quality-related properties. Using CLEVR-Hans3, we were able to generate multiple explanations for a deep learning classifier.</p><p>In several cases, we found the explanations to be useful for detecting potential biases, and providing a more intuitive approach for evaluating classifiers besides performance metrics which are typically used.</p><p>One of the conclusions we can draw from our experiments is the importance of developing methods evaluating explanations, which we plan to explore in future work. The three metrics used in this paper are simple and intuitive, however none take under consideration the humanreadability factor which is crucial for explainability.</p><p>Finally, the quality and usefulness of explanations generated with the proposed methodology depends on the characteristics of the explanation dataset. Thus, in future work we also plan to investigate what constitutes a "good" explanation dataset, in addition to experimenting on more specialized domains in which explainability is critical, such as in medical applications.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The three classes of CLEVR-Hans3 with their rules and the confounding factors in parentheses.</figDesc><graphic coords="9,172.63,84.19,250.01,243.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Check if unifying 𝑣 𝑗 of 𝑞 ′ with 𝑣 𝑖 of 𝑞 ′ would be the same as deleting it:7 if 𝐿(𝑣 𝑗 ) ⊆ 𝐿(𝑣 𝑖 ) and ((𝑣 𝑗 , 𝑟, 𝑣 𝑘 ) ∈ 𝐸 ⇒ (𝑣 𝑖 , 𝑟, 𝑣 𝑘 ) ∈ 𝐸, 𝑘 ̸ = 𝑗) and((𝑣 𝑘 , 𝑟, 𝑣</figDesc><table><row><cell cols="2">Algorithm 1: MINIMIZE</cell></row><row><cell cols="2">Input: Query 𝑞 = (𝑉, 𝐸, 𝐿) to be minimized.</cell></row><row><cell cols="2">Output: The minimized query 𝑞 ′ .</cell></row><row><cell cols="2">1 𝑛 ← 𝑉</cell></row><row><cell cols="2">2 𝑞 ′ ← 𝑞</cell></row><row><cell>3 do</cell><cell></cell></row><row><cell>4</cell><cell>𝑞 ← 𝑞 ′</cell></row><row><cell>5</cell><cell>foreach pair 0 &lt; 𝑖, 𝑗 ≤ 𝑛, 𝑖 ̸ = 𝑗 do</cell></row><row><cell>6</cell><cell></cell></row></table><note>𝑗 ) ∈ 𝐸 ⇒ (𝑣 𝑘 , 𝑟, 𝑣 𝑖 ) ∈ 𝐸, 𝑘 ̸ = 𝑗) and ((𝑣 𝑗 , 𝑟, 𝑣 𝑗 ) ∈ 𝐸 ⇒ (𝑣 𝑖 , 𝑟, 𝑣 𝑖 ) ∈ 𝐸) then 8 Delete variable 𝑗 from 𝑞 ′ . 9 end end while 𝑞 ′ ̸ = 𝑞 return 𝑞 ′</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>3. Algorithm 3 computes a rough estimate of how disjoint two queries are. It compares every variable 𝑣 1 of 𝑞 1 with every variable 𝑣 2 of 𝑞 2 counting how many concept and role conjuncts containing 𝑣 1 would certainly be removed if 𝑣 1 was unified with 𝑣 2 . It keeps the best such count for 𝑣 1 and Algorithm 2: EXPLAIN Input: A set of individuals {𝑖 1 , 𝑖 2 , . . . 𝑖 𝑛 } ⊆ IN and a threshold 𝑡 of maximum query size. Output: A set of queries as explanations of the individuals. 𝑞 1 , 𝑞 2 ← arg min{disj(𝑞, 𝑞 ′ ) | 𝑞, 𝑞 ′ ∈ queries} Remove 𝑞 1 , 𝑞 2 from 'queries'. 𝑞 ← minimize(LCS(𝑞 1 , 𝑞 2 ))adds it to the estimate of the disjointness. This process is then symmetrically repeated for the variables of 𝑞 2 .</figDesc><table><row><cell cols="2">1 explanations ← ∅</cell></row><row><cell cols="2">2 queries ← {msq(𝑖 𝑗 )} 𝑛 𝑗=1</cell></row><row><cell cols="2">3 while 'queries' has two or more elements do</cell></row><row><cell>4</cell><cell>Find the least disjoint pair of queries:</cell></row><row><cell>6</cell><cell></cell></row><row><cell>8</cell><cell>if the number of variables in 𝑞 is ≤ 𝑡 then</cell></row><row><cell>9</cell><cell>explanations ← explanations ∪ {𝑞}</cell></row><row><cell>10</cell><cell>queries ← queries ∪ {𝑞}</cell></row><row><cell>11</cell><cell>end</cell></row><row><cell cols="2">12 end</cell></row><row><cell cols="2">13 return explanations</cell></row></table><note>5 7</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1</head><label>1</label><figDesc>Performance of ResNet34 on CLEVR-Hans3.</figDesc><table><row><cell></cell><cell cols="2">Test set metrics</cell><cell></cell><cell cols="2">Confusion martix</cell><cell></cell></row><row><cell cols="7">True label Precision Recall F1-score Class 1 Class 2 Class 3</cell></row><row><cell>Class 1</cell><cell>0.94</cell><cell>0.16</cell><cell>0.27</cell><cell>118</cell><cell>511</cell><cell>121</cell></row><row><cell>Class 2</cell><cell>0.59</cell><cell>0.98</cell><cell>0.54</cell><cell>5</cell><cell>736</cell><cell>9</cell></row><row><cell>Class 3</cell><cell>0.85</cell><cell>1.00</cell><cell>0.92</cell><cell>2</cell><cell>0</cell><cell>748</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2</head><label>2</label><figDesc>Optimal explanations with regard to the three metrics on CLEVR-Hans3.</figDesc><table><row><cell>Metric</cell><cell>Query</cell><cell cols="4">Precision Recall Degree Positives</cell></row><row><cell></cell><cell></cell><cell>Class 1</cell><cell></cell><cell></cell><cell></cell></row><row><cell>Best Precision</cell><cell>y1 is Large, Cube, Gray. y3 is Large, Metal. y2 is Large, Cylinder.</cell><cell>1.00</cell><cell>0.66</cell><cell>0.66</cell><cell>83</cell></row><row><cell>Best Recall</cell><cell>y1 is Large, Cube.</cell><cell>0.09</cell><cell>1.00</cell><cell>0.09</cell><cell>125</cell></row><row><cell>Best Degree</cell><cell>y1 is Large, Cube, Gray. y3 is Large, Metal. y2 is Large, Cylinder.</cell><cell>1.00</cell><cell>0.66</cell><cell>0.66</cell><cell>83</cell></row><row><cell></cell><cell></cell><cell>Class 2</cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell>y1 is Small, Sphere.</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Best Precision</cell><cell>y2 is Large, Rubber. y4 is Small, Brown. y3 is Small, Metal, Cube.</cell><cell>1.00</cell><cell>0.09</cell><cell>0.09</cell><cell>116</cell></row><row><cell></cell><cell>y5 is Small, Rubber, Cylinder.</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Best Recall</cell><cell>y1 is Cube.</cell><cell>0.63</cell><cell>1.00</cell><cell>0.63</cell><cell>1247</cell></row><row><cell>Best Degree</cell><cell>y1 is Metal, Cube. y2 is Small, Metal.</cell><cell>0.78</cell><cell>0.8</cell><cell>0.65</cell><cell>1005</cell></row><row><cell></cell><cell></cell><cell>Class 3</cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell>y1 is Metal, Blue.</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Best Precision</cell><cell>y2 is Large, Blue, Sphere. y4 is Small, Rubber. y3 is Yellow, Small, Sphere.</cell><cell>1.00</cell><cell>0.42</cell><cell>0.42</cell><cell>365</cell></row><row><cell></cell><cell>y5 is Metal, Sphere.</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Best Recall</cell><cell>y1 is Large. y2 is Sphere.</cell><cell>0.42</cell><cell>1.00</cell><cell>0.42</cell><cell>878</cell></row><row><cell>Best Degree</cell><cell>y1 is Yellow, Small, Sphere. y2 is Large, Blue, Sphere.</cell><cell>0.99</cell><cell>0.85</cell><cell>0.85</cell><cell>748</cell></row><row><cell cols="2">a much smaller one.</cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 3</head><label>3</label><figDesc>Two modified versions of the class 1 under-explanation produced by removing conjuncts.</figDesc><table><row><cell>Query</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://graphdb.ontotext.com/</note>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Complexity of Algorithms</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.1. Minimize</head><p>Given 𝑛 as the number of variables present in a query, then loop 3 will be executed at most 𝑛 times, since at each loop either a variable is deleted or the algorithm returns. Loop 5 checks all pairs of variables (𝑂(𝑛 2 )), and condition 7 requires 𝑂(𝑛) set comparisons and 2|RN| comparisons of rows and columns of adjacency matrices. Thus the complexity of Algorithm 1 is 𝑂(𝑛 4 ).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.2. Disj</head><p>If the number of variables present in each query is 𝑛 and 𝑚, then the time complexity of the algorithm is 𝑂(𝑛 • 𝑚) due to the two outer loops, while the inner loop and other computations require constant time with some careful pre-processing of the queries so that the edge count is readily available.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.3. Explain</head><p>Regarding the complexity of Algorithm 2, having computed the complexity of its components, let 𝑛 be the number of individuals, 𝑚 be the maximum number of variables present in an individual's MSQ and 𝑡 be the threshold for the number of variables, above which we discard the resulting queries. Finding the connected component which corresponds to each individual's MSQ requires 𝑂(𝑚 2 ) operations, thus initializing the queries list can be done in 𝑂(𝑛𝑚 2 ). Then, at each loop, the algorithm removes two queries from the list, and depending on if the condition is satisfied, it adds a query to the list, thus the loop will be executed at most 𝑛 − 1 times. If we have stored the values of disj(𝑞, 𝑞 ′ ) after computing them for the first time, then we can find the pair of queries which minimize it with 𝑂(𝑛 2 ) operations, while computing them for the first time has a time complexity of 𝑂(𝑚 2 ) for each pair of queries as shown in the previous paragraph. Initially there are 𝑛(𝑛−1) 2 pairs, while on iteration 𝑖, by adding a new query to the list, 𝑛 − 𝑖 new pairs are created. The maximum number of variables for created queries is 𝑡 ≥ 𝑚, thus all executions of finding the least disjoint pair of queries will require 𝑂(𝑛 3 + 𝑛 2 𝑡 2 ) operations. Finally computing the LCS can be done in 𝑂(𝑡 4 ), while the complexity of minimization is also 𝑂(𝑡 4 ), resulting in the time complexity of Algorithm 2 as 𝑂(𝑛 3 + 𝑛 2 𝑡 2 + 𝑛𝑡 4 ).</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Turek</surname></persName>
		</author>
		<ptr target="https://www.darpa.mil/program/explainable-artificial-intelligence" />
		<title level="m">Explainable artificial intelligence (xai), Defense Advanced Research Projects Agency</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Interpretable machine learning: definitions, methods, and applications</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">J</forename><surname>Murdoch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kumbier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Abbasi-Asl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yu</surname></persName>
		</author>
		<idno>CoRR abs/1901.04592</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A survey of methods for explaining black box models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ruggieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Turini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page">42</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Arrieta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">D</forename><surname>Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bennetot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barbado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gil-Lopez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Molina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Benjamins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chatila</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Herrera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Inf. Fusion</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="82" to="115" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Hogan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Blomqvist</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cochez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Amato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>De Melo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gutiérrez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E L</forename><surname>Gayo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kirrane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Neumaier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Polleres</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Navigli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Ngomo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Rashid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rula</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Schmelzeisen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Sequeda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Staab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zimmermann</surname></persName>
		</author>
		<idno>CoRR abs/2003.02320</idno>
		<title level="m">Knowledge graphs</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">W</forename><surname>Stammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Schramowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kersting</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2011.12854</idno>
		<title level="m">Right for the right concept: Revising neurosymbolic concepts by interacting with their explanations</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Scalable bayesian rule lists</title>
		<author>
			<persName><forename type="first">H</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Rudin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Seltzer</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v70/yang17h.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 34th International Conference on Machine Learning, ICML 2017</title>
				<meeting>the 34th International Conference on Machine Learning, ICML 2017<address><addrLine>Sydney, NSW, Australia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017-08-11">6-11 August 2017, 2017</date>
			<biblScope unit="page" from="3921" to="3930" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Rulematrix: Visualizing and understanding classifiers with rules</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Ming</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Qu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bertini</surname></persName>
		</author>
		<idno type="DOI">10.1109/TVCG.2018.2864812</idno>
		<ptr target="https://doi.org/10.1109/TVCG.2018.2864812.doi:10.1109/TVCG.2018.2864812" />
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Vis. Comput. Graph</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="342" to="352" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Extracting tree-structured representations of trained networks</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Craven</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Shavlik</surname></persName>
		</author>
		<ptr target="http://papers.nips.cc/paper/1152-extracting-tree-structured-representations-of-trained-networks" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 8</title>
				<meeting><address><addrLine>NIPS, Denver, CO, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1995">November 27-30, 1995, 1995</date>
			<biblScope unit="page" from="24" to="30" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hooker</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1610.09036</idno>
		<title level="m">Interpreting models via single tree approximation</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Explaining trained neural networks with semantic web technologies: First steps</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Doran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raymer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>NeSy</editor>
		<imprint>
			<date type="published" when="2003">2003. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Human-driven FOL explanations of deep learning</title>
		<author>
			<persName><forename type="first">G</forename><surname>Ciravegna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Maggini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Melacci</surname></persName>
		</author>
		<idno type="DOI">10.24963/ijcai.2020/309</idno>
		<ptr target="https://doi.org/10.24963/ijcai.2020/309.doi:10.24963/ijcai.2020/309" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020</title>
				<meeting>the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="2234" to="2240" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Examples are not enough, learn to criticize! criticism for interpretability</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Koyejo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Khanna</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper/2016/hash/5680522b8e2bb01943234bce7bf84534-Abstract.html" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016</title>
				<meeting><address><addrLine>Barcelona, Spain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">December 5-10, 2016. 2016</date>
			<biblScope unit="page" from="2280" to="2288" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Efficient data representation by selecting prototypes with importance weights</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Gurumoorthy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dhurandhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Cecchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Aggarwal</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICDM.2019.00036</idno>
		<ptr target="https://doi.org/10.1109/ICDM.2019.00036.doi:10.1109/ICDM.2019.00036" />
	</analytic>
	<monogr>
		<title level="m">2019 IEEE International Conference on Data Mining, ICDM 2019</title>
				<meeting><address><addrLine>Beijing, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">November 8-11, 2019, 2019</date>
			<biblScope unit="page" from="260" to="269" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Computing least common subsumers in description logics</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">W</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Borgida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hirsh</surname></persName>
		</author>
		<ptr target="http://www.aaai.org/Library/AAAI/1992/aaai92-117.php" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th National Conference on Artificial Intelligence</title>
				<meeting>the 10th National Conference on Artificial Intelligence<address><addrLine>San Jose, CA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1992">July 12-16, 1992, 1992</date>
			<biblScope unit="page" from="754" to="760" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Structural subsumption and least common subsumers in a description logic with existential and number restrictions</title>
		<author>
			<persName><forename type="first">R</forename><surname>Küsters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Molitor</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11225-005-3705-5</idno>
		<ptr target="https://doi.org/10.1007/s11225-005-3705-5.doi:10.1007/s11225-005-3705-5" />
	</analytic>
	<monogr>
		<title level="j">Stud Logica</title>
		<imprint>
			<biblScope unit="volume">81</biblScope>
			<biblScope unit="page" from="227" to="259" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Computing the least common subsumer w.r.t. a background terminology</title>
		<author>
			<persName><forename type="first">F</forename><surname>Baader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sertkaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Turhan</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jal.2006.03.002</idno>
		<ptr target="https://doi.org/10.1016/j.jal.2006.03.002.doi:10.1016/j.jal.2006.03.002" />
	</analytic>
	<monogr>
		<title level="j">J. Appl. Log</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="392" to="420" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A tableaux-based method for computing least common subsumers for expressive description logics</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Donini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Colucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">D</forename><surname>Noia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">D</forename><surname>Sciascio</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-477/paper_22.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd International Workshop on Description Logics (DL 2009)</title>
				<meeting>the 22nd International Workshop on Description Logics (DL 2009)<address><addrLine>Oxford, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">July 27-30, 2009, 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Tractable reasoning and efficient query answering in description logics: The DL-Lite family</title>
		<author>
			<persName><forename type="first">D</forename><surname>Calvanese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D</forename><surname>Giacomo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lembo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lenzerini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rosati</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Autom. Reason</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="385" to="429" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Computing datalog rewritings beyond horn ontologies</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Grau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Motik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Stoilos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Horrocks</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI, IJCAI/AAAI</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="832" to="838" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Inside the query space of DL knowledge bases</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chortaras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Giazitzoglou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Stamou</surname></persName>
		</author>
		<ptr target=".org" />
	</analytic>
	<monogr>
		<title level="m">Description Logics</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">2373</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Resolution-based rewriting for horn-SHIQ ontologies</title>
		<author>
			<persName><forename type="first">D</forename><surname>Trivela</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Stoilos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chortaras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Stamou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowl. Inf. Syst</title>
		<imprint>
			<biblScope unit="volume">62</biblScope>
			<biblScope unit="page" from="107" to="143" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Reverse engineering SPARQL queries</title>
		<author>
			<persName><forename type="first">M</forename><surname>Arenas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">I</forename><surname>Diaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">V</forename><surname>Kostylev</surname></persName>
		</author>
		<idno type="DOI">10.1145/2872427.2882989</idno>
		<idno>doi:10.1145/2872427.2882989</idno>
		<ptr target="https://doi.org/10.1145/2872427.2882989" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th International Conference on World Wide Web, WWW 2016</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Bourdeau</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Hendler</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Nkambou</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Horrocks</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><forename type="middle">Y</forename><surname>Zhao</surname></persName>
		</editor>
		<meeting>the 25th International Conference on World Wide Web, WWW 2016<address><addrLine>Montreal, Canada</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">April 11 -15, 2016. 2016</date>
			<biblScope unit="page" from="239" to="249" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Query reverse engineering</title>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">T</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">Y</forename><surname>Chan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Parthasarathy</surname></persName>
		</author>
		<idno type="DOI">10.1007/s00778-013-0349-3</idno>
		<ptr target="https://doi.org/10.1007/s00778-013-0349-3.doi:10.1007/s00778-013-0349-3" />
	</analytic>
	<monogr>
		<title level="j">VLDB J</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="721" to="746" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Query-based entity comparison in knowledge graphs revisited</title>
		<author>
			<persName><forename type="first">A</forename><surname>Petrova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">V</forename><surname>Kostylev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Grau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Horrocks</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-30793-6_32</idno>
		<idno>doi:</idno>
		<ptr target="10.1007/978-3-030-30793-6\_32" />
	</analytic>
	<monogr>
		<title level="m">The Semantic Web -ISWC 2019 -18th International Semantic Web Conference</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">C</forename><surname>Ghidini</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">O</forename><surname>Hartig</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Maleshkova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Svátek</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><forename type="middle">F</forename><surname>Cruz</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Hogan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Song</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Lefrançois</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Gandon</surname></persName>
		</editor>
		<meeting><address><addrLine>Auckland, New Zealand</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">October 26-30, 2019. 2019</date>
			<biblScope unit="volume">11778</biblScope>
			<biblScope unit="page" from="558" to="575" />
		</imprint>
	</monogr>
	<note>Proceedings, Part I</note>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Sparqlbye: Querying rdf data by example</title>
		<author>
			<persName><forename type="first">G</forename><surname>Diaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Arenas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Benedikt</surname></persName>
		</author>
		<idno type="DOI">10.14778/3007263.3007302</idno>
		<idno>doi:10.14778/3007263.3007302</idno>
		<ptr target="https://doi.org/10.14778/3007263.3007302" />
	</analytic>
	<monogr>
		<title level="m">Proc. VLDB Endow</title>
				<meeting>VLDB Endow</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="1533" to="1536" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m">The Description Logic Handbook: Theory, Implementation, and Applications</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Baader</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Calvanese</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Mcguinness</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Nardi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><forename type="middle">F</forename><surname>Patel-Schneider</surname></persName>
		</editor>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<title level="m" type="main">An Introduction to Description Logics and Query Rewriting</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kontchakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zakharyaschev</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-10587-1_5</idno>
		<idno>doi:</idno>
		<ptr target="10.1007/978-3-319-10587-1_5" />
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="195" to="244" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<title level="m" type="main">An Introduction to Description Logic</title>
		<author>
			<persName><forename type="first">F</forename><surname>Baader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Horrocks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Sattler</surname></persName>
		</author>
		<idno type="DOI">10.1017/9781139025355</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>Cambridge University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Removing redundancy from a clause</title>
		<author>
			<persName><forename type="first">G</forename><surname>Gottlob</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">G</forename><surname>Fermüller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artif. Intell</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="page" from="263" to="289" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">Visual genome: Connecting language and vision using crowdsourced dense image annotations</title>
		<author>
			<persName><forename type="first">R</forename><surname>Krishna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Groth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kravitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Kalantidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L.-J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Shamma</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1602.07332</idno>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<title level="m" type="main">CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning</title>
		<author>
			<persName><forename type="first">J</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hariharan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Van Der Maaten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fei-Fei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">L</forename><surname>Zitnick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">B</forename><surname>Girshick</surname></persName>
		</author>
		<idno>CoRR abs/1612.06890</idno>
		<ptr target="http://arxiv.org/abs/1612.06890.arXiv:1612.06890" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">Right for the right concept: Revising neurosymbolic concepts by interacting with their explanations</title>
		<author>
			<persName><forename type="first">W</forename><surname>Stammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Schramowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kersting</surname></persName>
		</author>
		<idno>CoRR abs/2011.12854</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<title level="m" type="main">Deep residual learning for image recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
		<idno>CoRR abs/1512.03385</idno>
		<ptr target="http://arxiv.org/abs/1512.03385.arXiv:1512.03385" />
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
