<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Conceptual Edits as Counterfactual Explanations</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Giorgos</forename><surname>Filandrianos</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">AILS lab</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Konstantinos</forename><surname>Thomas</surname></persName>
							<email>konstantinos.thomas@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">AILS lab</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Edmund</forename><surname>Dervakos</surname></persName>
							<email>eddiedervakos@islab.ntua.gr</email>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">AILS lab</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giorgos</forename><surname>Stamou</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electrical and Computer Engineering</orgName>
								<orgName type="laboratory">AILS lab</orgName>
								<orgName type="institution">National Technical University of Athens</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Conceptual Edits as Counterfactual Explanations</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">9286120F087D365AEDB92DA0F8FBCD79</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T09:15+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Counterfactual Explanations</term>
					<term>XAI</term>
					<term>Knowledge Graphs</term>
					<term>Description Logics</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We propose a framework for generating counterfactual explanations of black-box classifiers, which answer the question "What has to change for this to be classified as X instead of Y?" in terms of given domain knowledge. Specifically, we identify minimal and meaningful "concept edits" which, when applied, change the prediction of a black-box classifier to a desired class. Furthermore, by accumulating multiple counterfactual explanations from interesting regions of a dataset, we propose a method to estimate a "global" counterfactual explanation for that region and a desired target class. We implement algorithms and show results from preliminary experiments employing CLEVR-Hans3 and COCO as datasets. The resulting explanations were useful, and even unintentionally revealed a bias in the classifier's training set of which we were unaware.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Public concerns about biases within machine learning (ML) models have created an increased demand for transparent AI <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. End users are quickly realizing that, for them to confidently count on the impressive outputs of AI models, those outputs need to be accompanied by proper explanations. Being able to assess the reasons behind an AI model's suggestion is an essential component of the trust that is needed for any organization, government, or professional to assuredly count on AI tools and incorporate them into their workflow. Unsurprisingly, this "black box" problem, which gained popular traction with the introduction of ML tools to end users, had already been a pain point for researchers in the Deep Learning field for many years <ref type="bibr" target="#b2">[3]</ref>. Vetting a deep model for flaws and biases has been analogous to performing an autopsy on a brain in the hope of discerning its thoughts.</p><p>One of the more interesting techniques being explored for uncovering the causes behind model outputs is counterfactual explanations. Counterfactual explanations answer the question "What would have to change for something to be classified as X instead of Y?". A real, GDPR-inspired <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref> example would be asking a bank's AI model that declined our loan "What would I have to change for my loan to be approved?". Such questions have a whole spectrum of plausible combinations as answers. The algorithm's goal is to find the one requiring the least amount of change, customized for that particular situation, while also being feasible and actionable in the real world. A key element of counterfactuals is their reliance on notions of distance and similarity <ref type="bibr" target="#b5">[6]</ref>. 
In our approach, we propose employing a measure of conceptual distance between data samples, combined with the amount of change at the output of a black-box classifier, as a criterion for generating counterfactual explanations.</p><p>There are many approaches to counterfactual explanations in recent literature. Poyiadzi et al. <ref type="bibr" target="#b6">[7]</ref> define counterfactual explanations as "feasible paths" in the data, which respect the data distribution and satisfy feasibility and actionability constraints. We take inspiration from this work and attempt to incorporate the constraints in our approach. Goyal et al. <ref type="bibr" target="#b7">[8]</ref> propose a method for detecting which regions in an image should be changed, by "opening" the black box and utilizing the features extracted in the first layers of a deep neural network. They approach the problem as a "minimum edit" problem, which is close to the way we approach it, with some important differences: we do not require access to the model's weights, the explanations we provide take the form of concept edits instead of pixel edits, and our approach is suited to any domain besides images. Another method for the visual domain, proposed by Zhao et al. <ref type="bibr" target="#b8">[9]</ref>, uses a text-to-image generative adversarial network to generate counterfactual visual explanations. Like ours, this approach utilizes external knowledge and does not rely solely on the given features and classes of a model. For numeric tabular data, Gomez et al. <ref type="bibr" target="#b9">[10]</ref> propose a heuristic method for detecting the minimal set of changes required for the prediction of a classifier to change, and provide a visualization tool for end users. 
This approach is similar to ours in that it computes a minimal set of changes; however, instead of being applied to continuous numerical features, ours is applied to concepts that are independent of the features the classifier accepts as input. For further reading, we refer to literature surveys on counterfactual explanations <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>.</p><p>The majority of AI today is data-driven, sub-symbolic machine learning. Models tend to be convoluted algebraic matrices that are difficult for humans to interpret. Knowledge graphs <ref type="bibr" target="#b12">[13]</ref>, on the other hand, provide symbolic background knowledge in a machine-readable and human-understandable format. Knowledge representation techniques seem like a promising complement to machine learning for providing meaningful explanations <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>. For instance, Silva et al. <ref type="bibr" target="#b15">[16]</ref> utilize knowledge graphs such as WordNet <ref type="bibr" target="#b16">[17]</ref> in a composite approach to text entailment. They outperform the state of the art while simultaneously explaining their predictions, thanks to the external knowledge. Liartis et al. <ref type="bibr" target="#b17">[18]</ref> and Dervakos et al. <ref type="bibr" target="#b18">[19]</ref> provide explanations for black-box classifiers by attempting to mimic the classifier's behaviour with semantic query answering over external knowledge. Daniels et al. <ref type="bibr" target="#b19">[20]</ref> propose exploiting the WordNet hierarchy to perform scene classification from images with neural networks in an explainable fashion. Alirezaie et al. <ref type="bibr" target="#b20">[21]</ref> utilize external ontological knowledge to explain the errors of a satellite image classifier. 
For further reading on knowledge graphs as a tool for explainability, we refer to the recent survey by Tiddi et al. <ref type="bibr" target="#b21">[22]</ref>. Following this line of work, our approach to counterfactual explanations makes use of external knowledge graphs, in the form of concept hierarchies. Specifically, our contributions can be summarized as follows:</p><p>• We introduce a theoretical framework for representing and computing counterfactual explanations with respect to concepts that characterize data samples and are linked with external knowledge, in the form of concept hierarchies. • We propose using a conceptual edit distance which is adaptable through the assignment of costs via user input, in order to satisfy any real-world constraints. This edit distance is based on the concept hierarchy, and is closely related to other semantic distance/similarity measures which have been proposed over the years <ref type="bibr" target="#b22">[23]</ref>. • We accumulate multiple counterfactual explanations, in order to generate a "global" explanation for a specific class (which we call generalized counterfactual explanations). To our knowledge, we are the first to explore global counterfactual explanations. • We propose and implement algorithms for generating explanations in this context, and show results from preliminary experiments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head><p>For describing concepts and the relationships between them we use the notation of description logics <ref type="bibr" target="#b23">[24]</ref>. Specifically, in this work we utilize data annotations which use a vocabulary linked to concept hierarchies, in the form of taxonomies. We choose the formalism of description logics because in the future we plan to expand our framework to work with more expressive knowledge besides taxonomies of concepts. Given a set of concept names CN, each of which represents an atomic concept, a concept C is defined as: C := A|⊤|⊥, where A ∈ CN, ⊤ is the universal concept and ⊥ is the bottom concept. For defining relationships between concepts we use a TBox 𝑇 , which is a set of terminological axioms of the form A ⊑ B, where A and B are atomic concepts, and ⊑ denotes inclusion. Inclusion is transitive, meaning: (𝐴 ⊑ 𝐵 and 𝐵 ⊑ 𝐶) ⇒ 𝐴 ⊑ 𝐶. Such a TBox may be represented as a directed graph 𝐺 = (𝑉, 𝐸), where there is a 1−1 matching between vertices of the graph and concepts: 𝑉 ↔ CN ∪ {⊤}, and there is an edge from vertex 𝑣 1 which matches to concept A 1 to vertex 𝑣 2 which matches to concept A 2 iff A 1 ⊑ A 2 ∈ 𝑇 . Furthermore, we consider any concept which appears only on the right-hand side of inclusion axioms in the given TBox to be connected with an incoming edge from the node corresponding to ⊤. We will refer to this graph as the TBox graph. When we ignore the direction of edges in the TBox graph, we will refer to it as the undirected TBox graph.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Interpreting Black-Box Classifiers with Terminology Based Counterfactual Explanations</head><p>An overview of our framework is shown in Figure <ref type="figure" target="#fig_0">1</ref>. In order to generate explanations for a black-box classifier, we need a terminology in terms of which we express the explanations, in addition to a set of testing items for the classifier. Specifically, we need a dataset, where for each sample we require: a) features which can be fed to the classifier, and b) a semantic description of the sample in the form of a set of concepts, in terms of which we will provide the explanations.</p><p>In the general case, this set of concepts is linked to external knowledge, in the form of a TBox. For instance, in an explanation dataset for image classifiers, the first element (𝑥 𝑖 ) of a tuple represents an image, while the second element (𝐶 𝑖 ) might be a set of concepts that describe objects in the image. In Section 5 we experiment on the image classification explanation datasets CLEVR-Hans3 <ref type="bibr" target="#b24">[25]</ref> and COCO <ref type="bibr" target="#b25">[26]</ref>. In another example, the first element of a tuple might be raw text, which is fed to a black-box natural language model, while the second element might be a set of concepts from external knowledge such as WordNet <ref type="bibr" target="#b16">[17]</ref> or ConceptNet <ref type="bibr" target="#b26">[27]</ref>, or even domain-specific knowledge such as SNOMED-CT <ref type="bibr" target="#b27">[28]</ref> for the medical domain, leading to a dataset similar to the one used in <ref type="bibr" target="#b28">[29]</ref>. Given an explanation dataset, we can answer the question "What has to change in order to be classified as X instead of Y?" in terms of concepts (𝐶 𝑖 ), instead of features (𝑥 𝑖 ). 
In many cases, this leads to more intuitive explanations, especially when 𝑥 𝑖 is sub-symbolic raw data (pixels, audio signals, etc.) and 𝐶 𝑖 is linked to useful knowledge. More specifically, the explanations which we generate will have the form of edits on a set of concepts, where the cost of each edit is determined by the distance between concepts in the TBox graph.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 2 (Concept Distance).</head><p>Let CN be a set of atomic concepts, 𝑇 be a TBox for CN, and 𝐺 𝑇 be the corresponding undirected TBox graph. The distance from concept A to concept B, where A, B ∈ CN ∪ {⊤} is defined as the length of the shortest path on 𝐺 𝑇 from the vertex 𝑣 A to the vertex 𝑣 B , where 𝑣 A , 𝑣 B are the vertices corresponding to atomic concepts A, B. We write 𝑑 𝑇 (A, B) to denote concept distance.</p><p>For example if we were given the TBox {Cat ⊑ Mammal, Dog ⊑ Mammal, Ant ⊑ Insect, Mammal ⊑ Animal, Insect ⊑ Animal}, then the concept distance from Cat to Dog would be 2, with the path on 𝐺 𝑇 being Cat → Mammal → Dog. The concept distance from Cat to Ant would be 4 with the shortest path being Cat → Mammal → Animal → Insect → Ant. Finally, the concept distance from Cat to ⊤ would be 3 with the path Cat → Mammal → Animal → ⊤.</p><p>We will use the notion of concept distance for assigning cost to edit operations on sets of concepts. These edit operations will end up being part of the counterfactual explanations.</p></div>
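The concept distance of Definition 2 can be sketched in a few lines of Python. This is an illustrative implementation under our own naming (`tbox_graph`, `concept_distance` are not from the paper), assuming unit edge weights; weighted edges would simply replace the BFS with Dijkstra's algorithm.

```python
from collections import deque

TOP = "⊤"  # the universal concept

def tbox_graph(tbox):
    """Undirected TBox graph from inclusion axioms given as pairs (A, B) for A ⊑ B.
    Concepts never appearing on the left-hand side of an axiom are linked to ⊤."""
    adj = {TOP: set()}
    for a, b in tbox:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    left_hand = {a for a, _ in tbox}
    for concept in list(adj):
        if concept != TOP and concept not in left_hand:
            adj[concept].add(TOP)
            adj[TOP].add(concept)
    return adj

def concept_distance(adj, a, b):
    """d_T(a, b): length of the shortest path between two concepts (unit weights)."""
    if a == b:
        return 0
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        for neighbour in adj[node]:
            if neighbour == b:
                return d + 1
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, d + 1))
    return float("inf")
```

On the example TBox of the text, `concept_distance` returns 2 for Cat/Dog, 4 for Cat/Ant and 3 for Cat/⊤.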
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 3 (Concept Set Edit).</head><p>Let CN be a set of atomic concept names, 𝑇 be a TBox for CN, and 𝒜 ⊆ CN be a set of concepts. A concept set edit on 𝒜 is any of:</p><formula xml:id="formula_0">• Replacement of concept A ∈ 𝒜 with concept B ̸ ∈ 𝒜. We write 𝑒 A→B (𝒜) to denote replacement of A in 𝒜 with B. • Deletion of concept A ∈ 𝒜 from 𝒜. We write 𝑒 A→⊤ (𝒜) to denote deletion of A from 𝒜. • Insertion of concept B ̸ ∈ 𝒜 into 𝒜. We write 𝑒 ⊤→B (𝒜) to denote insertion of B into 𝒜.</formula><p>The cost of a concept set edit 𝑒 𝑥→𝑦 is defined as the concept distance from 𝑥 to 𝑦: 𝑑 𝑇 (𝑥, 𝑦), where 𝑥, 𝑦 ∈ CN ∪ {⊤}. The resulting set of concepts 𝑒 𝑥→𝑦 (𝒜) is called a transformation of 𝒜.</p><p>As mentioned in Section 2, we allow for the assignment of positive weights to the edges of the undirected TBox graph. This allows for the incorporation of additional constraints, to better reflect the actionability and feasibility of changes in the real world. In this work, we do not systematically assign weights; we consider them to be given. For example, for a given application it might be useful to make the deletion of an Animal concept (𝑒 Animal→⊤ ) more costly than the replacement of a Cat concept with a Mammal concept (𝑒 Cat→Mammal ), so we would appropriately tweak the edge weights of the undirected TBox graph.</p><p>As is apparent from the notation 𝑒 𝑥→𝑦 , we treat the deletion of concept A from a set as equivalent to its replacement with the universal concept ⊤, while the insertion of concept B is treated as replacing a ⊤ concept with B. This entails that inserting or deleting a concept is more costly the further away it is from the ⊤ vertex in the TBox graph, which is a measure of how specific the concept is. 
Continuing the previous example, given a set 𝒜, inserting a Cat concept into the set would have a cost of 𝑑 𝑇 (⊤, Cat) = 3, while inserting an Animal concept would have a cost of 𝑑 𝑇 (⊤, Animal) = 1. Using concept set edits, we can define a concept set edit distance between sets of concepts, as the minimum cost of a set of concept set edits which, when applied to the first set of concepts, transforms it into the second.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 4 (Concept Set Edit Distance).</head><p>Let CN be a set of concept names, 𝑇 be a TBox on CN and 𝒜, ℬ be sets of concepts 𝒜, ℬ ⊆ CN. The concept set edit distance from 𝒜 to ℬ is defined as the minimum cost of a set of concept set edits which transform 𝒜 into ℬ.</p><p>Intuitively, the concept set edit distance represents the minimum cost of converting the concepts present in the first set into those present in the second. For example, given two sets of concepts 𝒜 = {Cat, Insect} and ℬ = {Animal}, and the TBox from the previous example, their concept set edit distance will be 𝐷 𝑇 (𝒜,</p><formula xml:id="formula_1">ℬ) = min{[𝑑 𝑇 (Cat, Animal) + 𝑑 𝑇 (Insect, ⊤)], [𝑑 𝑇 (Cat, ⊤) + 𝑑 𝑇 (Insect, Animal)]} = min {(2 + 2), (3 + 1)} = 4.</formula><p>In the context of our framework, the concept set edit distance is used to measure how conceptually similar two elements of an explanation dataset are, and is one of the two key components for generating counterfactual explanations. The second component involves the black-box classifier which we want to explain. Specifically, we want counterfactual explanations to represent small conceptual changes (small concept set edit distance) which lead to large changes in the output of the classifier. For this reason, we define the significance of transforming one element of an explanation dataset into another.</p></div>
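Definition 4 reduces to a minimum-cost matching between the two sets, once each set is padded with ⊤ so that deletions and insertions become replacements involving ⊤. The sketch below, with names of our own choosing, brute-forces the matching over permutations, which is adequate for small sets; the paper instead uses Karp's minimum weight full matching algorithm on the bipartite graph.

```python
from collections import deque
from itertools import permutations

TOP = "⊤"  # deletions pair a concept with ⊤, insertions pair ⊤ with a concept

def make_distance(edges):
    """Shortest-path concept distance on an undirected TBox graph (unit weights)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    def dist(a, b):
        if a == b:
            return 0
        seen, frontier = {a}, deque([(a, 0)])
        while frontier:
            node, d = frontier.popleft()
            for nxt in adj.get(node, ()):
                if nxt == b:
                    return d + 1
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
        return float("inf")
    return dist

def set_edit_distance(dist, A, B):
    """D_T(A, B): minimum total edit cost turning concept set A into concept set B.
    Pads both sets with ⊤ so every edit becomes a pairing, then brute-forces the
    minimum-cost matching (fine for small sets)."""
    A, B = set(A) - set(B), set(B) - set(A)   # shared concepts cost nothing
    left = sorted(A) + [TOP] * len(B)
    right = sorted(B) + [TOP] * len(A)
    return min(sum(dist(x, y) for x, y in zip(left, perm) if x != y)
               for perm in permutations(right))
```

On the running example, `set_edit_distance(dist, {"Cat", "Insect"}, {"Animal"})` evaluates to 4, matching the worked computation above.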
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 5 (Significance of Transformation).</head><p>Let 𝐹 be a classifier, 𝑇 be a TBox and 𝑎 = (𝑥 𝑎 , 𝐶 𝑎 ), 𝑏 = (𝑥 𝑏 , 𝐶 𝑏 ) be two elements of an explanation dataset for 𝐹 . The significance of transforming 𝑎 into 𝑏 is defined as:</p><formula xml:id="formula_2">𝜎(𝑎, 𝑏) = |𝐹 (𝑥 𝑎 ) − 𝐹 (𝑥 𝑏 )| / 𝐷 𝑇 (𝐶 𝑎 , 𝐶 𝑏 )</formula><p>Significance of transformation is the measure we use to determine what constitutes a good counterfactual explanation. The local explanations will have the form of a sequence of samples from the explanation dataset (similarly to the approach of Poyiadzi et al. <ref type="bibr" target="#b6">[7]</ref>), but they will be accompanied by sets of concept set edits, which show what has to change conceptually in a data sample for the classification to change. In practice, we construct a directed graph with a node for each sample in the explanation dataset, where the edge between a pair of nodes 𝑎, 𝑏 has a cost of 1/𝜎(𝑎, 𝑏) and, as a label, the set of edits corresponding to the conceptual distance 𝐷 𝑇 . We then compute the shortest path from the given sample to any sample in the desired class, as described in Section 4.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 6 (Local Counterfactual Explanation).</head><p>Let 𝐷 = {𝑥 𝑖 , 𝐶 𝑖 } be an explanation dataset for a classifier 𝐹 , and 𝐺 = (𝑉, 𝐸) be a directed graph, where there is a 1-1 correspondence between elements of 𝐷 and the set of vertices 𝑉 . The set of edges 𝐸 contains an edge of weight 1/𝜎(𝑎, 𝑏) for every ordered pair of elements 𝑎, 𝑏 ∈ 𝐷. Each edge (𝑎, 𝑏) also has as a label a set of concept set edits which optimally transform 𝐶 𝑎 into 𝐶 𝑏 . A counterfactual explanation from element 𝑒 to a class 𝐻 is a path from the node corresponding to 𝑒 to any element 𝑓 for which 𝐹 (𝑓 ) = 𝐻. Counterfactual explanations corresponding to a shortest path from 𝑒 to any such 𝑓 are called optimal counterfactual explanations.</p><p>The shorter the distance of the path corresponding to a local counterfactual explanation, the better the explanation is considered to be, since short distances represent significant transformations.</p><p>Finally, besides acquiring a local explanation of how a single sample should be changed for it to be classified in a specific class, we are also concerned with more general explanations which give us an overview of which types of edits are more likely to lead towards a specific predicted class, given a generalization of the initial sample. For example, a counterfactual explanation for a PhD student who is also a musician and was declined a loan might not be informative enough. This extension to global explanations would be able to answer the questions "What do musicians usually change to have their loan accepted?" and "What do PhD students usually change to have their loan accepted?", which, combined with the local explanation, might help the user better understand why the black box is making these decisions.</p><p>Definition 7 (Region of Explanation Dataset). Let CN be a set of concept names, 𝒬 be a set of concepts 𝒬 ⊆ CN, and 𝐷 = {𝑥 𝑖 , 𝐶 𝑖 } be an explanation dataset. 
A region of 𝐷 with description 𝒬 is the subset 𝑅 𝒬 ⊆ 𝐷 of the explanation dataset for which:</p><formula xml:id="formula_3">(𝑥 𝑖 , 𝐶 𝑖 ) ∈ 𝑅 𝒬 ⇐⇒ ∀𝑐 1 ∈ 𝒬, ∃𝑐 2 ∈ 𝐶 𝑖 : 𝑐 2 ⊑ 𝑐 1</formula><p>A region of an explanation dataset is a subset of it which satisfies specific constraints; it is essentially the answer to a query. For example, given a region description 𝒬 = {Animal}, the region 𝑅 𝒬 will contain any sample (𝑥 𝑖 , 𝐶 𝑖 ) of the explanation dataset whose semantic description 𝐶 𝑖 contains a concept 𝑐 which is included in Animal according to the TBox. Generalized counterfactual explanations will then be statistical measures over all optimal local counterfactual explanations from elements of a region. Specifically, they will measure how often a concept is introduced (either via replacement or insertion) and subtract how often it is removed.</p></div>
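Membership in a region boils down to checking subsumption (the reflexive-transitive closure of ⊑) for each query concept. Below is a minimal Python sketch under our own naming; the TBox in the test extends the paper's running example with a hypothetical Human ⊑ Mammal axiom, so that the {Animal} region behaves as in the example of the next section.

```python
def subsumed(tbox, sub, sup):
    """True iff sub ⊑ sup follows from the TBox axioms (reflexive, transitive)."""
    if sub == sup:
        return True
    parents = {}
    for a, b in tbox:
        parents.setdefault(a, set()).add(b)
    seen, stack = {sub}, [sub]
    while stack:
        concept = stack.pop()
        for parent in parents.get(concept, ()):
            if parent == sup:
                return True
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return False

def region(dataset, query, tbox):
    """R_Q: samples (x_i, C_i) where every query concept subsumes some concept of C_i."""
    return [(x, C) for x, C in dataset
            if all(any(subsumed(tbox, c2, c1) for c2 in C) for c1 in query)]
```

Here `dataset` is a list of (sample, concept set) pairs; the query {Mammal} would, for instance, keep only the samples whose description mentions some mammal.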
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 8 (Generalized Counterfactual Explanation).</head><p>Let 𝑅 𝒬 be a region of an explanation dataset, and 𝐸 𝑅 𝒬 be the multiset containing the labels of the optimal local counterfactual explanations from each element of 𝑅 𝒬 to the desired class. Given a set of concepts 𝒞 ⊆ CN, a generalized counterfactual explanation is an assignment of importance to every concept C ∈ 𝒞, where the importance of a concept C is defined as:</p><formula xml:id="formula_4">(|{𝑒 𝑥→C ∈ 𝐸 𝑅 𝒬 }| − |{𝑒 C→𝑥 ∈ 𝐸 𝑅 𝒬 }|) / |𝑅 𝒬 |</formula><p>where 𝑥 ∈ CN ∪ {⊤}. For example, consider an explanation dataset for a classifier which determines whether an image depicts a bedroom or a veterinarian's office. A region of this explanation dataset with description {Animal} might contain three elements: (𝑥 1 , {Cat, Dog}), (𝑥 2 , {Insect}), (𝑥 3 , {Human, Sofa}). Let the first image be classified as veterinarian's office, while the other two are classified as bedroom. The optimal local counterfactual explanations from each element to the class veterinarian's office might have labels: 𝐸 1 = ∅ (since 𝑥 1 is already classified in the desired class), 𝐸 2 = {𝑒 ⊤→Human , 𝑒 Insect→Cat } and 𝐸 3 = {𝑒 Human→Cat , 𝑒 Sofa→⊤ }. The multiset 𝐸 𝑅 𝒬 containing the labels of all optimal counterfactual explanations will then be 𝐸 𝑅 𝒬 = {𝑒 ⊤→Human , 𝑒 Insect→Cat , 𝑒 Human→Cat , 𝑒 Sofa→⊤ }. A generalized counterfactual explanation for this region would then be: a) Cat with importance 2/3 = 2/3 − 0, b) Insect and Sofa with importance −1/3 = 0 − 1/3, and c) Human with importance 0 = 1/3 − 1/3. Negative importance indicates that a concept is usually removed, while positive importance indicates that it is introduced.</p></div>
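The importance score of Definition 8 can be computed directly from the multiset of edit labels, representing each edit 𝑒 𝑥→𝑦 as a pair (x, y). This is a sketch with our own naming, using exact fractions so the example values above come out exactly:

```python
from collections import Counter
from fractions import Fraction

TOP = "⊤"

def importance(edit_labels, region_size):
    """Importance of each concept: (#introductions − #removals) / |R_Q|.
    Each edit e_{x→y} is a pair (x, y); x = ⊤ marks an insertion, y = ⊤ a deletion."""
    score = Counter()
    for x, y in edit_labels:
        if y != TOP:
            score[y] += 1   # y is introduced (by replacement or insertion)
        if x != TOP:
            score[x] -= 1   # x is removed (by replacement or deletion)
    return {concept: Fraction(v, region_size) for concept, v in score.items()}
```

For the bedroom/veterinarian example, the edits {𝑒 ⊤→Human , 𝑒 Insect→Cat , 𝑒 Human→Cat , 𝑒 Sofa→⊤ } over a region of three samples give Cat an importance of 2/3, Insect and Sofa −1/3, and Human 0.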
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Computing Counterfactual Explanations</head><p>For generating the proposed counterfactual explanations in practice, we need a classifier 𝐹 , an explanation dataset 𝐷, and a TBox 𝑇 . There are three steps to computing explanations. The first step is to create the graph mentioned in Definition 6, the second step is to find appropriate paths in this graph, and the third step (only for generalized counterfactual explanations) is to accumulate these paths and compute importance (from Definition 8). We show an outline of how we create the graph in Algorithm 1. For computing the concept distance between two concepts we find the shortest path on the undirected TBox graph using Dijkstra's algorithm, with a complexity of 𝑂(|CN| + |𝑇 | log |CN|). For computing the Concept Set Edit Distance (Definition 4) from a set of concepts 𝒜 to a set of concepts ℬ, we first remove the common elements from both sets, then we create a bipartite graph in 𝑂(|𝒜||ℬ|), where each element of 𝒜 is connected with an edge to all elements of ℬ, with a weight for each edge corresponding to the concept distance, and then we compute the minimum weight full matching of the bipartite graph using an implementation of Karp's algorithm <ref type="bibr" target="#b29">[30]</ref> for the problem, with a time complexity of 𝑂(|𝒜||ℬ| log |ℬ|). Thus, to create the graph with Algorithm 1, the total time required ends up being 𝑂((𝑛 + 𝑡 log 𝑛)𝑚 4 𝑘 2 log 𝑚), where 𝑛 = |CN|, 𝑚 is the maximum cardinality of a set of concepts, 𝑘 is the size of the explanation dataset and 𝑡 is the size of the TBox. The creation of this graph is done only once per explanation dataset and TBox. To then compute local counterfactual explanations (Definition 6), we use Dijkstra's algorithm to find the shortest path on the already constructed graph (including edge costs and labels) <ref type="foot" target="#foot_0">1</ref>.</p></div>
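The shortest-path step can be sketched as a standard Dijkstra search over the explanation graph, stopping at the first settled node whose prediction matches the target class. The names and the graph encoding below are our own assumptions, not the paper's implementation:

```python
import heapq
from itertools import count

def optimal_counterfactual(graph, start, target_class, predict):
    """Dijkstra over the explanation graph of Definition 6.
    graph[u] maps a neighbour v to (1/σ(u, v), edit_label); predict(v) is the
    black-box prediction for sample v.  Returns (cost, edit labels) of a
    shortest path from start to any sample predicted as target_class."""
    tie = count()                       # tie-breaker so the heap never compares labels
    frontier = [(0.0, next(tie), start, [])]
    best = {start: 0.0}
    while frontier:
        cost, _, node, edits = heapq.heappop(frontier)
        if cost > best.get(node, float("inf")):
            continue                    # stale heap entry
        if predict(node) == target_class:
            return cost, edits          # nearest sample of the target class found
        for nxt, (weight, label) in graph.get(node, {}).items():
            if cost + weight < best.get(nxt, float("inf")):
                best[nxt] = cost + weight
                heapq.heappush(frontier, (cost + weight, next(tie), nxt, edits + [label]))
    return None                         # no sample of the target class is reachable
```

A sample already predicted as the target class yields cost 0 with an empty edit list, matching the 𝐸 1 = ∅ case in the example of Definition 8.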
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experiments</head><p>The experimental objective we found most in tune with counterfactual explanations was testing a classifier for biases. For example, if we trained a classifier on a set of images that included a "grey cube" in all the images of a specific class, would our counterfactual explanation result in an "add a grey cube" answer when asked how we should alter an image for it to belong to that class? This is precisely what we did with the CLEVR-Hans3 <ref type="bibr" target="#b24">[25]</ref> dataset. Due to the control it provides over the generated images and their accompanying descriptions, it was the logical first step. Indeed, we found that our counterfactual algorithm was consistently able to detect these biases. Once this more technical task was accomplished, we sought to experiment on a more intuitive task. The advantage of such a task is that it simulates a more real-world problem than 3D colored objects but, as with many real-world problems, it does not have an objectively correct answer. For example, what defines a bedroom as being a bedroom, and what makes a kitchen a kitchen, in the eyes of the classifier? According to our counterfactual model, for instance, the classifier seems to think that a "bed" and a "refrigerator" are the defining factors of the above classes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">CLEVR-Hans3</head><p>The goal when experimenting with this dataset was twofold. First, we observe some image sequences representing local counterfactuals (ignoring the edit labels), to compare the results with those generated by the implementation provided for FACE <ref type="bibr" target="#b6">[7]</ref>. Second, we want to see whether the generalized counterfactuals can easily detect the (known) bias of the classifier.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 1: Explanation Graph Construction</head><p>Data: a classifier 𝐹 , an explanation dataset 𝐷, an undirected TBox graph 𝐺 𝑇 . Result: the explanation graph 𝐺 𝐸 .</p><formula xml:id="formula_5">// the explanation graph has a node for each element of the explanation dataset
Initialize directed graph 𝐺 𝐸 = (𝑉 𝐸 = 𝐷, 𝐸 𝐸 = ∅);
foreach ordered pair of elements (𝑣 𝑖 , 𝑣 𝑗 ) of 𝐷 do
    compute the concept set edit distance 𝐷 𝑇 (𝐶 𝑖 , 𝐶 𝑗 ) and the corresponding edits {𝑒 𝑐𝑚→𝑐𝑛 } as a minimum weight full matching;
    compute the significance 𝜎(𝑖, 𝑗);
    // add an edge to 𝐺 𝐸 with weight 1/𝜎 and as a label the edits of the minimum weight full matching
    𝐸 𝐸 = 𝐸 𝐸 ∪ {(𝑣 𝑖 , 𝑣 𝑗 , 1/𝜎(𝑖, 𝑗), {𝑒 𝑐𝑚→𝑐𝑛 })};
end
return 𝐺 𝐸</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.1.">Setting</head><p>CLEVR-Hans3 is a dataset of images of sets of 3D geometric shapes which are split into three classes. For each image, information is provided about the objects present, concerning their shape (Sphere, Cube, Cylinder), their size (Large, Small), their material (Metal, Rubber) and their color (Blue, Yellow, Brown, Grey, Green, Purple, Cyan, Red). The three classes contain images that depict: a) a Large Cube and a Large Cylinder, b) a Small Metal Cube and a Small Sphere, and c) a Large Blue Sphere and a Small Yellow Sphere. Furthermore, the first two classes are confounded in the training set, with an intentionally added bias. For the first class, in the training set, the Large Cube is always Grey, while in the test set the color of the Large Cube is random. For the second class, the material of the Small Sphere is Metal in the training set, while in the test set the material of the Small Sphere is random. This means that we expect classifiers trained on the training set to be biased towards the confounding factor. As a classifier we trained a resnet34 <ref type="bibr" target="#b30">[31]</ref> model which achieved 99% accuracy on the validation set (which is confounded), while on the test set the per-class F1 scores were: class 0: 0.27, class 1: 0.54, class 2: 0.92. As expected, the performance is poor for the confounded classes.</p><p>We created two explanation datasets, one from the training set (in order to be compared with FACE, which is intended to be run on the training set), and one from the test set, in order to attempt to detect the biases acquired in training. As a set of concept names CN, we defined a concept for every combination of shape, size, material, and color (including the absence of any of the above), leading to |CN| = 4 × 3 × 3 × 9 = 324. 
As a TBox, we added an inclusion axiom from each concept in CN to every concept with the same description minus one element; for example, GreyCube ⊑ Grey and GreyCube ⊑ Cube. In this way we assigned a set of concepts to each element in the dataset, based on the descriptions provided by the creators of the dataset in the corresponding JSON files.</p></div>
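The concept-name enumeration and inclusion axioms described above can be sketched as follows. This is an illustrative reconstruction, with the attribute lists taken from the dataset description and `None` modelling the absence of an attribute (`"Thing"` standing in for the top concept):

```python
from itertools import product

SHAPES = [None, "Sphere", "Cube", "Cylinder"]        # 4 options incl. absence
SIZES = [None, "Large", "Small"]                     # 3
MATERIALS = [None, "Metallic", "Rubber"]             # 3
COLORS = [None, "Blue", "Yellow", "Brown", "Grey",
          "Green", "Purple", "Cyan", "Red"]          # 9

def concept_name(size, color, material, shape):
    parts = [p for p in (size, color, material, shape) if p]
    return "".join(parts) if parts else "Thing"  # "Thing" stands for the top concept

# every combination of size, color, material and shape: |CN| = 4 * 3 * 3 * 9 = 324
CN = {concept_name(sz, c, m, sh)
      for sh in SHAPES for sz in SIZES for m in MATERIALS for c in COLORS}

def inclusion_axioms():
    """Each concept is subsumed by every concept obtained by dropping one
    attribute, e.g. GreyCube is subsumed by Grey and by Cube."""
    axioms = set()
    for attrs in product(SIZES, COLORS, MATERIALS, SHAPES):
        for i, a in enumerate(attrs):
            if a is not None:
                dropped = attrs[:i] + (None,) + attrs[i + 1:]
                axioms.add((concept_name(*attrs), concept_name(*dropped)))
    return axioms
```

Taking the transitive closure of these axioms yields the full subsumption hierarchy used as the TBox graph.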
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.2.">Local Counterfactuals</head><p>In Figure <ref type="figure">2</ref> we show local counterfactual explanations generated for three randomly selected images (first column) classified in class 1 (Small Metal Cube, Small Sphere, where the Small Sphere is always Metal only in the train set), with the target class being class 0 (Large Cube, Large Cylinder, where the Large Cube is always Grey only in the train set), first using the FACE algorithm <ref type="bibr" target="#b6">[7]</ref> (second column) and then using our proposed algorithm (third column). A first observation is that neither set of results is very intuitive, and we argue that the form of the explanations (a sequence of samples from the training set) is the reason. A second observation is that our approach tends to keep the number of objects in an image constant, which makes sense given the cost of adding and deleting concepts instead of replacing them, while FACE, which relies on the distribution of the dataset and operates on pixels with no knowledge of the depicted objects, tends to transition to images that contain many objects. A final observation is that with both methodologies the color of the Large Cube in the target image is always Grey, which is expected since this experiment ran on the training set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.3.">Generalized Counterfactuals</head><p>In Figures <ref type="figure" target="#fig_2">3,4</ref> we show two generalized counterfactual explanations. The first (fig. <ref type="figure" target="#fig_1">3</ref>) shows the importance of concepts for the region of the explanation dataset constructed from the test set of CLEVR-Hans3 which was classified in class 1, with the target class being class 0, while the second (fig. <ref type="figure" target="#fig_2">4</ref>) shows the importance of concepts for the same region, with the target class being class 2. As mentioned in section 3, negative importance indicates that a concept tends to be removed for the given region and target class, while positive importance indicates that it tends to be inserted.</p><p>A first observation is that the bias of the classifier is immediately detected for the confounded class 0. As mentioned previously, the confounding factor for class 0 is that the Large Cube is always Grey in the train set. This is apparent from the first three bars of the plot on the left, where the most important insertions seem to be the concepts Gray, GrayLargeCube, and GrayLarge. The reason GrayLargeCube has a larger importance than GrayLarge is that, for some local counterfactuals, GrayLarge objects (which are not necessarily Cubes) might be removed, thus lowering the importance of the latter concept. Class 2, on the other hand (Large Blue Sphere, Small Yellow Sphere), is not confounded, and the classifier is not expected to be biased (test F1 score of 0.92). The most important removals seem to be combinations of Cube, Small, and Metal, which makes sense since the source region contains images classified in class 1 (Small Metal Cube, Small Sphere, where the Small Sphere is always Metal in the train set). The most important insertions seem to be Blue, Yellow, Sphere, and combinations of Blue, Large, and Sphere, which coincides with the definition of the class.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Source Image</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>FACE Counterfactual Our Counterfactual</head><p>Figure <ref type="figure">2</ref>: Counterfactuals for 3 randomly selected images (first column), classified in class 1 with target class 0, using the FACE algorithm (second column) and our proposed method (third column)</p></div>
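The importance scores plotted in these figures can be approximated as a signed frequency of concept edits over a region's local counterfactuals. The sketch below assumes a simple count-based weighting (insertions count +1, removals −1, normalised by the number of counterfactuals), which may differ from the exact aggregation used in the paper:

```python
from collections import Counter

def concept_importance(local_counterfactuals):
    """Aggregate concept edits over a region's local counterfactuals.

    Each local counterfactual is a list of (op, concept) edits with
    op in {"insert", "remove"}. Positive scores mean a concept tends
    to be inserted for this region and target class, negative that it
    tends to be removed.
    """
    score = Counter()
    n = len(local_counterfactuals)
    for edits in local_counterfactuals:
        for op, concept in edits:
            score[concept] += 1 if op == "insert" else -1
    return {c: v / n for c, v in score.items()}
```

For example, a region whose counterfactuals always insert Grey but only sometimes insert GrayLargeCube would rank Grey above GrayLargeCube, matching the behaviour discussed above.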
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">COCO</head><p>As a second experiment, we decided to explore more intuitive examples, and thus took advantage of the COCO dataset <ref type="bibr" target="#b25">[26]</ref>, which contains real-world images annotated with objects that we can automatically link to external knowledge such as WordNet.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.1.">Setting</head><p>Examining COCO's labels in the process of determining a class transformation that would utilize them, we concluded that the two classes to be used are "Restaurant"-related and "Bedroom"-related images. Specifically, for the restaurant-related class we gathered all images from COCO that contained the concepts: 1. {dining table, person, pizza} (1000+ images) 2. {dining table, person, wine glass} (1200+ images). For the bedroom-related class we gathered all images that contained the label combinations: 1. {bed, person} (1300+ images) 2. {bed, book} (800+ images) 3. {bed, teddy bear} (300+ images). For each image in COCO, a description of the objects present in that image is provided. To create the explanation dataset, we automatically linked these object descriptions to WordNet synsets using the NLTK Python package<ref type="foot" target="#foot_1">2</ref>. We used WordNet synsets as the set of concept names CN, and the hyponym-hypernym hierarchy as a TBox. We then acquired the image classifier pre-trained on the PLACES dataset <ref type="bibr" target="#b31">[32]</ref>, provided by the creators of that dataset<ref type="foot" target="#foot_2">3</ref>, for scene classification, and made predictions on the aforementioned subset of COCO. This is the black-box classifier for which we provide explanations.</p></div>
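The linking step above used NLTK's WordNet interface; the idea of treating the hyponym-hypernym hierarchy as a TBox can be illustrated without that dependency. The `HYPERNYMS` map below is a hypothetical toy fragment, not the real synset graph:

```python
def hypernym_closure(label, hypernyms):
    """All concepts entailed by a label under a hyponym -> hypernym map
    (mirrors using WordNet's hypernym hierarchy as a TBox)."""
    closure, stack = set(), [label]
    while stack:
        c = stack.pop()
        if c not in closure:
            closure.add(c)
            stack.extend(hypernyms.get(c, ()))
    return closure

# A toy fragment of a WordNet-like hierarchy (illustrative only)
HYPERNYMS = {
    "cat": ["feline"], "feline": ["carnivore"], "dog": ["canine"],
    "canine": ["carnivore"], "carnivore": ["mammal"], "mammal": ["animal"],
}
```

Under this closure, an image labelled "cat" is also assigned feline, carnivore, mammal, and animal, which is why whole chains of hypernyms show up together in the importance plots of Figures 7 and 8.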
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.2.">Local Counterfactuals</head><p>In the first row of Figure <ref type="figure" target="#fig_3">5</ref> we show a local counterfactual explanation for an image classified as "Bedroom" with the target class "Playhouse", which requires only one Concept Edit (𝑒 ⊤→Child ). This example is interesting because "Playhouse" is an erroneous prediction (the ground truth for the second image should be "Bedroom"), so we immediately detect a potential bias of the classifier: if a Child is added to an image of a "Bedroom", it might be classified as a "Playhouse". Similarly, in the second row of Figure <ref type="figure" target="#fig_3">5</ref> we show a local counterfactual explanation for an image classified as "Bedroom" with the target class "Veterinarian's Office", where the resulting target image is again an erroneous prediction. The resulting edit is simply to add a Cat. Finally, in Figure <ref type="figure">6</ref> we show a counterfactual explanation where the path on the graph has two steps. The source image is classified as "Bedroom" and the target class is "Computer Room". This shows a smooth transition from the source image to the target class: first a person is added (there are already two laptops in the source image), and then two more people and two more laptops.</p></div>
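Multi-step counterfactuals like the one in Figure 6 correspond to cheapest paths in the explanation graph from the source image to any node predicted as the target class. A minimal Dijkstra sketch (the node names, edge weights, and class labels below are hypothetical):

```python
import heapq

def cheapest_counterfactual(edges, labels, source, target_class):
    """Cheapest path in the explanation graph from `source` to any node
    predicted as `target_class`. `edges` maps (u, v) -> weight, i.e. 1/sigma."""
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        if labels[u] == target_class:
            path = [u]  # reconstruct the path back to the source
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []
```

Each hop on the returned path carries its own set of concept edits, which is how a two-step transition like "add a person, then add two more people and two laptops" arises.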
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.3.">Generalized Counterfactuals</head><p>In Figures <ref type="figure" target="#fig_5">7,8</ref> we see two examples of generalized counterfactual explanations on the COCO dataset. As before, each bar's numeric value shows the importance of the insertion (positive) or removal (negative) of that specific concept in the process of transforming a source region of an explanation dataset to a target class. Without revealing the source region and the target class for each figure, we can try to work out what they are just by looking at the most frequent additions and removals. In the first (fig. <ref type="figure" target="#fig_4">7</ref>), which is the more trivial of the two, we see that the most common removals from the source images were concepts relevant to {furniture, bed, animal, carnivore, dog}, while the most common additions were the concepts {home appliance, refrigerator, white goods, consumer goods}. (Figure <ref type="figure">6</ref>: Counterfactual explanation for changing the prediction of the image on the left from "Bedroom" to "Computer Room", which requires two steps.) From this, we can assume that the source region was likely bedroom images (with a bias towards pets) and the target class was probably a kitchen. The true classes were, indeed, "bedroom" and "kitchen". In the second (fig. <ref type="figure" target="#fig_5">8</ref>), we see that the most frequent removals revolved around {instrumentality, artifact, electronic, furniture, telecommunications, TV, broadcasting, kitchen} and the most common additions around {carnivore, animal, mammal, feline, cat, dog}. Knowing that we are dealing with a classifier of rooms and places, we would probably guess a kitchen for the source and a location with domestic animals for the target. The actual classes were "bedroom" targeting "veterinarian", which raises an interesting question: why did we see "kitchen" instead of "bed" in the bedroom class? 
The answer is that no beds were actually removed, since veterinarian office images tend to include beds. On the other hand, our dataset contains a number of studio-apartment bedroom images in which part of a kitchen appears in the photo; such kitchens are mostly absent from a veterinarian's office and had to be removed. Another thing to note is that these examples were not cherry-picked: during our experiments we could, most of the time, estimate the source region and target class just by looking at the edit frequencies. Notably, the most confusing results came when we tested the "computer room" target and found that the generalized counterfactual explanation was very often adding people, but never laptops or computers. After investigating what seemed like a bug, we realized that most images in our dataset classified as a "computer room" had no computers in them, but rather people working in lab-like rooms.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Discussion</head><p>In our experiments, both local and generalized counterfactual explanations proved informative, understandable, and usable. In the CLEVR-Hans3 case (sec. 5.1) we were able to detect the foreknown biases of the classifier, while in the COCO case (sec. 5.2) we even detected unknown biases (for example, the depiction of people was more important than that of laptops for the class "computer room") and gained further insight into the classifier that we had not anticipated (for example, that the classifier expects veterinarian's offices to depict beds among other objects). By comparing with the FACE algorithm (sec. 5.1.2), we got a hint of the merits of explanations that use high-level external terminology instead of low-level features: even without stating the concept edits corresponding to counterfactual paths, we found the resulting images more intuitive and easier to compare with the source images.</p><p>An apparent drawback of the proposed framework, for it to be used in practice, is its reliance on the existence of semantically annotated data (i.e. an explanation dataset). Such datasets do exist for various domains, but they are not abundant. We have identified two ways of mitigating this drawback, which will be explored further in future work. The first is to semantically annotate data automatically by employing information extraction methods, such as object detection or scene-graph generation for images, or other methods that automatically link entities to knowledge (for instance from text to encyclopedic knowledge <ref type="bibr" target="#b32">[33]</ref>). The second way of mitigating this drawback, which is better suited for decision-critical domains such as medicine, is to invest resources in the manual annotation and curation of explanation datasets. 
We believe that having data manually characterized by domain experts could improve user awareness of, and trust in, the generated explanations.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: System Architecture</figDesc><graphic coords="4,89.29,123.22,445.60,298.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Generalized Counterfactual Explanation for the region of the explanation dataset for CLEVR-Hans3 which is classified in class 1, with the target class being class 0</figDesc><graphic coords="12,94.24,84.19,406.80,422.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Generalized Counterfactual Explanation for the region of the explanation dataset for CLEVR-Hans3 which is classified in class 1, with the target class being class 2</figDesc><graphic coords="13,94.24,84.19,406.80,417.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Counterfactual explanation for changing the prediction of the image on the left from "Bedroom" to "Playhouse" by simply adding a child (𝑒 ⊤→Child ) (top), and from "Bedroom" to "Veterinarian's Office" by simply adding a cat (𝑒 ⊤→Cat ) (bottom).</figDesc><graphic coords="14,130.96,200.14,333.35,121.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Generalized Counterfactual Explanations for the region of the explanation dataset for COCO which is classified as "bedroom", with the target class being "kitchen"</figDesc><graphic coords="16,94.24,84.19,406.80,398.88" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Generalized Counterfactual Explanations for the region of the explanation dataset for COCO which is classified as "bedroom", with target class "veterinarian"</figDesc><graphic coords="17,94.24,84.19,406.80,401.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="15,122.32,84.19,350.64,144.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>foreach (𝑥 𝑖 , 𝐶 𝑖 ) ∈ 𝐷 do foreach (𝑥 𝑗 , 𝐶 𝑗 ) ∈ 𝐷 ∖ {(𝑥 𝑖 , 𝐶 𝑖 )} do Initialize Graph 𝐺 𝐶 = (𝑉 𝐶 = 𝐶 𝑖 ∪ 𝐶 𝑗 , 𝐸 𝐶 = ∅); foreach 𝑘 ∈ 𝐶 𝑖 do foreach 𝑙 ∈ 𝐶 𝑗 do //Compute the concept distance using the TBox graph: 𝑑 𝑇 (𝑘, 𝑙) = |ShortestPath(𝐺 𝑇 , 𝑘, 𝑙)| //Add an edge to 𝐺 𝐶 with weight 𝑑 𝑇 : 𝐸 𝐶 = 𝐸 𝐶 ∪ {(𝑘, 𝑙, 𝑑 𝑇 )} end end //Compute the minimum-weight full matching of the bipartite graph 𝐺 𝐶 : {(𝑐 𝑚 , 𝑐 𝑛 )}, 𝑤 = MinFullMatch(𝐺 𝐶 ) //Concept Set Edit Distance: 𝐷 𝑇 (𝐶 𝑖 , 𝐶 𝑗 ) = 𝑤 //Compute the inverse significance:</figDesc><table><row><cell>1/𝜎(𝑖,𝑗) =</cell><cell>𝐷 𝑇 (𝐶 𝑖 , 𝐶 𝑗 ) / |𝐹 (𝑥 𝑖 ) − 𝐹 (𝑥 𝑗 )|</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Code is available at: https://github.com/geofila/Conceptual-Edits-as-Counterfactual-Explanations</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://www.nltk.org/howto/wordnet.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">http://places2.csail.mit.edu/index.htm</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>We have introduced a novel framework for representing and computing counterfactual explanations and have shown some preliminary results. There are many directions we plan to explore in future work. First of all, we plan to expand this framework further into Description Logics, to include roles and individuals, to allow for more complex axioms in the TBox, and to explore how this affects the resulting explanations, both theoretically and practically. Furthermore, we plan to expand our evaluation framework to include datasets from multiple domains and applications, focusing on those where explainability is imperative, such as medical applications. We aim to experiment with providing explanations for text, audio, and tabular data. Ideally, the evaluation framework will include human evaluators in the future. Finally, we aim to study the properties of explanation datasets, as they are defined in our framework and as they have been approached in other works <ref type="bibr" target="#b17">[18]</ref>, <ref type="bibr" target="#b18">[19]</ref>. We will explore the effects of the size of an explanation dataset, for example by using the full COCO dataset. We will also experiment with linking the same explanation dataset to a different TBox (for example ConceptNet instead of WordNet), which will require us to also experiment with different notions of "conceptual" or "semantic" distance.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A survey of methods for explaining black box models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ruggieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Turini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page">42</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Arrieta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">D</forename><surname>Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bennetot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barbado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gil-Lopez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Molina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Benjamins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chatila</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Herrera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Inf. Fusion</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="82" to="115" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)</title>
		<author>
			<persName><forename type="first">A</forename><surname>Adadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Berrada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="52138" to="52160" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">European union regulations on algorithmic decision-making and a &quot;right to explanation</title>
		<author>
			<persName><forename type="first">B</forename><surname>Goodman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">R</forename><surname>Flaxman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI Mag</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="page" from="50" to="57" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Counterfactual explanations without opening the black box: Automated decisions and the GDPR</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wachter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">D</forename><surname>Mittelstadt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Russell</surname></persName>
		</author>
		<idno>CoRR abs/1711.00399</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Counterfactuals</title>
		<author>
			<persName><forename type="first">D</forename><surname>Lewis</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">FACE: feasible and actionable counterfactual explanations</title>
		<author>
			<persName><forename type="first">R</forename><surname>Poyiadzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sokol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Santos-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">D</forename><surname>Bie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Flach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AIES, ACM</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="344" to="350" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Counterfactual visual explanations</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ernst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Batra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Parikh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Machine Learning Research</title>
				<meeting>Machine Learning Research<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">97</biblScope>
			<biblScope unit="page" from="2376" to="2384" />
		</imprint>
	</monogr>
	<note>ICML</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Generating natural counterfactual visual explanations</title>
		<author>
			<persName><forename type="first">W</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Oyama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kurihara</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI, ijcai.org</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="5204" to="5205" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Vice: visual counterfactual explanations for machine learning models</title>
		<author>
			<persName><forename type="first">O</forename><surname>Gomez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Holter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bertini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IUI, ACM</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="531" to="535" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Counterfactual explanations for machine learning: A review</title>
		<author>
			<persName><forename type="first">S</forename><surname>Verma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Dickerson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hines</surname></persName>
		</author>
		<idno>CoRR abs/2010.10596</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A survey of contrastive and counterfactual explanation generation methods for explainable artificial intelligence</title>
		<author>
			<persName><forename type="first">I</forename><surname>Stepin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Alonso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Catalá</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pereira-Fariña</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="11974" to="12001" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Hogan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Blomqvist</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cochez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Amato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>De Melo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gutiérrez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E L</forename><surname>Gayo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kirrane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Neumaier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Polleres</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Navigli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Ngomo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Rashid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rula</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Schmelzeisen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Sequeda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Staab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zimmermann</surname></persName>
		</author>
		<idno>CoRR abs/2003.02320</idno>
		<title level="m">Knowledge graphs</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">On the role of knowledge graphs in explainable ai</title>
		<author>
			<persName><forename type="first">F</forename><surname>Lecue</surname></persName>
		</author>
		<idno type="DOI">10.3233/SW-190374</idno>
	</analytic>
	<monogr>
		<title level="j">Semantic Web</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="1" to="11" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">On the integration of knowledge graphs into deep learning models for a more comprehensible ai-three challenges for future research</title>
		<author>
			<persName><forename type="first">G</forename><surname>Futia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vetrò</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">122</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Exploring knowledge graphs in an interpretable composite approach for text entailment</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">S</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Freitas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Handschuh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence</title>
				<meeting>the AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="7023" to="7030" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">WordNet: A lexical database for English</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Miller</surname></persName>
		</author>
		<idno type="DOI">10.1145/219717.219748</idno>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="page" from="39" to="41" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Semantic queries explaining opaque machine learning classifiers</title>
		<author>
			<persName><forename type="first">J</forename><surname>Liartis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Dervakos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Menis-Mastromichalakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chortaras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Stamou</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">DAO-XAI</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<imprint>
			<biblScope unit="volume">2998</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Dervakos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Menis-Mastromichalakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chortaras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Stamou</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2202.03971</idno>
		<title level="m">Computing rule-based explanations of machine learning classifiers using knowledge graphs</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">A framework for explainable deep neural models using external knowledge graphs</title>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">A</forename><surname>Daniels</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">D</forename><surname>Frank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Menart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raymer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II</title>
				<imprint>
			<publisher>International Society for Optics and Photonics</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">11413</biblScope>
			<biblScope unit="page">114131C</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A symbolic approach for explaining errors in image classification tasks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Alirezaie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Längkvist</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sioutis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Loutfi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Papers and Documents of the IJCAI-ECAI-2018 Workshop on</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Knowledge graphs as tools for explainable machine learning: a survey</title>
		<author>
			<persName><forename type="first">I</forename><surname>Tiddi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Schlobach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="page">103627</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Refinement-based similarity measure over DL conjunctive queries</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Sánchez-Ruiz-Granados</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ontañón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>González-Calero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Plaza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICCBR</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">7969</biblScope>
			<biblScope unit="page" from="270" to="284" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m">The Description Logic Handbook: Theory, Implementation, and Applications</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Baader</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Calvanese</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>McGuinness</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Nardi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><forename type="middle">F</forename><surname>Patel-Schneider</surname></persName>
		</editor>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Right for the right concept: Revising neurosymbolic concepts by interacting with their explanations</title>
		<author>
			<persName><forename type="first">W</forename><surname>Stammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Schramowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kersting</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CVPR, Computer Vision Foundation / IEEE</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="3619" to="3629" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Microsoft COCO: common objects in context</title>
		<author>
			<persName><forename type="first">T</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Maire</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Belongie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hays</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Perona</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ramanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dollár</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">L</forename><surname>Zitnick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ECCV (5)</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">8693</biblScope>
			<biblScope unit="page" from="740" to="755" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m" type="main">ConceptNet 5.5: An open multilingual graph of general knowledge</title>
		<author>
			<persName><forename type="first">R</forename><surname>Speer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Havasi</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>AAAI, AAAI Press</publisher>
			<biblScope unit="page" from="4444" to="4451" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">SNOMED clinical terms: overview of the development process and project status</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Q</forename><surname>Stearns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Price</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Spackman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Y</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AMIA Symposium</title>
				<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Semantic enrichment of pretrained embedding output for unsupervised IR</title>
		<author>
			<persName><forename type="first">E</forename><surname>Dervakos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Filandrianos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Thomas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mandalios</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zerva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Stamou</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">2846</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">An algorithm to solve the m × n assignment problem in expected time O(mn log n)</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Karp</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Networks</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="143" to="152" />
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Deep residual learning for image recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CVPR, IEEE Computer Society</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="770" to="778" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Places: A 10 million image database for scene recognition</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">À</forename><surname>Lapedriza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Khosla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Pattern Anal. Mach. Intell</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="page" from="1452" to="1464" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Wikify! Linking documents to encyclopedic knowledge</title>
		<author>
			<persName><forename type="first">R</forename><surname>Mihalcea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Csomai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the sixteenth ACM conference on Conference on information and knowledge management</title>
				<meeting>the sixteenth ACM conference on Conference on information and knowledge management</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="233" to="242" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
