<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Understanding CNN Hidden Neuron Activations using Concept Induction over Background Knowledge</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Abhilekha</forename><surname>Dalal</surname></persName>
							<email>adalal@ksu.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">Kansas State University</orgName>
								<address>
									<settlement>Manhattan KS</settlement>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Understanding CNN Hidden Neuron Activations using Concept Induction over Background Knowledge</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">0194D514EE37D0692B9855C09AE70C73</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:57+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Explainable AI</term>
					<term>Concept Induction</term>
					<term>Convolutional Neural Network</term>
					<term>Knowledge Graph</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A major challenge in Explainable AI is interpreting hidden neuron activations accurately. These interpretations can reveal what a deep learning system perceives as relevant in the input data, thereby addressing the black-box nature of such systems. The state of the art indicates that hidden node activations can be interpretable by humans, but there is a lack of systematic, automated methods to verify these interpretations, in particular methods that utilize substantial background knowledge and are inherently explainable. In this proposal, we introduce a novel model-agnostic post-hoc Explainable AI method based on a Wikipedia-derived concept hierarchy with approximately 2 million classes. Our approach utilizes OWL-reasoning-based Concept Induction for explanation generation and is compared with off-the-shelf explanation methods based on pre-trained multimodal models. Our results demonstrate that our method automatically provides meaningful class expressions as explanations for individual neurons in the dense layer of a Convolutional Neural Network, outperforming prior work in both quantitative and qualitative aspects.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Deep learning has revolutionized various fields such as image classification <ref type="bibr" target="#b0">[1]</ref>, speech recognition <ref type="bibr" target="#b1">[2]</ref>, translation <ref type="bibr" target="#b2">[3]</ref>, drug design <ref type="bibr" target="#b3">[4]</ref>, medical diagnosis <ref type="bibr" target="#b4">[5]</ref>, and climate sciences <ref type="bibr" target="#b5">[6]</ref>. However, the opaque nature of deep learning systems poses challenges in applications involving automated decisions and safety-critical systems. For instance, concerns arise from incidents like Steve Wozniak's accusation of gender discrimination in Apple Card credit limits and biased image search results for "CEOs" <ref type="bibr" target="#b6">[7]</ref>. Safety-critical areas like self-driving cars <ref type="bibr" target="#b7">[8]</ref> and biomedical applications <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref> are also vulnerable to adversarial attacks <ref type="bibr" target="#b10">[11]</ref>, including altering classification results <ref type="bibr" target="#b10">[11]</ref> and manipulating the order of training images <ref type="bibr" target="#b11">[12]</ref>. Some attacks are hard to detect post facto, posing significant risks <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref>.</p><p>Problem Statement: While statistical evaluations are standard for assessing deep learning performance, they fall short of explaining specific system behaviors <ref type="bibr" target="#b14">[15]</ref>. Therefore, developing robust explanation methods for deep learning systems remains crucial. Despite significant progress in this area (see <ref type="bibr">Section 4</ref>), current approaches often rely on a limited set of predefined explanation categories. 
This reliance on human-selected categories is problematic, as it rests on the unsupported assumption that such categories are suitable for explaining deep learning systems. Some methods leverage deep learning models, such as LLMs, to generate explanations <ref type="bibr" target="#b15">[16]</ref>, introducing another layer of opacity. Additionally, state-of-the-art explanation systems often require modified deep learning architectures, which can lead to reduced system performance compared to unmodified versions <ref type="bibr" target="#b16">[17]</ref>.</p><p>Importance: The importance of solving this challenge cannot be overstated. Transparent and interpretable AI systems are crucial for building trust, especially in domains like healthcare, finance, and autonomous vehicles. By providing explanations, we empower users, including non-experts, to understand AI decisions, fostering better acceptance and adoption. Advancing explainable AI contributes to interdisciplinary collaboration and can enhance societal benefits while mitigating ethical risks associated with AI deployment. Therefore, it is imperative to address the challenge of developing transparent and interpretable explanation methods for deep learning systems.</p><p>The subsequent section presents the research question and objectives, building on the above core principles. Section 2.1 describes the contributions we have made, focusing on the methods we use or plan to use to support them, and Section 3 then describes the results obtained thus far.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Research Question and Contributions</head><p>Research Question: How can we develop an effective approach to explainable deep learning that assigns human-understandable interpretations to the activations of hidden neurons in a deep learning model?</p><p>This proposal outlines an approach that uses Concept Induction, i.e., formal logical deductive reasoning <ref type="bibr" target="#b17">[18]</ref>, to automatically provide meaningful explanations for hidden neuron activations in a Convolutional Neural Network (CNN) architecture for image scene classification (on the ADE20K dataset <ref type="bibr" target="#b18">[19]</ref>), using a class hierarchy of about 2 × 10⁶ classes, derived from Wikipedia, as the pool of categories <ref type="bibr" target="#b19">[20]</ref>. The hypothesis driving the work outlined in this proposal is stated as follows.</p><p>Hypothesis: Concept Induction analysis with large-scale background knowledge yields meaningful labels that stably explain neuron activations in the hidden layer of a CNN architecture.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Contributions and Methodology</head><p>To test the above-stated hypothesis, the following objectives, together with the methodology followed or planned, are outlined:</p><p>Objective 1: Employing Concept Induction and a Wikipedia Knowledge Graph to Assign Meaningful Labels to Hidden Neurons' Activation.</p><p>We explored and evaluated three concrete methods (Concept Induction, CLIP-Dissect <ref type="bibr" target="#b15">[16]</ref>, GPT-4 <ref type="bibr" target="#b20">[21]</ref>) to generate high-level concepts for explaining hidden neuron activations. Our comprehensive methodology for Objective 1 is detailed in our paper <ref type="bibr" target="#b21">[22]</ref>.</p><p>1. Prep: Scenario and CNN Training - Utilizing the annotated ADE20K dataset <ref type="bibr" target="#b18">[19]</ref>, we trained ResNet50V2 for scene classification, achieving an accuracy of 86.46%. The annotations are only used for generating label hypotheses, not for CNN training. While the highest possible accuracy is not critical for our investigation, it is important that models be practically applicable. 2. Concept Induction - A Concept Induction system <ref type="bibr" target="#b17">[18]</ref> accepts three inputs: a positive set 𝑃 and a negative set 𝑁 of images from ADE20K, and a knowledge base 𝐾, all expressed as description logic theories, where all examples 𝑥 ∈ 𝑃 ∪ 𝑁 occur as individuals (constants) in 𝐾. It returns description logic class expressions 𝐸 such that 𝐾 |= 𝐸(𝑝) for all 𝑝 ∈ 𝑃 and 𝐾 ̸ |= 𝐸(𝑞) for all 𝑞 ∈ 𝑁 . For scalability, we used the heuristic Concept Induction system ECII <ref type="bibr" target="#b22">[23]</ref> with the Wikipedia class hierarchy <ref type="bibr" target="#b19">[20]</ref>. We included the images in the background knowledge by associating object annotations from ADE20K images with classes in the hierarchy, using the Levenshtein string similarity metric <ref type="bibr" target="#b23">[24]</ref> with edit distance 0, i.e., exact matches.</p></div>
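The annotation-to-class linking in Step 2 can be sketched as follows. This is a minimal illustration (not the authors' implementation), with hypothetical annotation and class names; an edit-distance threshold of 0 amounts to exact (case-insensitive) matching, as used in the paper.

```python
# Sketch: linking ADE20K object annotations to background-knowledge classes
# via the Levenshtein metric. Data and names here are hypothetical.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def link_annotations(annotations, hierarchy_classes, max_distance=0):
    """Map each image annotation to all hierarchy classes within max_distance."""
    links = {}
    for ann in annotations:
        matches = [c for c in hierarchy_classes
                   if levenshtein(ann.lower(), c.lower()) <= max_distance]
        if matches:
            links[ann] = matches
    return links

# Toy usage: "building" links to "Building"; "sky" to "Sky" but not "Skyline".
links = link_annotations(["building", "sky"], ["Building", "Sky", "Skyline"])
```

In practice the hierarchy side would be the roughly 2 million Wikipedia-derived classes, so an indexed or hashed exact-match lookup would replace the pairwise loop.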
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Generating Label Hypotheses</head><p>a) In Concept Induction, we used 1,370 ADE20K images with our trained ResNet50V2, extracting activations from the dense layer with 64 neurons. Positive examples (𝑃 ) are images activating the neuron with &gt; 80% of its maximum activation; negative examples (𝑁 ) are those activating it with &lt; 20% of its maximum, or not at all. ECII generates the target label for each neuron based on these sets and the background knowledge. b) CLIP-Dissect employs the top 20,000 English vocabulary words as concepts. Activations from our trained ResNet50V2 model for ADE20K test images were collected, resulting in a matrix (Number of Images × 64). Using these inputs, CLIP-Dissect assigns a label to each neuron such that the neuron is most activated when the corresponding concept is present in the image, resulting in 22 distinct concepts across 64 neurons. c) GPT-4 - Leveraging GPT-4, we adopt a methodology akin to <ref type="bibr" target="#b24">[25]</ref> for concept generation to differentiate image classes <ref type="bibr" target="#b25">[26]</ref>. We input image annotations from positive (𝑃 ) and negative (𝑁 ) sets into GPT-4 with prompts to discern concepts unique to 𝑃 . The prompt "Generate top three classes of objects/general scenarios that better represent what images in the positive set (𝑃 ) have but the images in the negative set (𝑁 ) do not" yields three concepts per neuron, from which we select one per class for assessment.</p><p>Objective 2: Automate Concept Label Association for Input Images using Neuron Ensembles and Non-target Activation Probabilities.</p><p>1. Concept Associations and Non-Target Activations - Step 3 of Objective 1 generates labels for neuron activations. Each neuron's label is its target concept; all others are considered non-target concepts. 
This analysis focuses on the top three ECII responses, assessing neuron activation for non-target concepts at various cut-off values relative to each neuron's maximum activation value: &gt; 0, &gt; 20% of max, &gt; 40% of max, and &gt; 60% of max. The goal is to establish strong associations between concepts and neuron activations, understanding which concepts trigger specific neurons and to what extent. 2. Neuron Ensembles for Concept Associations - Input information can be distributed across simultaneously activated neurons, necessitating the examination of neuron ensemble activations using the previously established cut-off values. However, a scale challenge arises: there are 2⁶⁴ potential neuron ensembles for just 64 neurons. To address this, we propose combining neurons activated for semantically related labels (from the top-3 ECII responses). For instance, if "building" activates both neuron 0 and neuron 63, we assess all images activating both neurons 0 and 63 for the specified cut-off values. In cases where a concept activates more than two neurons, our analysis encompasses all possible pairs, evaluating target and non-target activations. We proceed with concepts, including neuron ensembles, that exhibit target activation exceeding 80% for further analysis. 3. Validating Neuron-Concept Associations - After completing Step 1 and Step 2, we obtain probabilities for non-target concepts across all concepts, including those activating single neurons as well as neuron ensembles. This allows us to identify potential concepts and assess associated error margins. To verify or reject these concepts, we revisit the ADE20K dataset. Using a subset of 1,050 randomly chosen images, we conduct a user study via Amazon Mechanical Turk (MTurk) <ref type="bibr" target="#b26">[27]</ref> to annotate images with target concepts. We then cross-reference these designated concepts with the image annotations obtained from the MTurk study. 
We evaluate the likelihood of neuron activations for non-target concepts. 4. Developing an Automated System - We propose developing an automated system to streamline the entire process, enabling scalability to larger datasets and exploration of a broader parameter range. The system would comprise three components: (i) concept induction, which generates class expressions/responses ranked by coverage score; (ii) neuron activation, which calculates activation for target and non-target concepts (including neuron ensembles) at various cut-off values; and (iii) concept validation, which validates the generated concepts. This automated system would analyze new images, generating a list of potential concepts with associated probabilities. Users could review the concepts and select the most relevant ones for the image. The automated approach offers several advantages, including speed, efficiency, scalability to larger datasets, and exploration of diverse parameter settings.</p></div>
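The positive/negative example selection described in Step a) above can be sketched as follows. This is a hypothetical illustration of the thresholding rule only (80% / 20% of each neuron's maximum activation); the actual pipeline operates on the dense-layer activations of the trained ResNet50V2.

```python
# Sketch (hypothetical data): splitting images into positive set P and
# negative set N for one dense-layer neuron, per the thresholds in the text.

def split_examples(activations, hi=0.8, lo=0.2):
    """activations: {image_id: activation value} for a single neuron.

    P: images activating the neuron above hi * max activation.
    N: images activating it below lo * max activation, or not at all.
    Images in between belong to neither set.
    """
    max_act = max(activations.values())
    P = [img for img, a in activations.items() if a > hi * max_act]
    N = [img for img, a in activations.items() if a < lo * max_act]
    return P, N

# Toy usage; img5 falls between the thresholds and is discarded.
acts = {"img1": 9.5, "img2": 8.6, "img3": 1.2, "img4": 0.0, "img5": 5.0}
P, N = split_examples(acts)  # P and N then feed the ECII Concept Induction run
```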
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Evaluation and Results</head><p>Objective 1: The three approaches generate label hypotheses for all studied neurons, which we validated using new images. We search Google Images using each target label as keywords and collect 200 images per label with Imageye<ref type="foot" target="#foot_0">1</ref>. These images are split into 80% for evaluation and 20% for statistical analysis. We then determine whether the target neuron activates when the retrieval label matches the target label, and whether any other neurons activate. Table <ref type="table" target="#tab_0">1</ref> (a selective representation due to space constraints; the complete version is available at <ref type="bibr" target="#b21">[22]</ref>) shows the percentage of target images that activated each neuron. A target label is confirmed if it activates for ≥ 80% of its target images, regardless of its activation for non-target images. The detailed paper can be found at <ref type="bibr" target="#b21">[22]</ref>. Statistical Evaluation and Results: After generating confirmed labels from all three approaches, we assess node labeling using the remaining images, treating each neuron-label pair in Table <ref type="table" target="#tab_0">1</ref> as a hypothesis. Concept Induction, CLIP-Dissect, and GPT-4 produce 20, 8, and 27 hypotheses, respectively, based on confirmed labels. Using the Mann-Whitney U test, we compared activation strengths between images retrieved using the target label and those retrieved using other keywords. Table <ref type="table" target="#tab_1">2</ref> shows a selective representation of the results obtained through the Mann-Whitney U test. Concept Induction consistently outperforms the other methods, as evidenced by the Mann-Whitney U results and statistical analysis. For most neurons, activation values of target images significantly exceed those of non-target images (with 𝑝 &lt; 0.00001). 
Concept Induction rejects 19 out of 20 null hypotheses at 𝑝 &lt; 0.05, CLIP-Dissect rejects all 8 null hypotheses, and GPT-4 rejects 25 out of 27 null hypotheses at 𝑝 &lt; 0.05. More details can be found in <ref type="bibr" target="#b21">[22]</ref>.</p><p>Objective 2: We will conduct a comprehensive statistical evaluation using the Mann-Whitney U (MWU) test for each concept across different cut-off values. This evaluation compares the activation strengths of non-target concepts retrieved through Google Images (from Objective 1) with those retrieved from the ADE20K dataset. The hypothesis under consideration is that the activation strength of non-target concepts from Google Images exceeds that from the ADE20K dataset; the null hypothesis (H0) posits that the two activation strengths are equal. For each category of cut-off values, concepts exhibiting a significant difference in activation strengths (𝑝 &lt; 0.005) will undergo further validation through the Wilcoxon signed-rank test across all cut-off values as a collective unit. By identifying concepts with significantly higher activation strengths, we refine our approach and enhance the accuracy of concept-label associations.</p></div>
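The one-sided Mann-Whitney U comparison used above can be sketched as follows, with hypothetical activation values standing in for the measured ones. A neuron-label hypothesis is retained when the target activations are stochastically greater than the non-target activations at the 0.05 level.

```python
# Sketch of the statistical evaluation for one neuron-label hypothesis:
# a one-sided Mann-Whitney U test on activation strengths. Data are
# hypothetical, not the paper's measurements.

from scipy.stats import mannwhitneyu

# Activations of the neuron on images retrieved with its target label...
target_acts = [8.1, 7.9, 9.2, 8.8, 7.5, 9.0, 8.4, 8.9]
# ...and on images retrieved with other (non-target) keywords.
non_target_acts = [1.2, 0.0, 2.1, 0.8, 1.9, 0.4, 1.5, 2.3]

# H0: target activations are not stochastically greater than non-target ones.
stat, p = mannwhitneyu(target_acts, non_target_acts, alternative="greater")
reject_h0 = p < 0.05  # True here: the neuron-label hypothesis is confirmed
```

With complete separation of the two samples, as in this toy data, the U statistic equals n1 × n2 = 64 and the exact one-sided p-value is far below 0.05.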
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Related Work</head><p>Recent advances in deep learning <ref type="bibr" target="#b27">[28]</ref>, its wide usage in nearly every field, and its opaque nature make explainable AI more important than ever, and there are multiple ongoing efforts to demystify deep learning <ref type="bibr" target="#b28">[29,</ref><ref type="bibr" target="#b29">30,</ref><ref type="bibr" target="#b30">31]</ref>. Existing explanation methods can be categorized as based on understanding input data (features), e.g., feature summarizing <ref type="bibr" target="#b31">[32,</ref><ref type="bibr" target="#b32">33]</ref>, or as based on the model's internal unit representations, e.g., node summarizing <ref type="bibr" target="#b33">[34,</ref><ref type="bibr" target="#b10">11]</ref>. These methods can be further categorized as model-specific <ref type="bibr" target="#b31">[32]</ref> or model-agnostic <ref type="bibr" target="#b32">[33]</ref>. Another kind of approach relies on human interpretation of returned explanatory data, such as counterfactual questions <ref type="bibr" target="#b34">[35]</ref>.</p><p>We focus on understanding the internal units of neural-network-based deep learning models. Prior work has shown that internal units may indeed represent human-understandable concepts <ref type="bibr" target="#b33">[34,</ref><ref type="bibr" target="#b10">11]</ref>, but these approaches often require resource-intensive methods like semantic segmentation <ref type="bibr" target="#b35">[36]</ref> or explicit concept annotations <ref type="bibr" target="#b36">[37]</ref>. There has been research utilizing Semantic Web data for explaining deep learning models <ref type="bibr" target="#b37">[38,</ref><ref type="bibr" target="#b38">39]</ref>, and Concept Induction for generating explanations <ref type="bibr" target="#b39">[40,</ref><ref type="bibr" target="#b40">41]</ref>. 
However, that work mainly focused on analyzing how inputs relate to outputs and on generating explanations for the whole system, while we focus on understanding internal node activations.</p><p>CLIP-Dissect <ref type="bibr" target="#b15">[16]</ref>, the work closest to ours, takes a different approach: it utilizes the pre-trained CLIP model, employing zero-shot learning to associate images with labels. Another related work, Label-Free Concept Bottleneck Models <ref type="bibr" target="#b25">[26]</ref>, builds upon CLIP-Dissect, using GPT-4 <ref type="bibr" target="#b20">[21]</ref> for concept set generation. However, CLIP-Dissect faces challenges in accurately predicting output labels based on concepts in the last hidden layer and in transferring to other modalities or domain-specific applications. The Label-Free approach inherits these limitations and may compromise explainability due to its use of a concept derivation method that lacks inherent explainability.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>Concept Induction, leveraging large-scale ontological background knowledge, provides meaningful labeling of hidden neuron activations, validated by statistical analysis. This allows us to pinpoint concepts that strongly trigger neuron responses, effectively explaining neuron activations. Our approach introduces novel possibilities for diverse label categories. Comparative analysis against CLIP-Dissect and GPT-4 showcases Concept Induction's superiority, especially in settings with labeled data. Ultimately, our work aims to thoroughly analyze hidden layers in deep learning systems, facilitating the interpretation of activations as implicit input features and explaining system input-output behavior. Moving forward, future work will focus on enhancing Concept Induction's scalability and efficiency, enabling its broader applicability across various domains.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Generated label hypotheses from all three approaches,Bold denotes neurons whose labels are considered confirmed(the full version can be found in our work at<ref type="bibr" target="#b21">[22]</ref>).</figDesc><table><row><cell></cell><cell></cell><cell cols="2">Concept Induction</cell><cell></cell><cell></cell></row><row><cell cols="6">Neuron Obtained Label(s) Images Coverage Target % Non-Target %</cell></row><row><cell>0</cell><cell>building</cell><cell>164</cell><cell>0.997</cell><cell>89.024</cell><cell>72.328</cell></row><row><cell>1</cell><cell>cross_walk</cell><cell>186</cell><cell>0.994</cell><cell>88.710</cell><cell>28.923</cell></row><row><cell>11</cell><cell>river_water</cell><cell>157</cell><cell>0.995</cell><cell>31.847</cell><cell>22.309</cell></row><row><cell></cell><cell></cell><cell 
cols="2">CLIP-Dissect</cell><cell></cell><cell></cell></row><row><cell>0</cell><cell>restaurants</cell><cell>140</cell><cell></cell><cell>55.000</cell><cell>59.295</cell></row><row><cell>3</cell><cell>dresser</cell><cell>171</cell><cell></cell><cell>95.322</cell><cell>66.199</cell></row><row><cell>7</cell><cell>bathroom</cell><cell>153</cell><cell></cell><cell>93.333</cell><cell>44.113</cell></row><row><cell></cell><cell></cell><cell>GPT-4</cell><cell></cell><cell></cell><cell></cell></row><row><cell>0</cell><cell>Urban Landscape</cell><cell>176</cell><cell></cell><cell>54.545</cell><cell>59.078</cell></row><row><cell>1</cell><cell>Street Scene</cell><cell>164</cell><cell></cell><cell>92.073</cell><cell>29.884</cell></row><row><cell>3</cell><cell>Bedroom</cell><cell>165</cell><cell></cell><cell>97.576</cell><cell>62.967</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Statistical evaluation details for all three approaches (the full version can be found in our work at <ref type="bibr" target="#b21">[22]</ref>).</figDesc><table><row><cell>Concept Induction</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://chrome.google.com/webstore/detail/image-downloader-imageye/agionbommeaifngbhincahgmoflcikhm</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The author acknowledges advisor Dr. Pascal Hitzler and partial funding under National Science Foundation grants 2119753 and 2333782.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Image classification using convolutional neural networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ramprasath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Anand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hariharan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Pure and Applied Mathematics</title>
		<imprint>
			<biblScope unit="volume">119</biblScope>
			<biblScope unit="page" from="1307" to="1319" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Towards end-to-end speech recognition with recurrent neural networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Graves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jaitly</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning, The Proceedings of Machine Learning Research</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1764" to="1772" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Joint language and translation modeling with recurrent neural networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Auli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Galley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Quirk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zweig</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2013 Conference on Empirical Methods in Natural Language Processing</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1044" to="1054" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Generating focused molecule libraries for drug discovery with recurrent neural networks</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Segler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kogej</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tyrchan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Waller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACS central science</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="120" to="131" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Artificial intelligent model with neural network machine learning for the diagnosis of orthognathic surgery</title>
		<author>
			<persName><forename type="first">H.-I</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-K</forename><surname>Jung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-H</forename><surname>Baek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">H</forename><surname>Lim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-J</forename><surname>Ahn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I.-H</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T.-W</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Craniofacial Surgery</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="1986" to="1989" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Application of deep convolutional neural networks for detecting extreme weather in climate datasets</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Racah</surname></persName>
		</author>
		<author>
			<persName><surname>Prabhat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Correa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Khosrowshahi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lavers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kunkel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wehner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">D</forename><surname>Collins</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/1605.01156" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">A</forename><surname>Hamilton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Apple cofounder Steve Wozniak says Apple Card offered his wife a lower credit limit</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">End-to-end learning for lane keeping of self-driving cars</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Intelligent Vehicles Symposium (IV)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="page" from="1856" to="1860" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Deepscreen: high performance drug-target interaction prediction with convolutional neural networks using 2-d structural compound representations</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Rifaioglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Nalbat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Atalay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cetin-Atalay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Doğan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Chemical Science</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="2531" to="2557" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Deep neural networks for COVID-19 detection and diagnosis using images and acoustic-based techniques: a recent review</title>
		<author>
			<persName><forename type="first">W</forename><surname>Hariri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Narin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Soft Computing</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="15345" to="15362" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Understanding the role of individual units in a deep neural network</title>
		<author>
			<persName><forename type="first">D</forename><surname>Bau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-Y</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Strobelt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lapedriza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the National Academy of Sciences</title>
		<imprint>
			<biblScope unit="volume">117</biblScope>
			<biblScope unit="page" from="30071" to="30078" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Manipulating SGD with data ordering attacks</title>
		<author>
			<persName><forename type="first">I</forename><surname>Shumailov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Shumaylov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kazhdan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Papernot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Erdogdu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Anderson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems (NeurIPS)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="18021" to="18032" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Planting undetectable backdoors in machine learning models [extended abstract]</title>
		<author>
			<persName><forename type="first">S</forename><surname>Goldwasser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vaikuntanathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Zamir</surname></persName>
		</author>
		<idno type="DOI">10.1109/FOCS54457.2022.00092</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE Annual Symposium on Foundations of Computer Science (FOCS)</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="931" to="942" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Clifford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Shumailov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mullins</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2210.00108</idno>
		<title level="m">ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">What does explainable AI really mean? a new conceptualization of perspectives</title>
		<author>
			<persName><forename type="first">D</forename><surname>Doran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Schulz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Besold</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<publisher>CEUR</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">2071</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">CLIP-Dissect: Automatic description of neuron representations in deep vision networks</title>
		<author>
			<persName><forename type="first">T</forename><surname>Oikarinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T.-W</forename><surname>Weng</surname></persName>
		</author>
		<ptr target="https://openreview.net/forum?id=iPWiwWHc1V" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Learning Representations, ICLR</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Concept embedding models: Beyond the accuracy-explainability trade-off</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Zarlenga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Barbiero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Ciravegna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Marra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Diligenti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Shams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Precioso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Melacci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Weller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Lió</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jamnik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems (NeurIPS)</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Concept learning in description logics using refinement operators</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10994-009-5146-2</idno>
		<ptr target="https://doi.org/10.1007/s10994-009-5146-2" />
	</analytic>
	<monogr>
		<title level="j">Mach. Learn</title>
		<imprint>
			<biblScope unit="volume">78</biblScope>
			<biblScope unit="page" from="203" to="250" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Semantic understanding of scenes through the ADE20K dataset</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Puig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fidler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barriuso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">127</biblScope>
			<biblScope unit="page" from="302" to="321" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Wikipedia knowledge graph for explainable AI</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nadella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">S</forename><surname>Minnery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Juvina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Raymer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">R</forename><surname>Aue</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-65384-2_6</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-65384-2_6" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Knowledge Graphs and Semantic Web Second Iberoamerican Conference and First Indo-American Conference (KGSWC)</title>
		<title level="s">Communications in Computer and Information Science</title>
		<editor>
			<persName><forename type="first">B</forename><surname>Villazón-Terrazas</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Ortiz-Rodríguez</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Tiwari</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Shandilya</surname></persName>
		</editor>
		<meeting>the Knowledge Graphs and Semantic Web Second Iberoamerican Conference and First Indo-American Conference (KGSWC)</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">1232</biblScope>
			<biblScope unit="page" from="72" to="87" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Achiam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Adler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Akkaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">L</forename><surname>Aleman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Almeida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Altenschmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Altman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Anadkat</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2303.08774</idno>
		<title level="m">GPT-4 technical report</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">On the value of labeled data and symbolic methods for hidden neuron activation analysis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dalal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rayan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">Y</forename><surname>Vasserman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2404.13567</idno>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Efficient concept induction for description logics</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v33i01.33013036</idno>
		<ptr target="https://doi.org/10.1609/aaai.v33i01.33013036" />
	</analytic>
	<monogr>
		<title level="m">The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI), The Thirty-First Innovative Applications of Artificial Intelligence Conference (IAAI), The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI)</title>
				<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="3036" to="3043" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">On the minimal redundancy of binary error-correcting codes</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">I</forename><surname>Levenshtein</surname></persName>
		</author>
		<idno type="DOI">10.1016/S0019-9958(75)90300-9</idno>
		<ptr target="https://doi.org/10.1016/S0019-9958(75)90300-9" />
	</analytic>
	<monogr>
		<title level="j">Inf. Control</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="268" to="291" />
			<date type="published" when="1975">1975</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">Concept induction using LLMs: a user experiment for assessment</title>
		<author>
			<persName><forename type="first">A</forename><surname>Barua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Widmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2404.11875</idno>
		<ptr target="https://arxiv.org/abs/2404.11875" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Label-free concept bottleneck models</title>
		<author>
			<persName><forename type="first">T</forename><surname>Oikarinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T.-W</forename><surname>Weng</surname></persName>
		</author>
		<ptr target="https://openreview.net/forum?id=FlCg47MNvBA" />
	</analytic>
	<monogr>
		<title level="m">The Eleventh International Conference on Learning Representations</title>
				<imprint>
			<publisher>ICLR</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Amazon mechanical turk: A research tool for organizations and information systems scholars</title>
		<author>
			<persName><forename type="first">K</forename><surname>Crowston</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Shaping the Future of ICT Research. Methods and Approaches</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Bhattacherjee</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Fitzgerald</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="210" to="221" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Deep learning</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Lecun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hinton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">521</biblScope>
			<biblScope unit="page" from="436" to="444" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">XAI -explainable artificial intelligence</title>
		<author>
			<persName><forename type="first">D</forename><surname>Gunning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stefik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Stumpf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G.-Z</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science Robotics</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page">7120</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)</title>
		<author>
			<persName><forename type="first">A</forename><surname>Adadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Berrada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="52138" to="52160" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence: a comprehensive review</title>
		<author>
			<persName><forename type="first">D</forename><surname>Minh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">F</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">N</forename><surname>Nguyen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence Review</title>
		<imprint>
			<biblScope unit="page" from="1" to="66" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<title level="m" type="main">Grad-CAM: Why did you say that?</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Selvaraju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Vedantam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cogswell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Parikh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Batra</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1611.07450</idno>
		<ptr target="http://arxiv.org/abs/1611.07450" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">&quot;Why Should I Trust You?&quot;: Explaining the Predictions of Any Classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
		<idno type="DOI">10.1145/2939672.2939778</idno>
		<ptr target="https://doi.org/10.1145/2939672.2939778" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;16</title>
				<meeting>the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;16<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1135" to="1144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Interpreting deep visual representations via network dissection</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="page" from="2131" to="2145" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<title level="m" type="main">Counterfactual explanations without opening the black box: Automated decisions and the GDPR</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wachter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">D</forename><surname>Mittelstadt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Russell</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1711.00399</idno>
		<ptr target="http://arxiv.org/abs/1711.00399" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Unified perceptual parsing for scene understanding</title>
		<author>
			<persName><forename type="first">T</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the European Conference on Computer Vision (ECCV)</title>
				<meeting>the European conference on computer vision (ECCV)</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="418" to="434" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV)</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wattenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gilmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wexler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">B</forename><surname>Viégas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sayres</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v80/kim18d.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Machine Learning (ICML)</title>
				<editor>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Dy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Krause</surname></persName>
		</editor>
		<meeting>the International Conference on Machine Learning (ICML)<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="2673" to="2682" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Using ontologies to enhance human understandability of global post-hoc explanations of black-box models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Confalonieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Weyde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Besold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Del Prado Martín</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">296</biblScope>
			<biblScope unit="page">103471</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Explainable neural-symbolic learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case</title>
		<author>
			<persName><forename type="first">N</forename><surname>Díaz-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lamas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sanchez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Franchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Donadello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Filliat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cruz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Montes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Herrera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">79</biblScope>
			<biblScope unit="page" from="58" to="83" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Explaining trained neural networks with semantic web technologies: First steps</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Doran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Raymer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-2003/NeSy17_paper4.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twelfth International Workshop on Neural-Symbolic Learning and Reasoning (NeSy)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Besold</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>d'Avila Garcez</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Noble</surname></persName>
		</editor>
		<meeting>the Twelfth International Workshop on Neural-Symbolic Learning and Reasoning (NeSy)</meeting>
		<imprint>
			<biblScope unit="volume">2003</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">An exploration of explainable machine learning using semantic web technology</title>
		<author>
			<persName><forename type="first">T</forename><surname>Procko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Elvira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ochoa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Del Rio</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICSC52841.2022.00029</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 16th International Conference on Semantic Computing (ICSC)</title>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="143" to="146" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
