<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Concept Induction Using LLMs</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Adrita</forename><surname>Barua</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Kansas State University</orgName>
								<address>
									<settlement>Manhattan</settlement>
									<region>KS</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Concept Induction Using LLMs</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">6E6D34AC6CF5A54A1AEEA0EBF060DE13</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:57+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Concept Induction</term>
					<term>LLM</term>
					<term>Explainable AI</term>
					<term>GPT-4</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this study, the capability of Large Language Models (LLMs) to automate Concept Induction, a process traditionally reliant on formal logical reasoning over description logic ontologies, is explored within the context of explainable AI (XAI). Initially, a pre-trained LLM such as GPT-4 is employed, via prompting, to assess its ability to generate high-level concepts describing data differentials for a scene classification task. A human assessment study revealed that concepts produced by GPT-4 are preferred over those from logical concept induction systems in terms of human understandability, despite some limitations in neuron activation analysis. Building on these insights, further research aims to automate the concept induction system using LLMs, potentially addressing the shortcomings of traditional logical reasoners. This approach has the potential to scale and to provide a significant avenue for concept discovery in complex AI models.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Concept Induction <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref> is a symbolic reasoning task that involves generating complex class descriptions from instance examples using deductive reasoning algorithms over Description Logic knowledge bases. It can be used to produce meaningful explanations by identifying patterns in complex data. Previous studies <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref> have explored the potential of using concept induction in the context of explainable AI (XAI) to provide human-understandable explanations of machine learning classifications. However, traditional concept induction systems face several limitations in adaptability, scalability, and capturing complex data relationships due to their reliance on predefined rules and limited background knowledge, and they may not capture the full scope of human-like reasoning. In contrast, research in XAI aims to improve the understandability of AI models without compromising accuracy <ref type="bibr" target="#b4">[5]</ref>. Current techniques often rely on post-hoc algorithms <ref type="bibr" target="#b5">[6]</ref>, which encounter challenges like visualization and adversarial attacks <ref type="bibr" target="#b6">[7]</ref>. Concept-based models <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref> offer a promising alternative by incorporating explicit representations of concepts aligned with human intuition to explain the model's behavior. However, generating context-specific meaningful concepts from complex data remains challenging. This research aims to explore the feasibility of replacing conventional concept induction systems with Large Language Models (LLMs) to overcome these limitations and enhance the interpretability of AI models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Proceedings of the Doctoral Consortium at ISWC 2024, co-located with the 23rd International Semantic Web Conference (ISWC 2024)</head><p>adrita@ksu.edu (A. Barua); ORCID: 0000-0002-3287-7443 (A. Barua)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Importance</head><p>Concept induction plays a crucial role in various domains, including XAI, enabling the generation of interpretable and meaningful insights from complex data. Transitioning to LLM-based methods for concept induction can improve symbolic reasoning tasks at scale across different domains such as information retrieval and knowledge extraction. Furthermore, automating concept discovery through LLMs can make black-box models more explainable, aligning with ongoing efforts to map network activations to meaningful explanations <ref type="bibr" target="#b9">[10]</ref>. This addresses issues of transparency and trust in AI decisions that are crucial for stakeholders across industries impacted by AI. The significance of this work extends to the broader AI community by potentially advancing neurosymbolic AI, bridging the gap between traditional AI and symbolic reasoning approaches. We project that the outcomes of this research can help overcome the limitations of symbolic concept induction systems and advance XAI techniques, enabling safer and more accountable AI systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Related work</head><p>Different approaches utilize traditional concept induction systems based on provably correct <ref type="bibr" target="#b0">[1]</ref> or heuristic <ref type="bibr" target="#b10">[11]</ref> deduction algorithms over description logic knowledge bases. Various applications <ref type="bibr" target="#b11">[12]</ref> stand to benefit from a concept induction system that is not constrained by background knowledge and predefined rules. In the context of XAI, concept induction has shown significant results in generating human-understandable explanations through post-hoc analysis of input data to explain machine learning classification outputs <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b12">13]</ref>. However, these methods are limited by their reliance on background knowledge and the heuristic nature of their explanation generation, potentially overlooking common-sense interpretations that are evident to humans. Leveraging LLMs has the potential to bridge this gap by automating higher-level concept generation using minimal text-based information. Methods like TCAV <ref type="bibr" target="#b7">[8]</ref> focus on global explanations by employing high-level concepts to estimate their importance for predictions, but rely on human-provided concepts. Alternatively, ACE <ref type="bibr" target="#b13">[14]</ref> leverages image segmentation and clustering to curate automated concepts, which may result in some information loss. Other approaches, such as Concept Bottleneck Models (CBM) <ref type="bibr" target="#b14">[15]</ref> and Post-hoc CBM <ref type="bibr" target="#b15">[16]</ref>, map DNN models to human-understandable concepts but often depend on hand-picked concepts, highlighting the need for automated methods to generate higher-level concepts. 
Another study <ref type="bibr" target="#b16">[17]</ref> employing a similar approach utilizes GPT-3 with a few-shot method to produce automated concepts. However, none of these methods addresses the generation of complex description logic concepts. Our study investigates LLMs' ability to generate such explanations so that they can replace symbolic reasoners at scale, as a standalone system.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Research question(s) and hypotheses</head><p>The objective of our research is to assess whether LLMs, leveraging their vast domain knowledge and reasoning capabilities, can outperform or at least match concept induction systems in producing accurate and understandable explanations aligned with human intuition, while also being capable of explaining hidden neuron activations in the domain of XAI. Previous research <ref type="bibr" target="#b3">[4]</ref> has explored the effectiveness of concept induction for creating explanations that "make sense" to humans, indicating that while concept induction can explain data differentials in machine learning classifications, human-generated explanations are generally superior. This work employed the ECII heuristic concept induction system <ref type="bibr" target="#b10">[11]</ref> and utilized the Wikipedia category hierarchy <ref type="bibr" target="#b17">[18]</ref> as background knowledge. Building on these findings, our study extends this work by replacing the ECII model with an LLM to generate meaningful and coherent explanations. Primarily, we seek to identify "good" concepts that are understandable to humans and evaluate their alignment with human-generated explanations, to potentially surpass concept induction in terms of accuracy and comprehensibility. Furthermore, we seek to understand whether LLM explanations, even if preferred over those of logical concept induction systems in terms of "meaningfulness to humans", will still remain effective in demonstrating neuron activations when mapped to a neural network architecture. There could be a trade-off between the two approaches; for example, the type of concepts that work well for humans might not always be useful to depict what the neuron 'sees' in a DNN architecture. 
The primary goal is to utilize pre-trained LLMs like GPT-4 <ref type="bibr" target="#b18">[19]</ref> to achieve satisfactory results via prompting <ref type="bibr" target="#b19">[20]</ref> and subsequently fine-tune an LLM to mimic the output of a symbolic reasoner (e.g., generating complex concepts) that could be verifiable using description logics while making use of the common-sense capability of LLMs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Research methods</head><p>We begin by employing an initial prompting technique to assess the effectiveness of concepts generated by LLMs in terms of human understandability and their applicability to hidden neuron activation. This initial assessment serves as a foundation for our broader objective of fine-tuning an LLM to automate the concept induction system.</p><p>Prompting method In preliminary investigations <ref type="bibr" target="#b20">[21]</ref>, we utilized GPT-4 to generate concepts for distinguishing between different image classes as an initial assessment of the LLM's concept induction capability. Object tags from the ADE20K dataset <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b22">23]</ref> were used as input for the GPT-4 model via the OpenAI API, using zero-shot prompting. This dataset comprises around 20,000 images annotated with scene categories and object tags. We selected 45 image set pairs, each containing two groups of images representing distinct scene categories (e.g., Bathroom vs. Park). Our objective was to generate explanations that describe what distinguishes category A from category B in each image set pair. To prompt the GPT-4 model effectively, we experimented with different techniques, ultimately leveraging only the object labels from each image set category. The model was instructed to differentiate between the two categories based on their object tags. The generated concepts were compared with those produced by the ECII system, which also used the same object tags. Object tags can be any items physically present in the images, such as stands, food, walls, etc. The process and the prompt used for interacting with the GPT-4 model are illustrated in Figure <ref type="figure" target="#fig_0">1</ref>. 
The latest version of the GPT-4 model was used with specific parameter settings, including a temperature of 0.5 and top_p of 1, to ensure consistent and reproducible answers. We designed the specific prompt (1) to generate generic concepts or object classes that mimic the ontology classes positioned somewhere in the middle of the hierarchy used by ECII, aiming to strike a balance between more general concepts and highly specific subclasses within the ontology structure. Each set produced a list of seven concepts following this method. A detailed description of the experimental setup and prompting method can be found in <ref type="bibr" target="#b20">[21]</ref>.</p></div>
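The prompting setup described above can be sketched as follows. This is a minimal illustration, not the authors' verbatim prompt: the prompt wording, the `build_prompt`/`induce_concepts` helper names, and the model identifier are assumptions; only the parameter settings (temperature 0.5, top_p 1, seven concepts) come from the text.

```python
try:
    # The openai package is only needed for the real API call below.
    from openai import OpenAI
except ImportError:
    OpenAI = None


def build_prompt(tags_a, tags_b, n_concepts=7):
    # Hypothetical zero-shot prompt: ask for generic, mid-hierarchy
    # concepts present in category A but not in category B.
    return (
        f"Category A images contain: {', '.join(sorted(tags_a))}.\n"
        f"Category B images contain: {', '.join(sorted(tags_b))}.\n"
        f"List {n_concepts} generic, high-level concepts that describe "
        "what distinguishes Category A from Category B."
    )


def induce_concepts(client, tags_a, tags_b):
    # Parameter settings reported in the paper: temperature 0.5, top_p 1.
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0.5,
        top_p=1,
        messages=[{"role": "user", "content": build_prompt(tags_a, tags_b)}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    # Object tags stand in for ADE20K annotations (illustrative values).
    prompt = build_prompt({"bathtub", "sink", "towel"},
                          {"tree", "bench", "grass"})
    print(prompt)
    # induce_concepts(OpenAI(), ...) would issue the actual API call.
```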
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Hidden neuron activation analysis</head><p>To evaluate whether concepts generated by LLMs can offer insights into the hidden-layer activation space, we conduct a preliminary investigation, described in <ref type="bibr" target="#b9">[10]</ref>, using two evaluation methods: Statistical Evaluation and Concept Activation Analysis. In this work, three approaches for generating concepts are compared: GPT-4, ECII, and CLIP-Dissect <ref type="bibr" target="#b8">[9]</ref>. To begin, label hypotheses are obtained to determine which neurons are activated for specific concept labels. Initially, a trained ResNet50V2 is fed with ADE20K images, and the activations of the dense layer's 64 neurons are analyzed individually. For each neuron, positive examples (𝑃) consist of images that activate the neuron with at least 80% of the maximum activation value, while negative examples (𝑁) are images that activate the neuron with at most 20% or not at all. ECII generates concept-label hypotheses for each neuron based on 𝑃, 𝑁, and background knowledge. Similarly, GPT-4 uses the same sets 𝑃 and 𝑁 but with adjustments: due to input constraints, only one image per class is selected for set 𝑁. GPT-4 identifies concepts present in 𝑃 but not in 𝑁, using the prompting method described earlier in this section. This yields a list of three concepts per neuron, but only one concept per neuron is chosen at random for the analysis. To compare with other XAI methods, target labels are also generated using CLIP-Dissect, a label-free method that associates high-level concepts with individual neurons using a pre-trained multimodal model. To confirm these label hypotheses, images corresponding to each concept label are retrieved from Google Images using the label as a keyword. 80% of the obtained images are used for hypothesis confirmation, and the remaining 20% for statistical evaluation. 
The images are fed to the network to check whether the target neuron activates for the retrieved label and whether any other neurons activate. A target label for a neuron is confirmed if it activates for ≥ 80% of its target images. In total, 19, 5, and 14 distinct confirmed concepts are obtained from Concept Induction, CLIP-Dissect, and GPT-4, respectively.</p><p>Fine-tuning an LLM After reviewing the initial results generated from the prompting technique, our next step is to fine-tune an open-source LLM to automatically generate meaningful concepts, based on input data, that can effectively explain the reasoning behind specific outputs of neural network architectures. We want to fine-tune the LLM in a manner that captures the logical reasoning structure of a symbolic deductive system, ensuring it remains both explainable and verifiable. This approach aims to address the challenge of using another black-box model, such as an LLM, to explain a neural network system, while also mitigating the uncontrolled nature of a generic LLM by providing a more controlled system for concept generation.</p></div>
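The 80%/20% selection of positive and negative example sets for a single neuron can be sketched as follows. This is a minimal illustration with synthetic activation values; `split_examples` is a hypothetical helper name, not code from the paper.

```python
import numpy as np


def split_examples(activations, image_ids):
    """Return (P, N) for one neuron: P holds images that activate it with
    at least 80% of the maximum activation, N those with at most 20%."""
    activations = np.asarray(activations, dtype=float)
    max_act = activations.max()
    pos_mask = activations >= 0.8 * max_act   # strong activators -> P
    neg_mask = activations <= 0.2 * max_act   # weak or zero activators -> N
    P = [img for img, m in zip(image_ids, pos_mask) if m]
    N = [img for img, m in zip(image_ids, neg_mask) if m]
    return P, N


# Synthetic dense-layer activations for five images (illustrative only).
acts = [9.5, 0.0, 1.2, 8.1, 0.5]
ids = ["img0", "img1", "img2", "img3", "img4"]
P, N = split_examples(acts, ids)
print(P, N)  # → ['img0', 'img3'] ['img1', 'img2', 'img4']
```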
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Evaluation</head><p>Human Assessment We conducted a human assessment study on Amazon Mechanical Turk using the Cloud Research platform to evaluate the human understandability of concepts generated by GPT-4. 300 participants were recruited, each compensated $5 for completing the task. The study aimed to evaluate the quality of explanations generated by the LLM (GPT-4) compared to human-generated and ECII explanations. Participants were presented with 45 image set pairs and asked to choose the more accurate explanation among three types of comparison: human vs. ECII, human vs. LLM, and LLM vs. ECII. Each participant completed all three comparisons, with only two explanation types compared in any given question. Human and ECII explanations were crafted in the previous study <ref type="bibr" target="#b3">[4]</ref>, while LLM explanations were generated following the prompting method specified in section 5. Participants preferred human explanations over ECII explanations (83% preference) and LLM (GPT-4) explanations (69% preference). However, LLM explanations were preferred over ECII explanations (63% preference). Ability scores derived from a Bradley-Terry analysis revealed that human explanations had the highest scores (M = 1.77), followed by LLM explanations (M = 0.724), with a significant overall difference (𝑝 &lt; 0.001, 𝜂² = 0.41). Tukey's Honestly Significant Difference (HSD) test confirmed significant differences in ability scores between human vs. ECII explanations and human vs. LLM explanations (both 𝑝 &lt; 0.0001), as well as between LLM vs. ECII explanations (𝑝 = 0.0004). This indicates that the observed differences in ability scores are highly significant. Detailed ability scores for each image set pair and a discussion of the nature of the resulting concepts can be found in <ref type="bibr" target="#b20">[21]</ref>.</p></div>
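Bradley-Terry ability estimation from pairwise preferences, the analysis used above, can be sketched with the standard MM (minorization-maximization) iteration. This is a toy illustration under assumed win counts loosely mirroring the reported preference percentages; the paper's exact fitting procedure and data may differ.

```python
import numpy as np


def bradley_terry(wins, iters=200):
    """MM iteration for Bradley-Terry abilities.
    wins[i][j] = number of times item i was preferred over item j."""
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    p = np.ones(n)  # initial abilities
    for _ in range(iters):
        for i in range(n):
            total_wins = wins[i].sum()
            # Sum over opponents: comparisons n_ij weighted by 1/(p_i + p_j).
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = total_wins / denom
        p /= p.sum()  # fix the arbitrary scale
    return p


# Assumed toy counts per 100 pairwise trials (human > LLM > ECII ordering,
# echoing the 69%, 83%, and 63% preferences reported above).
wins = [[0, 69, 83],   # human
        [31, 0, 63],   # LLM (GPT-4)
        [17, 37, 0]]   # ECII
abilities = bradley_terry(wins)
print(abilities)  # human gets the highest ability, ECII the lowest
```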
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Statistical Evaluation</head><p>To perform a statistical analysis of the confirmed labels generated by the hidden neuron activation method described in section 5, we consider each neuron-label pair as a hypothesis, using the remaining 20% of images retrieved from Google Images. For example, the hypothesis for neuron 1 is that it activates more strongly for images related to "crosswalk" than for images related to other keywords. The corresponding null hypothesis is that the activation values are not different. We test 20 hypotheses from Concept Induction, 8 from CLIP-Dissect, and 27 from GPT-4. Since activation values may not follow a normal distribution, we use the Mann-Whitney U test <ref type="bibr" target="#b23">[24]</ref> for statistical assessment. Among the 20 null hypotheses from Concept Induction, 19 are rejected at p &lt; 0.05. For CLIP-Dissect, all 8 null hypotheses are rejected at p &lt; 0.05, and for GPT-4, 25 out of 27 null hypotheses are rejected. Considering unique concepts, Concept Induction validates 18 hypotheses statistically, CLIP-Dissect validates 5, and GPT-4 validates 12. The Mann-Whitney U results demonstrate that for most neurons (with p &lt; 0.00001), activation values of target images are significantly higher than those of non-target images.</p></div>
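The per-neuron hypothesis test above can be sketched as follows: a one-sided Mann-Whitney U test comparing activations on target-label images against activations on other images. The activation values here are synthetic, for illustration only.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Synthetic dense-layer activations (not the paper's data): one neuron's
# responses to 30 target images (e.g. "crosswalk") vs. 30 non-target images.
target = rng.normal(loc=5.0, scale=1.0, size=30)
non_target = rng.normal(loc=1.0, scale=1.0, size=30)

# One-sided nonparametric test: are target activations stochastically higher?
stat, p_value = mannwhitneyu(target, non_target, alternative="greater")
print(p_value < 0.05)  # the null hypothesis is rejected for this toy data
```

The one-sided `alternative="greater"` matches the hypothesis as stated: the neuron activates more strongly for target images, not merely differently.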
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Concept Activation Analysis</head><p>We utilize Concept Activation <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref>, an XAI technique that measures the presence of predefined concepts in the hidden-layer activations of images. We evaluate label hypotheses obtained from all three methods using this analysis; unlike the previous methods, this analysis does not restrict itself to confirmed concepts. Images for each concept are collected from Google, and a concept classifier is trained using a Support Vector Machine (SVM). The dataset for each classifier consists of images showing the presence (label = 1) and absence (label = 0) of the concept. This dataset is passed through a pre-trained ResNet50V2 model, and the activation values of each image in the dense layer are saved. The transformed dataset is split into train (80%) and test (20%) sets, and an SVM classifier is trained on the train split. Both linear (Concept Activation Vector, CAV) and non-linear (Concept Activation Region, CAR) kernels are used to assess the decision boundary separating the presence/absence of a concept. Finally, the test dataset is used to evaluate the concept classifier's ability to classify the existence of concepts. All concepts analyzed using Concept Activation achieved a p-value of less than 0.05 in k-fold cross-validation tests. CLIP-Dissect outperformed GPT-4 on CAR, and Concept Induction surpassed GPT-4 on CAV. However, there was no statistically significant difference between Concept Induction and CLIP-Dissect. Detailed results and discussion of both neuron activation analyses can be found in <ref type="bibr" target="#b9">[10]</ref>.</p></div>
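The concept classifier described above can be sketched as follows: activation vectors labeled for concept presence/absence, an 80/20 split, and SVMs with a linear (CAV-style) and an RBF (CAR-style) kernel. Synthetic 64-dimensional vectors stand in for the ResNet50V2 dense-layer activations; the class separation here is artificial, for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
dim = 64  # width of the dense layer analyzed above

# Synthetic activations: concept present (label 1) vs. absent (label 0).
present = rng.normal(loc=1.0, scale=1.0, size=(100, dim))
absent = rng.normal(loc=-1.0, scale=1.0, size=(100, dim))
X = np.vstack([present, absent])
y = np.array([1] * 100 + [0] * 100)

# 80% train / 20% test split, as in the analysis above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)

# Linear kernel ~ Concept Activation Vector (CAV);
# RBF kernel ~ non-linear Concept Activation Region (CAR).
scores = {}
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    scores[kernel] = clf.score(X_te, y_te)
    print(kernel, scores[kernel])
```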
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Limitations and future work</head><p>The human assessment study of concepts generated by LLMs such as GPT-4 has shown that they have great potential in automating the concept induction system to provide meaningful insights into data differentials. However, the evaluation using hidden neuron activation methods did not yield promising results. This is understandable, as the evaluation method for neuron activations has its own constraints (e.g., verification using the Google image dataset can have anomalies and does not always depict the accurate concepts that are originally true to the neuron) and is still under development. Despite these limitations, there is room for improvement in the LLM concept generation pipeline to better align with the nature of activated neurons. Efforts to fully automate XAI systems for concept discovery within DNNs are crucial, and further refinement of LLM-based approaches is necessary. While challenges persist, LLMs demonstrate the capacity to produce human-understandable high-level concepts. Developing standalone systems by fine-tuning LLMs to leverage their common-sense capabilities could potentially replace traditional Concept Induction systems at scale, offering significant value across various domains, including XAI. This study underscores the efficient utilization of LLMs in Concept Induction and paves the way for future research to harness these models to enhance the explainability of AI systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Prompting Method</figDesc><graphic coords="4,151.80,84.19,291.68,112.60" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This research acknowledges Dr. Pascal Hitzler, professor in the Department of Computer Science, Kansas State University, director of the Data Semantics (DaSe) Lab, for his supervision and guidance throughout this study. The study received partial funding from the National Science Foundation grant 2333782 "Proto-OKN Theme 1: Safe Agricultural Products and Water Graph (SAWGraph): An OKN to Monitor and Trace PFAS and Other Contaminants in the Nation's Food and Water Systems."</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Concept learning in description logics using refinement operators</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10994-009-5146-2</idno>
		<ptr target="https://doi.org/10.1007/s10994-009-5146-2" />
	</analytic>
	<monogr>
		<title level="j">Mach. Learn</title>
		<imprint>
			<biblScope unit="volume">78</biblScope>
			<biblScope unit="page" from="203" to="250" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Dl-learner -A framework for inductive learning on the semantic web</title>
		<author>
			<persName><forename type="first">L</forename><surname>Bühmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Westphal</surname></persName>
		</author>
		<idno type="DOI">10.1016/J.WEBSEM.2016.06.001</idno>
		<ptr target="https://doi.org/10.1016/j.websem.2016.06.001" />
	</analytic>
	<monogr>
		<title level="j">J. Web Semant</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="15" to="24" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Explaining trained neural networks with semantic web technologies: First steps</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Doran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Raymer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-2003/NeSy17_paper4.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twelfth International Workshop on Neural-Symbolic Learning and Reasoning, NeSy 2017</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Besold</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Avila Garcez</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Noble</surname></persName>
		</editor>
		<meeting>the Twelfth International Workshop on Neural-Symbolic Learning and Reasoning, NeSy 2017<address><addrLine>London, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">July 17-18, 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Towards human-compatible XAI: Explaining data differentials with concept induction over background knowledge</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">L</forename><surname>Widmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nadella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fiechter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Juvina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Minnery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raymer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Web Semantics</title>
		<imprint>
			<biblScope unit="volume">79</biblScope>
			<biblScope unit="page">100807</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Explainable AI: A brief survey on history, research areas, approaches and challenges</title>
		<author>
			<persName><forename type="first">F</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhu</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-32236-6_51</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-32236-6_51" />
	</analytic>
	<monogr>
		<title level="m">Natural Language Processing and Chinese Computing -8th CCF International Conference, NLPCC 2019</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">J</forename><surname>Tang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Kan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Zhao</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Zan</surname></persName>
		</editor>
		<meeting><address><addrLine>Dunhuang, China</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">October 9-14, 2019. 2019</date>
			<biblScope unit="volume">11839</biblScope>
			<biblScope unit="page" from="563" to="574" />
		</imprint>
	</monogr>
	<note>Proceedings, Part II</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Arrieta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Díaz-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Del Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bennetot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barbado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gil-López</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Molina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Benjamins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="82" to="115" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rad</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2006.11371</idno>
		<title level="m">Opportunities and challenges in explainable artificial intelligence (XAI): A survey</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV)</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wattenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gilmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wexler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">B</forename><surname>Viégas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sayres</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v80/kim18d.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 35th International Conference on Machine Learning, ICML 2018</title>
				<editor>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Dy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Krause</surname></persName>
		</editor>
		<meeting>the 35th International Conference on Machine Learning, ICML 2018<address><addrLine>Stockholmsmässan, Stockholm, Sweden; PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">July 10-15, 2018</date>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="2673" to="2682" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Clip-dissect: Automatic description of neuron representations in deep vision networks</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">P</forename><surname>Oikarinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Weng</surname></persName>
		</author>
		<ptr target="https://openreview.net/pdf?id=iPWiwWHc1V" />
	</analytic>
	<monogr>
		<title level="m">The Eleventh International Conference on Learning Representations, ICLR 2023</title>
				<meeting><address><addrLine>Kigali, Rwanda</addrLine></address></meeting>
		<imprint>
			<publisher>OpenReview</publisher>
			<date type="published" when="2023">May 1-5, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">On the value of labeled data and symbolic methods for hidden neuron activation analysis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Dalal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rayan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">Y</forename><surname>Vasserman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2404.13567</idno>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Efficient concept induction for description logics</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v33i01.33013036</idno>
		<ptr target="https://doi.org/10.1609/aaai.v33i01.33013036" />
	</analytic>
	<monogr>
		<title level="m">The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019</title>
				<meeting><address><addrLine>Honolulu, Hawaii, USA</addrLine></address></meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2019-02-01">January 27 - February 1, 2019</date>
			<biblScope unit="page" from="3036" to="3043" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">An exploration of explainable machine learning using semantic web technology</title>
		<author>
			<persName><forename type="first">T</forename><surname>Procko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Elvira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ochoa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Del Rio</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICSC52841.2022.00029</idno>
		<ptr target="https://doi.org/10.1109/ICSC52841.2022.00029" />
	</analytic>
	<monogr>
		<title level="m">16th IEEE International Conference on Semantic Computing, ICSC 2022</title>
				<meeting><address><addrLine>Laguna Hills, CA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2022">January 26-28, 2022</date>
			<biblScope unit="page" from="143" to="146" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Using ontologies to enhance human understandability of global post-hoc explanations of black-box models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Confalonieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Weyde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">R</forename><surname>Besold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Del Prado Martín</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">296</biblScope>
			<biblScope unit="page">103471</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Towards automatic concept-based explanations</title>
		<author>
			<persName><forename type="first">A</forename><surname>Ghorbani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wexler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Zou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Concept bottleneck models</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">W</forename><surname>Koh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">S</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mussmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Pierson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Liang</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v119/koh20a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 37th International Conference on Machine Learning, ICML 2020</title>
				<meeting>the 37th International Conference on Machine Learning, ICML 2020<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020-07">July 2020</date>
			<biblScope unit="volume">119</biblScope>
			<biblScope unit="page" from="5338" to="5348" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Post-hoc Concept Bottleneck Models</title>
		<author>
			<persName><forename type="first">M</forename><surname>Yüksekgönül</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zou</surname></persName>
		</author>
		<ptr target="https://openreview.net/pdf?id=nA5AZ8CEyow" />
	</analytic>
	<monogr>
		<title level="m">The Eleventh International Conference on Learning Representations, ICLR 2023</title>
				<meeting><address><addrLine>Kigali, Rwanda</addrLine></address></meeting>
		<imprint>
			<publisher>OpenReview</publisher>
			<date type="published" when="2023">May 1-5, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Label-free concept bottleneck models</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">P</forename><surname>Oikarinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Weng</surname></persName>
		</author>
		<ptr target="https://openreview.net/pdf?id=FlCg47MNvBA" />
	</analytic>
	<monogr>
		<title level="m">The Eleventh International Conference on Learning Representations, ICLR 2023</title>
				<meeting><address><addrLine>Kigali, Rwanda</addrLine></address></meeting>
		<imprint>
			<publisher>OpenReview</publisher>
			<date type="published" when="2023">May 1-5, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Wikipedia knowledge graph for explainable AI</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Sarker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nadella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">S</forename><surname>Minnery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Juvina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Raymer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">R</forename><surname>Aue</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-65384-2_6</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-65384-2_6" />
	</analytic>
	<monogr>
		<title level="m">Knowledge Graphs and Semantic Web - Second Iberoamerican Conference and First Indo-American Conference, KGSWC 2020</title>
		<title level="s">Communications in Computer and Information Science</title>
		<editor>
			<persName><forename type="first">B</forename><surname>Villazón-Terrazas</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Ortiz-Rodríguez</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Tiwari</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Shandilya</surname></persName>
		</editor>
		<meeting><address><addrLine>Mérida, Mexico</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">November 26-27, 2020</date>
			<biblScope unit="volume">1232</biblScope>
			<biblScope unit="page" from="72" to="87" />
		</imprint>
	</monogr>
	<note>Proceedings</note>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Achiam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Adler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Akkaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">L</forename><surname>Aleman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Almeida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Altenschmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Altman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Anadkat</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2303.08774</idno>
		<title level="m">GPT-4 technical report</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Prompt engineering for ChatGPT: a quick guide to techniques, tips, and best practices</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ekin</surname></persName>
		</author>
		<idno type="DOI">10.36227/techrxiv.22683919.v2</idno>
		<ptr target="https://www.techrxiv.org/doi/full/10.36227/techrxiv.22683919.v2" />
	</analytic>
	<monogr>
		<title level="s">Authorea Preprints</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Concept induction using LLMs: a user experiment for assessment</title>
		<author>
			<persName><forename type="first">A</forename><surname>Barua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Widmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<ptr target="NeSy2024" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>submitted to</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Scene parsing through ADE20K dataset</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Puig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fidler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barriuso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPR.2017.544</idno>
		<ptr target="https://doi.org/10.1109/CVPR.2017.544" />
	</analytic>
	<monogr>
		<title level="m">2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017</title>
				<meeting><address><addrLine>Honolulu, HI, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2017">July 21-26, 2017</date>
			<biblScope unit="page" from="5122" to="5130" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Semantic understanding of scenes through the ADE20k dataset</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Puig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fidler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barriuso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">127</biblScope>
			<biblScope unit="page" from="302" to="321" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
	<title level="a" type="main">Mann-Whitney U test</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">E</forename><surname>McKnight</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Najab</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Corsini Encyclopedia of Psychology</title>
				<imprint>
			<publisher>Wiley</publisher>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV)</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wattenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gilmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wexler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Viegas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sayres</surname></persName>
		</author>
		<ptr target="https://proceedings.mlr.press/v80/kim18d.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 35th International Conference on Machine Learning</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Dy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Krause</surname></persName>
		</editor>
		<meeting>the 35th International Conference on Machine Learning<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="2668" to="2677" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Concept activation regions: A generalized framework for concept-based explanations</title>
		<author>
			<persName><forename type="first">J</forename><surname>Crabbé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Van Der Schaar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Koyejo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Mohamed</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Agarwal</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Belgrave</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Cho</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Oh</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="2590" to="2607" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
