<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Interactive xAI-dashboard for Semantic Segmentation</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Finn</forename><surname>Schürmann</surname></persName>
							<email>finn.schuermann@kasel.ch</email>
							<affiliation key="aff0">
								<orgName type="institution">Lucerne University of Applied Sciences and Arts</orgName>
								<address>
									<addrLine>Suurstoffi 1</addrLine>
									<postCode>CH-6343</postCode>
									<settlement>Rotkreuz</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sibylle</forename><forename type="middle">D</forename><surname>Sager-Müller</surname></persName>
							<email>sibylle.sager@hslu.ch</email>
							<affiliation key="aff0">
								<orgName type="institution">Lucerne University of Applied Sciences and Arts</orgName>
								<address>
									<addrLine>Suurstoffi 1</addrLine>
									<postCode>CH-6343</postCode>
									<settlement>Rotkreuz</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Interactive xAI-dashboard for Semantic Segmentation</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">FD5176ED41BB0B354ADC27EE090B91FF</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Computer vision</term>
					<term>Semantic segmentation</term>
					<term>Explainable AI</term>
					<term>Human-machine interaction</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This article proposes an interactive dashboard for analyzing semantic image segmentation models using eXplainable AI (xAI) methods. It integrates open-source xAI packages with segmentation models from PyTorch and TensorFlow Keras, focusing on road traffic images. Through model-based and post hoc explanation methods, users gain insights into model perceptions. The dashboard facilitates user interaction by allowing selection of model, label, and xAI method, with visualizations displaying segmented images and explanations. The implementation uses Python's Dash library, complemented by PyTorch and external xAI libraries. A demo app showcases model comparisons and xAI method outputs, enhancing transparency and trust in AI systems for safety-critical applications like autonomous driving.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>xAI is a set of methods and procedures that help humans understand and trust the results produced by artificial intelligence (AI) algorithms. Especially in safety-critical applications, xAI is a key requirement <ref type="bibr" target="#b0">[1]</ref>. This can be achieved through various validation algorithms. To obtain an unbiased and comprehensive understanding of the outcomes produced by existing algorithms for neural networks (NNs) at different depths, this article proposes a novel interactive dashboard containing a subset of common xAI methods from the most prominent open-source xAI packages. These packages can be used to analyze data and segmentation models, e.g., from road traffic. The dashboard also allows for a selection of data and trained segmentation models from common libraries like PyTorch <ref type="bibr" target="#b1">[2]</ref>.</p><p>In autonomous driving, the perception of the environment is a crucial aspect. Image segmentation, a technique frequently used in this application, assigns a label to every pixel in an image so that pixels with the same label share certain characteristics. These segmentation models must be highly accurate and efficient, as they are part of driver assistance systems. xAI enables the user to analyze the performance of the segmentation models. Thus, in the example of machine vision for autonomous vehicles, it helps the AI developer to check whether the model evaluates the situation shown in the image correctly. However, the segmentation models and xAI methods are not yet optimised for autonomous driving. While they can assist in resolving issues, their development in this field is not yet sufficiently advanced.</p><p>From 2015 to 2021, more than 150 xAI-related tools were published <ref type="bibr" target="#b2">[3]</ref>. Most of these tools are implemented for image classification, because the xAI methods were initially developed for that task <ref type="bibr" target="#b3">[4]</ref>. For both machine vision tasks, image classification and semantic segmentation, heatmaps can be employed to help the user find out whether the model learned what it was expected to learn. During our analysis of widely used xAI tools, we did not come across a tool dedicated solely to image segmentation. The Neuroscope framework <ref type="bibr" target="#b3">[4]</ref> has been one approach towards xAI analysis for image segmentation (in addition to image classification). However, its implementation is platform-dependent, and there are currently no plans for further development <ref type="bibr" target="#b4">[5]</ref>. Therefore, we intend to fill this gap by implementing a novel platform-independent dashboard specifically for image segmentation. Our goal is to integrate the most common segmentation models and xAI methods into a dashboard with user-friendly interaction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Models and xAI-Methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Models</head><p>Choosing the right model for a computer vision task is crucial. The PyTorch library documentation specifies which model is preferable for which task. Table <ref type="table">1</ref> gives an overview of the models for image segmentation used in the dashboard. The models can be categorised into fully convolutional networks (FCN) and DeepLabV3 networks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1 Overview of used models for semantic image segmentation in PyTorch</head><p>The first model class comprises the FCN ResNet models, which take inputs of different sizes and produce outputs of corresponding sizes. The second model class is DeepLabV3, which is based on a ResNet-50, ResNet-101, or MobileNetV3 backbone. The difference is that DeepLabV3 uses atrous convolution, also called dilated convolution. Models based on atrous convolution are actively researched in semantic segmentation <ref type="bibr" target="#b6">[7]</ref>. DeepLabV3 uses the MobileNetV3-Large model, which is 34% faster than its predecessor at the same accuracy level for Cityscapes segmentation. The speed increase of mobile models has two reasons: First, MobileNetV3 uses the hard sigmoid function instead of the standard sigmoid function, because the hard sigmoid has much lower latency costs <ref type="bibr" target="#b7">[8]</ref>. Second, mobile models employ atrous convolution, meaning that the kernel laid over the input has holes in it. The size of the holes is controlled by the hyperparameter rate; the default convolution corresponds to a rate of 1. As the rate increases, the object can be encoded with more multi-scale context <ref type="bibr" target="#b6">[7]</ref>.</p></div>
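The effect of the rate hyperparameter can be illustrated with a minimal one-dimensional sketch (pure Python, our own illustration; the actual models use two-dimensional dilated convolutions, e.g. via the dilation argument of torch.nn.Conv2d):

```python
def dilated_conv1d(signal, kernel, rate=1):
    """1-D 'atrous' convolution (valid padding, no kernel flip).

    rate=1 is the default convolution; rate>1 inserts rate-1
    holes between kernel taps, enlarging the receptive field
    without adding parameters.
    """
    span = (len(kernel) - 1) * rate  # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[k] * signal[i + k * rate]
                       for k in range(len(kernel))))
    return out

signal = [1, 2, 3, 4, 5, 6, 7, 8]
kernel = [1, 0, -1]  # simple edge-like filter

standard = dilated_conv1d(signal, kernel, rate=1)  # receptive field 3
atrous = dilated_conv1d(signal, kernel, rate=2)    # receptive field 5
```

With rate 2, the same three-tap kernel spans five input positions, which is the multi-scale context effect described above.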
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">xAI-Methods</head><p>In its current implementation, our dashboard offers Layer GradCAM, LIME, Feature Ablation, and Saliency as xAI methods. xAI methods can be categorized into model-specific and model-agnostic methods. Model-specific methods calculate the effect of changes in the input features on the output using the model itself, while model-agnostic methods work by manipulating input data and analyzing the resulting model predictions without knowledge of the model internals. Within the subclasses of specific and agnostic, one can further distinguish between local and global methods. Local methods explain individual predictions of a model, while global methods explain the behavior of the model averaged over all samples <ref type="bibr" target="#b8">[9]</ref>, <ref type="bibr" target="#b9">[10]</ref>, <ref type="bibr" target="#b10">[11]</ref>.</p><p>Gradient-weighted Class Activation Mapping (GradCAM) <ref type="bibr" target="#b11">[12]</ref> is a technique that analyzes the gradient information of any convolutional layer of a model and generates a heatmap highlighting important regions in the image. It requires one forward pass and one backward pass through the model to compute these gradients.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Local Interpretable Model Agnostic Explanations (LIME)</head><p>LIME <ref type="bibr" target="#b12">[13]</ref> trains an interpretable surrogate model: the model under investigation is evaluated at sampling points around a defined input example, and a simple surrogate model is fitted to these samples. It is a model-agnostic, perturbation-based approach.</p><p>Feature Ablation <ref type="bibr" target="#b8">[9]</ref> is perturbation-based as well: it calculates attributions by replacing each input feature with some reference value and measuring the resulting difference in output. A set of features can also be turned off together instead of one at a time.</p><p>Saliency <ref type="bibr" target="#b13">[14]</ref> calculates the gradients of the output with respect to the inputs.</p></div>
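The idea behind Feature Ablation can be sketched on a toy model (pure Python, our own illustration; the dashboard relies on Captum's implementation, which works analogously on pixel groups of an image):

```python
def feature_ablation(model, x, baseline=0.0):
    """Attribution by perturbation: replace each input feature with a
    reference value and record the change in the model output."""
    base_out = model(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline          # "turn off" feature i
        attributions.append(base_out - model(perturbed))
    return attributions

# toy model: a weighted sum of its inputs
weights = [0.5, -1.0, 2.0]
model = lambda x: sum(w * v for w, v in zip(weights, x))

attr = feature_ablation(model, [2.0, 1.0, 3.0])
# features with larger |attribution| influenced the output more
```

Because only inputs and outputs are queried, the approach is model-agnostic; for images, whole segments rather than single pixels are typically ablated together.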
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Implementation</head><p>The goal is to deliver a dashboard that allows the user to interactively check an AI model for semantic segmentation using a variety of xAI methods. This involves finding the trade-off between technical depth on one side and comprehensibility for the ordinary user on the other. The dashboard is implemented in Python using Dash <ref type="bibr" target="#b14">[15]</ref>. Dash is a library from Plotly for creating web apps without the need to write JavaScript or HTML code. Dash in combination with PyTorch yields good visualization possibilities for segmentation results. The xAI methods were implemented using the Captum library <ref type="bibr" target="#b8">[9]</ref> and the pytorch-grad-cam library by Jacob Gildenblat <ref type="bibr" target="#b15">[16]</ref>.</p></div>
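The Dash interaction pattern behind such a dashboard can be sketched as a minimal skeleton (component ids, option values, and the callback body are hypothetical placeholders, not the actual dashboard code):

```python
# Minimal Dash app skeleton: dropdowns feed a callback that would
# produce the explanation view. Ids and options are illustrative only.
from dash import Dash, dcc, html, Input, Output
import plotly.graph_objects as go

app = Dash(__name__)

app.layout = html.Div([
    dcc.Dropdown(id="model-selection",
                 options=["FCN_ResNet_50", "DeepLabV3_ResNet_50"]),
    dcc.Dropdown(id="label-selection", options=["car", "person", "bus"]),
    dcc.Dropdown(id="xai-method",
                 options=["Layer GradCAM", "LIME",
                          "Feature Ablation", "Saliency"]),
    dcc.Graph(id="explanation-view"),  # heatmap overlay goes here
])

@app.callback(
    Output("explanation-view", "figure"),
    Input("model-selection", "value"),
    Input("label-selection", "value"),
    Input("xai-method", "value"),
)
def update_explanation(model_name, label, method):
    # placeholder: run the model, compute the xAI heatmap for the
    # chosen label, and return a Plotly figure overlaying it
    return go.Figure()

if __name__ == "__main__":
    app.run(debug=True)
```

The callback mechanism is what makes the dashboard reactive: any change in a dropdown triggers a recomputation of the corresponding view, without any hand-written JavaScript.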
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Function of dashboard</head><p>The dashboard can be used to compare xAI methods for different segmentation models. As a first step, a demo app has been implemented with the goal of showing the user the main working principles of the dashboard. This helps the user get accustomed to the tool. The demo app can be opened via the tab "show demo". In the dropdown menus shown in Figure <ref type="figure" target="#fig_0">1</ref>, the segmentation models for comparison can be chosen. Once a model is selected for both images, the difference between the two models' segmented images becomes visible on the bottom left side. This image is obtained by taking the pixelwise difference of the arrays of the two segmented images. The segmented image is overlaid on the original one so that the user can easily judge the quality of the segmentation. By showing the differences, the model quality can be compared visually, which helps the user decide on an appropriate model. After selecting a model, the user chooses a label from the dropdown menu "Label Selection", as shown in Figure <ref type="figure" target="#fig_1">2</ref>. The labels are predefined in the preliminary version, since the dashboard only allows the selection of pre-trained models from PyTorch. After the label is selected, up to two xAI methods can be chosen from the dropdown menus, one on the left-hand side and another on the right-hand side. The upper row then shows the original image overlaid with the heatmap from the corresponding xAI method. If two xAI methods are selected, the difference of their corresponding heatmaps is shown in the lower right section to allow a direct visual comparison. On a technical level, the user has to select the same label for both xAI methods to make the comparison meaningful. The difference of the heatmaps is calculated as soon as methods are selected in both filter bars. 
If a method is not selected in one of the filters, the element indicates which filter still needs a method, as shown in Figure <ref type="figure" target="#fig_2">3</ref>. This is not the final state of the dashboard: in the lower right section, it is planned to include metrics for evaluating the xAI methods, e.g., from the library Quantus <ref type="bibr" target="#b16">[17]</ref>. Currently, Quantus is only applicable to image classification, but with a few modifications it should be possible to apply it to image segmentation as well. Also, the metrics have to be pre-selected first. The implementation is documented on GitHub<ref type="foot" target="#foot_0">2</ref>. </p></div>
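The pixelwise difference used for the comparison view can be sketched as follows (a toy-size example in plain Python; the function name is ours, and the dashboard operates on full-resolution label arrays):

```python
def segmentation_difference(seg_a, seg_b):
    """Pixelwise difference of two label maps: 1 where the two models
    disagree on the label, 0 where they agree."""
    return [[int(a != b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(seg_a, seg_b)]

model_a = [[0, 0, 1],
           [0, 1, 1]]  # label map from model A
model_b = [[0, 1, 1],
           [0, 1, 1]]  # label map from model B

diff = segmentation_difference(model_a, model_b)
```

Overlaying this disagreement mask on the original image is what makes the quality difference between two models immediately visible.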
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>The demo app with the functions described above serves as the foundation of the final app, which is currently under construction. The goal of the final version is to allow users to import their own images and segmentation models to test their performance. This enables users to adjust their models. In the current state, two models and two methods can only be compared visually. It is planned to additionally display evaluation metrics, which would be a benefit compared to similar implementations like Neuroscope <ref type="bibr" target="#b3">[4]</ref>. The metrics could be based on those from Quantus <ref type="bibr" target="#b16">[17]</ref>. However, this first requires studying which of the metrics can be transferred from classification to segmentation tasks. The xAI methods employed in this demo app are only a small selection, which will be extended to a larger set like, e.g., in Neuroscope. For better visualization, it is planned to include a color scale illustrating the magnitude of the image differences. As a last step, the app will be thoroughly tested to make sure it meets the requirements for user-friendly human-machine interaction.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Model selection to compare the segmentation models.</figDesc><graphic coords="4,101.00,420.80,392.99,214.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Label Selection.</figDesc><graphic coords="5,85.33,156.79,424.35,231.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Method selection to compare the xAI methods.</figDesc><graphic coords="6,85.05,85.05,424.63,231.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Segmentation networks are based on classification networks with little adaptation. ResNet is a deep convolutional neural network proposed by Microsoft. With residual blocks that help optimise a residual function, this architecture allows accuracy to be increased by increasing the depth of the layers<ref type="bibr" target="#b5">[6]</ref>. The number "50", for example in FCN ResNet 50, represents the number of layers in the network. All models in Table 1 are pretrained on the PASCAL VOC dataset.</figDesc><table><row><cell>task</cell><cell>model</cell><cell>specification</cell></row><row><cell></cell><cell>FCN</cell><cell>FCN_ResNet_50 FCN_ResNet_101</cell></row><row><cell>Semantic segmentation</cell><cell></cell><cell>DeepLabV3_MobileNet_V3_Large</cell></row><row><cell></cell><cell>DeepLabV3</cell><cell>DeepLabV3_ResNet_50</cell></row><row><cell></cell><cell></cell><cell>DeepLabV3_ResNet_101</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_0">https://github.com/fschurma/xAI_dashboard</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Explainable AI: A Brief Survey on History, Research Areas, Approaches and Challenges</title>
		<author>
			<persName><forename type="first">F</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhu</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-32236-6_51</idno>
	</analytic>
	<monogr>
		<title level="m">Natural Language Processing and Chinese Computing</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">J</forename><surname>Tang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M.-Y</forename><surname>Kan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Zhao</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Zan</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">11839</biblScope>
			<biblScope unit="page" from="563" to="574" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<ptr target="https://pytorch.org/vision/stable/models.html#semantic-segmentation" />
		<title level="m">Models and pre-trained weights -Torchvision 0.16 documentation</title>
				<imprint>
			<date type="published" when="2023-10-24">Oct. 24, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Making It Easier to Compare the Tools for Explainable AI, Partnership on AI</title>
		<author>
			<persName><forename type="first">N</forename><surname>Uhl</surname></persName>
		</author>
		<ptr target="https://partnershiponai.org/making-it-easier-to-compare-the-tools-for-explainable-ai/" />
		<imprint>
			<date type="published" when="2023-11-26">Nov. 26, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets</title>
		<author>
			<persName><forename type="first">C</forename><surname>Schorr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Goodarzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Dahmen</surname></persName>
		</author>
		<idno type="DOI">10.3390/app11052199</idno>
	</analytic>
	<monogr>
		<title level="j">Appl. Sci</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">2199</biblScope>
			<date type="published" when="2021-03">Mar. 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Schorr</surname></persName>
		</author>
		<title level="m">Personal Communication</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A quick overview of ResNet models</title>
		<author>
			<persName><forename type="first">K</forename><surname>Le</surname></persName>
		</author>
		<ptr target="https://medium.com/mlearning-ai/a-quick-overview-of-resnet-models-f8ed277ae81e" />
	</analytic>
	<monogr>
		<title level="j">MLearning.ai</title>
		<imprint>
			<date type="published" when="2023-07">Nov. 07, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Rethinking Atrous Convolution for Semantic Image Segmentation</title>
		<author>
			<persName><forename type="first">L.-C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Papandreou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Schroff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Adam</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/1706.05587" />
	</analytic>
	<monogr>
		<title level="j">arXiv</title>
		<imprint>
			<date type="published" when="2017-12-05">Dec. 05, 2017. Nov. 12, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Searching for MobileNetV3</title>
		<author>
			<persName><forename type="first">A</forename><surname>Howard</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/1905.02244" />
	</analytic>
	<monogr>
		<title level="j">arXiv</title>
		<imprint>
			<date type="published" when="2019-11-20">Nov. 20, 2019. Nov. 07, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Introduction • Captum</title>
		<ptr target="https://captum.ai/" />
		<imprint>
			<date type="published" when="2023-02">Dec. 02, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Interpretable Machine Learning -A Brief History, State-of-the-Art and Challenges</title>
		<author>
			<persName><forename type="first">C</forename><surname>Molnar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Casalicchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bischl</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-65965-3_28</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-65965-3_28" />
	</analytic>
	<monogr>
		<title level="m">ECML PKDD 2020 Workshops. ECML PKDD 2020</title>
		<title level="s">Communications in Computer and Information Science</title>
		<editor>
			<persName><forename type="first">I</forename><surname>Koprinska</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">1323</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Munn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pitman</surname></persName>
		</author>
		<title level="m">Explainable AI for practitioners: designing and implementing explainable ML solutions</title>
				<meeting><address><addrLine>Beijing, Sebastopol, CA</addrLine></address></meeting>
		<imprint>
			<publisher>O&apos;Reilly</publisher>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization</title>
		<author>
			<persName><forename type="first">R</forename><surname>Selvaraju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cogswell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Vedantam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Parikh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Batra</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11263-019-01228-7</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal Computer Vision</title>
		<imprint>
			<biblScope unit="volume">128</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="336" to="359" />
			<date type="published" when="2020-02">Feb. 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Explaining the Predictions of Any Classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
		<idno>arXiv</idno>
		<ptr target="http://arxiv.org/abs/1602.04938" />
		<imprint>
			<date type="published" when="2016-02-26">Feb. 26, 2016. May 13, 2024</date>
		</imprint>
	</monogr>
	<note>Why Should I Trust You?</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps</title>
		<author>
			<persName><forename type="first">K</forename><surname>Simonyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vedaldi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zisserman</surname></persName>
		</author>
		<idno>arXiv</idno>
		<ptr target="http://arxiv.org/abs/1312.6034" />
		<imprint>
			<date type="published" when="2014-04-19">Apr. 19, 2014. Mar. 20, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Dash Documentation &amp; User Guide | Plotly</title>
		<ptr target="https://dash.plotly.com/" />
		<imprint>
			<date type="published" when="2024-05">Apr. 05, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">jacobgil/pytorch-grad-cam</title>
		<author>
			<persName><forename type="first">J</forename><surname>Gildenblat</surname></persName>
		</author>
		<ptr target="https://github.com/jacobgil/pytorch-grad-cam" />
		<imprint>
			<date type="published" when="2024-04-05">Apr. 05, 2024. Apr. 05, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond</title>
		<author>
			<persName><forename type="first">A</forename><surname>Hedström</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/2202.06861" />
	</analytic>
	<monogr>
		<title level="j">arXiv</title>
		<imprint>
			<date type="published" when="2022-02-14">Feb. 14, 2022. May 13, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
