<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">XAI in Healthcare</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Gizem</forename><surname>Gezici</surname></persName>
							<email>gizem.gezici@sns.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Scuola Normale Superiore</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Carlo</forename><surname>Metta</surname></persName>
							<email>carlo.metta@isti.cnr.it</email>
							<affiliation key="aff1">
								<orgName type="institution">ISTI-CNR</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrea</forename><surname>Beretta</surname></persName>
							<email>andrea.beretta@isti.cnr.it</email>
							<affiliation key="aff1">
								<orgName type="institution">ISTI-CNR</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roberto</forename><surname>Pellungrini</surname></persName>
							<email>roberto.pellungrini@sns.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Scuola Normale Superiore</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Salvatore</forename><surname>Rinzivillo</surname></persName>
							<email>rinzivillo@isti.cnr.it</email>
							<affiliation key="aff1">
								<orgName type="institution">ISTI-CNR</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dino</forename><surname>Pedreschi</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">University of Pisa</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Fosca</forename><surname>Giannotti</surname></persName>
							<email>fosca.giannotti@sns.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Scuola Normale Superiore</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">XAI in Healthcare</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">F4BF41FC4203188B014F32BC91BC30A4</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Explainable AI</term>
					<term>healthcare</term>
					<term>interpretability</term>
					<term>user trust</term>
					<term>0000-0001-9782-575 (G. Gezici)</term>
					<term>0000-0002-9325-8232 (C. Metta)</term>
					<term>0000-0001-8531-9325 (A. Beretta)</term>
					<term>0000-0003-3268-9271 (R. Pellungrini)</term>
					<term>0000-0003-4404-4147 (S. Rinzivillo)</term>
					<term>0000-0003-4801-3225 (D. Pedreschi)</term>
					<term>0000-0003-3099-3835 (F. Giannotti)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The evolution of Explainable Artificial Intelligence (XAI) within healthcare represents a crucial turn towards more transparent, understandable, and patient-centric AI applications. The main objective is not only to increase the accuracy of AI models but also, and more importantly, to establish user trust in decision support systems by improving their interpretability. This extended abstract outlines the ongoing efforts and advancements of our lab in addressing the challenges raised by complex AI systems in the healthcare domain. Currently, there are four main projects: Prostate Imaging Cancer AI, Liver Transplantation &amp; Diabetes, Breast Cancer, and Doctor XAI and ABELE.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>AI-assisted clinical decision support systems (CDSS) <ref type="bibr" target="#b0">[1]</ref> have brought many opportunities, mainly by improving diagnostic performance, predicting patient outcomes, and personalising treatment plans. Nonetheless, there have been growing concerns due to the opaque nature of the widely used black-box algorithms in CDSS. Our lab's work is dedicated to advancing the interpretability of AI models through local explanation techniques, with the ultimate goal of making AI decisions more transparent and comprehensible to healthcare providers and patients, in the spirit of our study of a CDSS for vaccine hesitancy, which leveraged XAI approaches to obtain valuable insights about public health <ref type="bibr" target="#b1">[2]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methodology</head><p>Our methodology for XAI in the healthcare field combines AI technologies with healthcare domain knowledge. The key methodologies in our research are model-agnostic local explainers for generating understandable and relevant explanations of model predictions on healthcare datasets. The first local explainer works on different input data types and provides decision rules that identify influential factors, together with counterfactual rules. The second approach works specifically on image data and returns a set of exemplar and counter-exemplar images, as well as a saliency map. Lastly, the third local explainer is ontology-based and works on multi-labeled sequential data. In addition to the local explainers, we use global feature attributions to understand the overall model behaviour.</p><p>The key methodologies used in the following projects are related to the LORE method proposed by Guidotti et al. <ref type="bibr" target="#b2">[3]</ref>. LORE is a powerful framework for generating local and interpretable explanations for machine learning models. LORE utilizes a genetic algorithm to create a synthetic neighborhood, which serves as the basis for training a local interpretable predictor. This predictor captures the underlying logic of the model's decision-making process, enabling the derivation of meaningful explanations. One of the key characteristics of LORE is its ability to provide transparent and understandable explanations for individual predictions. By focusing on local interpretability, LORE aims to explain the reasoning behind a specific prediction rather than the overall behavior of the model. This makes it particularly useful in situations where interpretability at the instance level is crucial, such as in healthcare and finance.</p><p>The explanations consist of two main components. First, a decision rule is derived from the logic of the local interpretable predictor. This decision rule sheds light on the factors that influenced the model's decision, providing insights into the important features and their corresponding weights. This information helps in understanding the key drivers behind the prediction. Additionally, LORE produces a set of counterfactual rules as part of the explanation. These counterfactual rules suggest modifications to the instance's features that would lead to a different outcome. By providing actionable suggestions for changing the input variables, LORE enables users to explore what-if scenarios and understand how small changes can influence the model's predictions. The availability of the LORE framework, along with the accompanying code<ref type="foot" target="#foot_0">1</ref>, facilitates its adoption and implementation in various domains. The following sections describe research projects that build on the LORE methodology from different points of view.</p></div>
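The factual-plus-counterfactual scheme described above can be sketched in a few lines: a synthetic neighborhood around the instance is labeled by the black box, an interpretable tree is fit on it, and the closest differently labeled neighbor serves as a counterfactual. This is a minimal illustration, not the LORE implementation: LORE builds its neighborhood with a genetic algorithm, whereas this sketch uses Gaussian perturbation, and the data and black box here are toy stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Toy black box: a random forest trained on a simple 2-feature rule.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_explain(x, n_samples=300, scale=1.0):
    """LORE-style local explanation sketch: label a synthetic
    neighborhood of x with the black box and fit a shallow surrogate
    tree on it (LORE itself uses a genetic neighborhood generator)."""
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    yz = black_box.predict(Z)
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, yz)
    # Counterfactual sketch: closest neighbor the black box labels differently.
    pred = black_box.predict([x])[0]
    diff = Z[yz != pred]
    counterfactual = diff[np.argmin(np.linalg.norm(diff - x, axis=1))]
    return surrogate, counterfactual

x = np.array([0.5, 0.5])
surrogate, cf = local_explain(x)
print(export_text(surrogate, feature_names=["f0", "f1"]))  # factual rule
print("counterfactual:", cf)
```

Reading the exported tree path that contains `x` gives the factual decision rule; the counterfactual point suggests which feature changes would flip the prediction.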
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Current Projects</head><p>In this section, we briefly present our ongoing projects on XAI in healthcare by referring to the XAI methodologies mentioned above.</p><p>Prostate Imaging Cancer AI In this project, the dataset consists of T2-weighted and Apparent Diffusion Coefficient (ADC) MRI scans that were gathered in cooperation with the doctors of the Prostate Cancer Unit. To deepen the understanding of prostate cancer diagnosis, we mainly leverage the local explainer that works on images to produce insightful justifications for intricate imaging analyses. The project will explore the novel field of cross-domain explanations between T2-weighted and ADC images. Through this approach, we seek to facilitate communication between various imaging modalities and promote a more comprehensive, integrated understanding of prostate cancer diagnosis, ultimately leading to improved patient outcomes and management.</p></div>
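The idea of a saliency map for imaging analyses can be illustrated with a generic occlusion sketch. This is not the lab's explainer (which is ABELE-based and discussed later); the scoring function and patch size below are illustrative assumptions: mask one patch of the image at a time and record the drop in the black box's score.

```python
import numpy as np

def occlusion_saliency(img, score, patch=4):
    """Generic occlusion saliency: zero out one patch at a time and
    record the drop in the black box's score. A larger drop means a
    more influential region for the current prediction."""
    base = score(img)
    sal = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = img.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            sal[i:i + patch, j:j + patch] = base - score(masked)
    return sal

# Toy stand-in for a black-box score: mean intensity of the top-left quadrant.
def toy_score(img):
    return float(img[:8, :8].mean())

img = np.ones((16, 16))
sal = occlusion_saliency(img, toy_score)
# Patches inside the influential top-left quadrant get saliency 0.25; others 0.
```

The resulting map highlights exactly the quadrant the toy score depends on, mirroring how a saliency map points at the image regions that support the model's class.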
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Liver Transplantation and Diabetes</head><p>This project aims to establish an explainable CDSS to investigate whether there are pre-liver-transplantation (pre-ltx) patient characteristics that might affect the glycemic status (condition of diabetes), i.e. whether a non-diabetic patient becomes pre-diabetic or diabetic after the liver transplantation, as well as the survival of a given patient. In this project, we work in collaboration with doctors from the Diabetology Department and we employ the liver transplantation dataset that they collected. This tabular dataset includes 1468 patients, 470 of whom had liver transplants with follow-up data for one and five years after the operation. The proposed pipeline is composed of two main parts: i. a classification model for the prediction tasks of diabetes and survival, and ii. global and local XAI methods to explain the overall model behaviour and individual patient predictions, respectively, by pinpointing the impactful pre-ltx features.</p><p>Breast Cancer In this project, the dataset consists of public health records gathered through voluntary efforts in collaboration with administrative institutions and arranged into linked tables that can be accessed using the SAS statistical tool developed by North Carolina State University. Due to the size and complexity of the dataset, and its incomplete documentation, extracting information is challenging. To address this, an entity-relationship (ER) diagram has recently been developed as a conceptual schema to choose suitable columns, and our discussions are ongoing to identify the main research questions that we can answer with this particular dataset.</p><p><ref type="bibr" target="#b3">[4]</ref> describes an ontology-based technique that aims to explain black boxes predicting multi-labeled, sequential, ontology-linked data. Formal representations of knowledge called ontologies are used in the methodology to encapsulate concepts and relationships unique to a given domain. The study concentrates on explaining Doctor AI <ref type="bibr" target="#b3">[4]</ref>, a multi-label classifier that uses a patient's clinical history as input to forecast the next visit.</p></div>
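The two ingredients just described, a multi-label next-visit classifier and ontology-aware perturbation of a patient's codes, can be sketched as follows. Everything here is a toy stand-in: the ICD-like codes, the one-level ontology, and the "chronic conditions repeat" training data are illustrative assumptions, not the MIMIC-III setup or the Doctor XAI implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Hypothetical ICD-like codes with a toy one-level ontology (code -> parent).
ONTOLOGY = {"250.0": "250", "250.1": "250",   # two diabetes variants
            "401.0": "401", "401.1": "401"}   # two hypertension variants
CODES = sorted(ONTOLOGY)

def encode(visits):
    """Multi-hot encoding: one row per visit, one column per code."""
    return np.array([[int(c in v) for c in CODES] for v in visits])

def ontology_perturb(visit, rng):
    """Doctor XAI-flavoured semantic perturbation: replace one code in
    the visit with a sibling under the same ontology parent."""
    out = set(visit)
    c = rng.choice(sorted(out))
    siblings = [s for s in CODES if ONTOLOGY[s] == ONTOLOGY[c] and s != c]
    if siblings:
        out.discard(c)
        out.add(rng.choice(siblings))
    return out

rng = np.random.default_rng(0)
# Toy training data: the next visit simply repeats the current codes,
# a stand-in for chronic conditions persisting across visits.
X = rng.integers(0, 2, size=(200, len(CODES)))
Y = X.copy()
model = MultiOutputClassifier(LogisticRegression()).fit(X, Y)

visit = {"250.0", "401.1"}
pred = model.predict(encode([visit]))[0]    # predicted next-visit codes
neighbor = ontology_perturb(visit, rng)     # semantically close variant
```

Perturbing codes along the ontology, rather than at random, keeps the synthetic neighbors clinically meaningful, which is the key idea behind ontology-based neighborhood generation.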
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Doctor XAI: Explainer for Sequential Patient Data</head><p>This project aims to establish an explainable CDSS which takes the clinical history of a patient (sequential data) and predicts the next visit with a multi-label classifier. The explainer leverages ontologies, specifically the ICD-9 ontology, on the MIMIC-III dataset <ref type="bibr" target="#b4">[5]</ref>, which contains de-identified health-related sequential data of over 40,000 patients who stayed in critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012. Our experiments on the proposed pipeline showed promising results in terms of capturing domain-specific knowledge, extracting relevant features, and providing interpretable explanations. Currently, we further aim to refine and expand the capabilities of the proposed pipeline by utilizing Large Language Models (LLMs).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ABELE</head><p>ABELE <ref type="bibr" target="#b5">[6]</ref> is a local model-agnostic explainer that takes an image and a black-box classifier as input and returns a set of exemplar and counter-exemplar images along with a saliency map. Exemplars and counter-exemplars are artificially created images that are classified, respectively, with the same outcome as the input image and with an outcome that differs from it. They can be visually examined to comprehend the rationale behind the decision. The saliency map indicates the regions of the input image that support its class and those that push it towards a different one. ABELE uses an Adversarial Autoencoder (AAE) to create a neighborhood in the latent feature space: the encoder receives the image to be explained as input and returns its latent representation, and the neighborhood is generated via a genetic technique that maximizes a fitness function. In this way, ABELE can be seen as a latent-space variant of LORE.</p><p>After generation, ABELE verifies the validity of every instance in the neighborhood by querying the AAE's discriminator, decodes it back into an image, and queries the black-box classifier for its class. Having labeled the neighborhood with the black-box classifier, ABELE builds a decision tree classifier on it; this surrogate tree mimics the local behavior of the black-box classifier. Extracting the decision rule and the counter-factual rules from the surrogate then drives the creation of exemplars and counter-exemplars. The overall effectiveness of ABELE depends on the quality of the encoder and decoder functions: the higher the quality of the AAE, the more practical and meaningful the explanations.</p></div>
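The latent-neighborhood loop just described can be sketched as follows, with PCA standing in for the adversarial autoencoder and a toy random forest standing in for the black box; both are illustrative stand-ins, and the sketch omits ABELE's discriminator check and genetic neighborhood generation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy "images": 16-pixel vectors whose class is the sign of the mean intensity.
X = rng.normal(size=(400, 16))
y = (X.mean(axis=1) > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# PCA stands in for ABELE's adversarial autoencoder (encoder + decoder).
aae = PCA(n_components=4, random_state=0).fit(X)

def abele_sketch(img, n=200, scale=1.0):
    z = aae.transform([img])[0]                         # encode
    Z = z + rng.normal(scale=scale, size=(n, z.size))   # latent neighborhood
    imgs = aae.inverse_transform(Z)                     # decode to image space
    labels = black_box.predict(imgs)                    # query the black box
    surrogate = DecisionTreeClassifier(max_depth=3).fit(Z, labels)
    pred = black_box.predict([img])[0]
    exemplars = imgs[labels == pred]     # same outcome as the input image
    counters = imgs[labels != pred]      # different outcome
    return surrogate, exemplars, counters

surrogate, exemplars, counters = abele_sketch(X[0])
```

The surrogate tree fit in the latent space plays the role of LORE's local interpretable predictor, while the decoded neighbors split into exemplars and counter-exemplars according to the black box's labels.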
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>The vision of our lab for XAI in healthcare is to create powerful, accessible, and trustworthy AI-assisted CDSSs for healthcare professionals and patients. We believe that carefully designing the integration of XAI methodologies based on the feedback of our healthcare practitioner collaborators is valuable. In this way, XAI can help us foster trust, enhance decision-making, and improve treatments for patients. Going forward, the emphasis will continue to be on establishing CDSSs with AI in a manner that values human welfare above all else and acknowledges the intricacies of healthcare.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Doctor XAI explanation pipeline</figDesc><graphic coords="4,211.96,84.19,171.36,244.44" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/riccotti/LORE</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This work has been supported by the European Union under ERC-2018-ADG GA 834756 (XAI), by HumanE-AI-Net GA 952026, and by the Partnership Extended PE00000013 -"FAIR -Future Artificial Intelligence Research" -Spoke 1 "Human-centered AI".</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Current challenges and future opportunities for xai in machine learning-based clinical decision support systems: a systematic review</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Antoniadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Guendouz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mazo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Becker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mooney</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">5088</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Explaining sociodemographic and behavioral patterns of vaccination against the swine flu (h1n1) pandemic</title>
		<author>
			<persName><forename type="first">C</forename><surname>Punzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Maslennikova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gezici</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pellungrini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">World Conference on Explainable Artificial Intelligence</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="621" to="635" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Factual and counterfactual explanations for black box decision making</title>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ruggieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Turini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Intelligent Systems</title>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="14" to="23" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Doctor XAI: an ontology-based approach to black-box sequential data classification explanations</title>
		<author>
			<persName><forename type="first">C</forename><surname>Panigutti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Perotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference on Fairness, Accountability, and Transparency</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="629" to="639" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">MIMIC-III, a freely accessible critical care database</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">E W</forename><surname>Johnson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Scientific Data</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Black box explanation by learning image exemplars in the latent feature space</title>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Matwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning and Knowledge Discovery in Databases</title>
				<editor>
			<persName><forename type="first">U</forename><surname>Brefeld</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Fromont</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Hotho</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Knobbe</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Maathuis</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Robardet</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="189" to="205" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
