<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Workshop on Human-Interpretable AI</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Gabriele</forename><surname>Ciravegna</surname></persName>
							<email>gabriele.ciravegna@polito.it</email>
						</author>
						<author>
							<persName><forename type="first">Mateo</forename><forename type="middle">Espinoza</forename><surname>Zarlenga</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Damien</forename><surname>Garreau</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Mateja</forename><surname>Jamnik</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Tania</forename><surname>Cerquitelli</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Pietro</forename><surname>Barbiero</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Francesco</forename><surname>Giannini</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Automatica e Informatica</orgName>
								<orgName type="institution">Politecnico di Torino</orgName>
								<address>
									<settlement>Torino</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="institution">University of Cambridge</orgName>
								<address>
									<settlement>Cambridge</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Workshop on Human-Interpretable AI</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">AA71857804B54EEA965122F894856404</idno>
					<idno type="DOI">10.1145/3637528.3671499</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:48+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Human-Interpretable AI</term>
					<term>Interpretability</term>
					<term>Explainability</term>
					<term>HI-AI</term>
					<term>XAI</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This workshop aims to spearhead research on Human-Interpretable Artificial Intelligence (HI-AI) by providing: (i) a general overview of the key aspects of HI-AI, in order to equip all researchers with the necessary background and set of definitions; (ii) novel and interesting ideas coming from both invited talks and top paper contributions; (iii) the chance to engage in dialogue with prominent scientists during poster presentations and coffee breaks. The workshop welcomes contributions covering novel interpretable-by-design or post-hoc approaches, as well as theoretical analyses of existing works. Additionally, we accept visionary contributions speculating on the future potential of this field. Finally, we welcome contributions from related fields such as Ethical AI, Knowledge-driven Machine Learning, Human-machine Interaction, applications in Medicine and Industry, and analyses from Regulatory experts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>CCS Concepts</head><p>• Computing methodologies → Artificial intelligence.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Human-interpretable AI models <ref type="bibr" target="#b0">[1]</ref> are playing an increasingly important role in Artificial Intelligence (AI). Today, much of the technology employed by AI and SIGKDD researchers is based on Deep Neural Networks (DNNs). Yet, the lack of transparency of DNNs prevents the safe deployment of these models in critical contexts that significantly affect users. Consequently, decision-making systems based on deep learning face constraints and limitations from regulatory institutions <ref type="bibr" target="#b1">[2]</ref>, which increasingly demand transparency in AI models <ref type="bibr" target="#b2">[3]</ref>. Even though standard eXplainable AI (XAI) emerged to address the need to interpret DNNs, several works argue that it may not have achieved its goal <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>.</p><p>To truly explain the decision-making process of DNNs, there is a growing consensus that human-interpretable explanations are required. Human-Interpretable AI (HI-AI) methods either provide post-hoc explanations by extracting the symbols that have been automatically learnt by the models (e.g., TCAV <ref type="bibr" target="#b5">[6]</ref>), or directly design intrinsically interpretable architectures (e.g., CBMs <ref type="bibr" target="#b6">[7]</ref>). Among other qualities, these explanations better resemble the way humans reason and explain <ref type="bibr" target="#b7">[8]</ref>, help to detect model biases <ref type="bibr" target="#b8">[9]</ref>, are more stable under perturbations <ref type="bibr" target="#b9">[10]</ref>, and can yield more robust models <ref type="bibr" target="#b10">[11]</ref>.</p></div>
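The interpretable-by-design paradigm mentioned above can be made concrete with a minimal sketch. The following plain-NumPy fragment illustrates the structure of a concept bottleneck model in the spirit of CBMs [7]; all weights, names, and dimensions here are illustrative assumptions of ours (untrained, not taken from any cited work), intended only to show how the label prediction depends on the input exclusively through human-interpretable concept scores.

```python
import numpy as np

# Illustrative concept bottleneck model (CBM) sketch.
# Inputs are first mapped to human-interpretable concept activations,
# and the final label is predicted from the concepts alone.
# All weights are random and untrained; this is a structural sketch.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# x -> c: concept predictor (e.g. "has wings", "has beak", "has fur")
W_xc = rng.normal(size=(4, 3))   # 4 input features -> 3 named concepts
# c -> y: label predictor operates only on the concept bottleneck
W_cy = rng.normal(size=(3, 2))   # 3 concepts -> 2 classes

def predict(x):
    c = sigmoid(x @ W_xc)        # interpretable concept scores in [0, 1]
    y = sigmoid(c @ W_cy)        # label depends on x only through c
    return c, y

x = np.array([0.5, -1.0, 0.3, 2.0])
concepts, label_scores = predict(x)
```

Because every prediction is mediated by the concept vector, a practitioner can inspect, or even intervene on, individual concept scores before the final label is computed, which is the key property exploited by interpretable-by-design approaches.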
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Workshop Topics</head><p>Topics of interest include, but are not limited to, the following:</p><p>• Explainable-by-design models, novel approaches to creating machine learning and deep learning models that are intrinsically explainable or interpretable. • Post-hoc methods for Interpretable AI, novel approaches to post-hoc interpretability. These include, but are not limited to, approaches working on higher-level features such as concepts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>• Theoretical analyses of existing methods</head><p>showing what existing interpretable methods can achieve, both from an explanation and a generalization point of view.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Program</head><p>This workshop aims to advance research on HI-AI by offering a diverse program designed to enhance participants' knowledge and foster collaboration and innovation. The invited speakers who will give keynote talks at the HI-AI workshop appear in the program outline; all of them have already confirmed their presence. Program Outline. Table <ref type="table">1</ref> reports the workshop program. First, we will give an overview of the key aspects of HI-AI to ensure all attendees have a solid understanding of the background concepts and terminology. Second, the workshop features three invited talks from experts in the field, who will share their insights and latest research findings. These talks will provide valuable perspectives and inspire new ideas. Third, we will offer participants the chance to engage in dialogue with prominent scientists during a long coffee break with poster presentations, encouraging collaboration and knowledge-sharing. The program also includes three contributed talks from selected contributions, and we will recognize the most interesting contribution with a Best Workshop Paper Award. We have allocated 40 minutes to each invited talk, allowing for a 30-minute presentation followed by a 10-minute Q&amp;A session, and the same amount of time to the poster session.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Paper Management</head><p>We published the Call For Papers (CFP) on the workshop website 1 . The CFP focuses on short papers, which can be research papers, theoretical analysis papers, or vision papers.</p><p>In the case of research contributions, we asked paper authors to make their code and data openly available to ensure reproducibility.</p><p>The review process was double-blind. We used OpenReview to ensure that the final decision on each paper is made by organisers without conflicts of interest. All accepted papers will be published on the workshop website, which will remain active and accessible after the conference concludes. Additionally, we have contacted an external editor (CEUR-WS) to create an archival version of these papers for authors who wish to participate in a subsequent publication.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1">https://human-interpretable-ai.github.io/</note><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Table 1: Draft of the program outline.</head><label></label><figDesc>8:50 - 9:00 Opening remarks; 9:00 - 9:40 Keynote: Andrea Passerini; 9:40 - 10:00 5-minute lightning talks (3 selected papers); 10:00 - 10:40 Keynote: Abbas Rahimi; 10:40 - 11:30 Coffee &amp; Posters; 11:30 - 12:10 Keynote: Sonali Parbhoo; 12:10 - 12:20 Awards and Closing Remarks</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>• Knowledge integration &amp; Reasoning, methods injecting domain knowledge and reasoning into deep learning models to enhance their interpretability and performance. • AI Ethics, papers analysing the implications of interpretable AI methods, discussing topics such as fairness, accountability, transparency, and bias mitigation in AI systems. • Human-machine Interaction, studies on innovative human-machine interaction systems that successfully exploit the capability of interpretable AI models to provide both standard and counter-factual explanations. • Vision papers on XAI, discussing the possible evolutions of the XAI field or speculating on potential interpretable systems and applications with their implications. • Applications in Medicine and Healthcare, applications of interpretable AI methods in medical diagnosis, treatment planning, and healthcare decision-making. • AI in Industry, practical applications of interpretable AI methods in various safety-critical industrial sectors, such as transportation, finance, and retail. • Legal and Regulatory dissertations, discussing and providing analysis of the legal challenges associated with interpretable AI, including compliance with data protection laws for transparent and accountable AI systems.</figDesc><table /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Program Committee</head><p>We are very grateful to each of our program committee members, namely Romain Giot, Eliana Pastor, Roberto Pellungrini, Eleonora Poeta, Gianluigi Lopardo, and Gizem Gezici, for their hard reviewing work, in addition to that of the workshop chairs.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">Eleonora</forename><surname>Poeta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gabriele</forename><surname>Ciravegna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eliana</forename><surname>Pastor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tania</forename><surname>Cerquitelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elena</forename><surname>Baralis</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2312.12936</idno>
		<title level="m">Concept-based explainable artificial intelligence: A survey</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">European union regulations on algorithmic decision-making and a &quot;right to explanation</title>
		<author>
			<persName><forename type="first">Bryce</forename><surname>Goodman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Seth</forename><surname>Flaxman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI magazine</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="50" to="57" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Trustworthy artificial intelligence and the european union ai act: On the conflation of trustworthiness and acceptability of risk</title>
		<author>
			<persName><forename type="first">Johann</forename><surname>Laux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sandra</forename><surname>Wachter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Brent</forename><surname>Mittelstadt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Regulation &amp; Governance</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="3" to="32" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Sanity checks for saliency maps</title>
		<author>
			<persName><forename type="first">Julius</forename><surname>Adebayo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Justin</forename><surname>Gilmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Muelly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ian</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Moritz</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Been</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead</title>
		<author>
			<persName><forename type="first">Cynthia</forename><surname>Rudin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature machine intelligence</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="206" to="215" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV)</title>
		<author>
			<persName><forename type="first">Been</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martin</forename><surname>Wattenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Justin</forename><surname>Gilmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Carrie</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">James</forename><surname>Wexler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fernanda</forename><surname>Viegas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="2668" to="2677" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Concept bottleneck models</title>
		<author>
			<persName><forename type="first">Pang</forename><forename type="middle">Wei</forename><surname>Koh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thao</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yew</forename><forename type="middle">Siang</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stephen</forename><surname>Mussmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Emma</forename><surname>Pierson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Been</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Percy</forename><surname>Liang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="5338" to="5348" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">&quot;Help me help the AI&quot;: Understanding how explainability can support human-AI interaction</title>
		<author>
			<persName><forename type="first">Sunnie</forename><forename type="middle">S Y</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elizabeth</forename><forename type="middle">Anne</forename><surname>Watkins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Olga</forename><surname>Russakovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ruth</forename><surname>Fong</surname></persName>
		</author>
		<author>
			<persName><surname>Monroy-Hernández</surname><forename type="first">Andrés</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2023 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="17" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Extending logic explained networks to text classification</title>
		<author>
			<persName><forename type="first">Rishabh</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gabriele</forename><surname>Ciravegna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pietro</forename><surname>Barbiero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Giannini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Davide</forename><surname>Buffelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pietro</forename><surname>Lio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2022 Conference on Empirical Methods in Natural Language Processing</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="8838" to="8857" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Towards robust interpretability with self-explaining neural networks</title>
		<author>
			<persName><forename type="first">David</forename><surname>Alvarez-Melis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tommi</forename><surname>Jaakkola</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">31</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Logic explained networks</title>
		<author>
			<persName><forename type="first">Gabriele</forename><surname>Ciravegna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pietro</forename><surname>Barbiero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Giannini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Gori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pietro</forename><surname>Lió</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Maggini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefano</forename><surname>Melacci</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">314</biblScope>
			<biblScope unit="page">103822</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
