<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Exploring Commonalities in Explanation Frameworks: A Multi-Domain Survey Analysis</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Eduard</forename><surname>Barbu</surname></persName>
							<email>eduard.barbu@ut.ee</email>
							<affiliation key="aff0">
								<orgName type="department">Institute Of Computer Science</orgName>
								<address>
									<settlement>Tartu</settlement>
									<country key="EE">Estonia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marharytha</forename><surname>Domnich</surname></persName>
							<email>marharyta.domnich@ut.ee</email>
							<affiliation key="aff0">
								<orgName type="department">Institute Of Computer Science</orgName>
								<address>
									<settlement>Tartu</settlement>
									<country key="EE">Estonia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Raul</forename><surname>Vicente</surname></persName>
							<email>raulvicente@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Institute Of Computer Science</orgName>
								<address>
									<settlement>Tartu</settlement>
									<country key="EE">Estonia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nikos</forename><surname>Sakkas</surname></persName>
							<email>sakkas@apintech.com</email>
							<affiliation key="aff1">
								<orgName type="department">POLIS-21 Group</orgName>
								<orgName type="institution">Apintech Ltd</orgName>
								<address>
									<settlement>Limassol</settlement>
									<country key="CY">Cyprus</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">André</forename><surname>Morim</surname></persName>
							<email>andre.morim@ltplabs.com</email>
							<affiliation key="aff2">
								<orgName type="institution">LTPlabs</orgName>
								<address>
									<addrLine>Avenida da Senhora da Hora</addrLine>
									<postCode>459</postCode>
									<settlement>Porto</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Exploring Commonalities in Explanation Frameworks: A Multi-Domain Survey Analysis</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">0E2BFC181CECB7FD3BDE9AD261A7B691</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>machine learning</term>
					<term>expert surveys</term>
					<term>explainability framework</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This study presents insights gathered from surveys and discussions with specialists in three domains, aiming to identify essential elements of an explanation framework that could be applied to these and possibly other use cases. The applications analyzed include a medical scenario (involving predictive ML), a retail use case (involving prescriptive ML), and an energy use case (also involving predictive ML). We interviewed professionals from each sector, transcribing their conversations for further analysis. In addition, experts and non-experts in these fields completed questionnaires designed to probe various dimensions of explanatory methods. The findings indicate a universal preference for sacrificing a degree of accuracy in favor of greater explainability. We also highlight the significance of feature importance and counterfactual explanations as critical components of such a framework. Our questionnaires are publicly available to facilitate the dissemination of knowledge in the field of XAI.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction and Related Work</head><p>This paper explores the role of AI in data-driven decision-making across sectors such as healthcare, retail, and energy, highlighting the challenges posed by the complexity and opacity of ML models. It focuses on improving the understandability of, and trust in, explanations through a study gathering feedback from experts and laypeople on different explanation types. Although the study centers on developing a genetic programming (GP) tool to aid decision-making in these fields, the findings are relevant to any machine learning algorithm, providing insights that enhance user trust and transparency across a variety of ML models.</p><p>Research in explainable AI (XAI) seeks to align AI system explanations with user expectations and needs. Key studies, such as <ref type="bibr" target="#b0">[1]</ref>, highlight the identification of crucial stakeholders in AI explainability and the development of a framework to meet their needs. Tools such as the System Causability Scale <ref type="bibr" target="#b1">[2]</ref> and the System Usability Scale <ref type="bibr" target="#b2">[3]</ref> have been introduced to assess ML explanation interfaces and their effectiveness. Furthermore, a novel questionnaire leveraging psychometrics <ref type="bibr" target="#b3">[4]</ref> aims to reliably evaluate the explanations produced by XAI methods, addressing the complex nature of explainability. This body of work underpins our effort to craft AI tools that meet the diverse requirements of professionals in fields such as medicine, retail, and energy, proposing a cross-disciplinary approach to enhance user satisfaction and trust in AI applications. In their literature review, the authors of <ref type="bibr" target="#b4">[5]</ref> define five primary goals for AI system interactions with end users: understandability, trustworthiness, transparency, controllability, and fairness. They recommend designing XAI systems to achieve these objectives and suggest guidelines for creating explanations that focus on crucial system components. They also highlight the necessity of compromises in AI explanations, underlining the absence of a one-size-fits-all solution.</p><p>The paper is organized as follows: we begin with an overview of related work, followed by an introduction to the three distinct use cases and their unique characteristics. In Section 3, we elaborate on the methodology employed in conducting the surveys. The paper concludes with a discussion of our findings and presents conclusions, including recommendations for developing a GP tool to support practitioners across the three use cases. The developed questionnaires are publicly available to facilitate the dissemination of knowledge in the field of XAI.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">The use cases</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Medical Scenario</head><p>The medical scenario explores GP models for paraganglioma and diabetes, aiming to predict tumor progression and the presence of diabetes. The paraganglioma model seeks to guide physicians on treatment timing, enhancing shared decision-making, optimizing treatments, and reducing unnecessary interventions without substituting clinical judgment. For diabetes, the model uses a well-known dataset <ref type="bibr" target="#b5">[6]</ref> to predict whether a patient has diabetes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Retail Use Case</head><p>Grocery stores use Dynamic Timeslot Pricing to balance customer satisfaction with efficiency in home delivery, offering flexible delivery times while keeping costs low. This AI-based approach sets fair and transparent prices by analyzing customer data and delivery logistics to estimate how much customers are willing to pay and how much each delivery costs to serve. An algorithm then matches customer preferences with delivery efficiency to find the best times and prices.</p><p>The method, which sets slot prices using a specific formula (the Prescriptive Model), depends on two support models: the Willingness to Pay (WTP) model and the Cost to Serve (CTS) model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Energy Use Case</head><p>To recommend savings, the energy use case predicts household energy consumption by analyzing weather conditions, past consumption patterns, building characteristics, pricing strategies for managing demand, and indoor temperatures monitored for energy conservation. It aims to offer users clear explanations to support informed decisions and to integrate these insights into business strategies for improved energy efficiency. The challenge is making these forecasts understandable and actionable, facilitating efficient energy use and decision-making in practical settings.</p></div>
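The Prescriptive Model's actual pricing formula is not disclosed above, so the following is a purely illustrative sketch of how the two support models could interact: the function names and the logistic WTP form are assumptions, showing a candidate slot price chosen to maximize expected margin, i.e. the acceptance probability from the WTP model times the price minus the CTS estimate.

```python
import math

def purchase_probability(price, wtp_mean, wtp_scale=2.0):
    """Logistic WTP model (an assumed form): probability that a customer
    accepts a delivery slot at `price`, given their estimated mean
    willingness to pay."""
    return 1.0 / (1.0 + math.exp((price - wtp_mean) / wtp_scale))

def best_slot_price(wtp_mean, cts, candidate_prices):
    """Pick the candidate price maximizing expected margin:
    P(accept | price) * (price - cost to serve)."""
    return max(candidate_prices,
               key=lambda p: purchase_probability(p, wtp_mean) * (p - cts))
```

For a customer with an estimated mean WTP of 7.0 and a slot costing 3.0 to serve, scanning candidate prices from 4.0 to 9.0 selects 7.0, illustrating how the algorithm trades acceptance probability against margin.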
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Survey methods</head><p>This section outlines the survey methodologies applied to the three investigated use cases. Our approach incorporated two methods: conducting interviews with domain experts and distributing questionnaires to practitioners who may not have expert knowledge.</p><p>Details of the surveyed experts are available at this link: Interviewed Experts Document. Links to the questionnaires for each use case can be found in the following subsections. Three medical doctors completed the medical use case questionnaires, while the retail questionnaires were filled out by the interviewed expert and six additional respondents. For the energy case, six respondents completed the questionnaires, four of whom were the experts interviewed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Survey methods for the Medical Scenario</head><p>The questionnaire developed for the medical scenario focused on diabetes risk estimation and aimed to explore the types of AI model explanations doctors need. Key areas explored included the trade-off between accuracy and explainability, various presentation formats (such as symbolic regression graphs, genetic programming protocols, SHAP feature importance graphs, coefficient tables, and textual explanations), and their impact on understandability and decision-making effectiveness. Doctors were asked to rate each format's interpretability and effectiveness on a 1 to 5 scale.</p><p>Additionally, an interview focusing on the paraganglioma case collected insights on tumor identification, statistical prediction models, genetic factors, training protocols for new doctors, expectations of AI tools in managing paraganglioma, and the specific explanations needed for comprehending this condition. The questionnaire and interview outcomes are intended to guide the development of AI tools that effectively meet doctors' informational needs and preferences.</p><p>The questionnaire for the medical scenario can be explored here: Diabetes Questionnaire</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Survey methods for the Retail Use Case</head><p>The retail use case questionnaire was designed to delve into several key areas. First, it explored price breakdowns to gauge the significance of location and demand and how clear the explanations were to customers. Next, the questionnaire sought to identify which types of explanations customers preferred and how well they understood them. Lastly, there was a focus on summarization assessment to evaluate the need for summaries in conjunction with detailed pricing information; this part aimed to assess how summaries affected clarity and influenced decision-making. Participants rated explanations on interpretability and effectiveness from 1 (least) to 5 (highest), with the aim of understanding the extent to which explanations helped in decision-making and how clear they were to customers. For this use case, two questionnaires were devised for two categories of users.</p><p>1. Decision-makers: seek a comprehensive understanding of feature contributions to model predictions for system optimization. With their expert background, they prefer detailed, technical explanations to build trust and validate the model's use based on its accuracy. Decision-Makers Questionnaire</p><p>2. Customers: favor straightforward, accessible explanations that still convey essential information, aiding in understanding the rationale behind received offers without overwhelming technical detail. Customers Questionnaire</p><p>The interview, which was recorded as a video file, explored issues such as finding a balance between accuracy and explainability in e-commerce models, the incorporation of graphs and mathematical formulas into explanations, understanding customer behavior through the dynamic relationship between slot availability and pricing, and designing a dynamic dashboard to manage the interaction between operational efficiency and customer behavior effectively.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Survey methods for the Energy Use Case</head><p>The questionnaire targets operational managers and customers, aiming to identify their preferred formats (tables, charts, interactive graphics, text) and types of explanations (causal, contrastive, counterfactual) for model predictions. Operational managers, the primary audience, are expected to provide detailed feedback grounded in their expertise, focusing on how model features affect predictions and on optimization opportunities; accurate and detailed explanations serve to strengthen their trust in and endorsement of the model. In contrast, customers likely prefer simpler, straightforward explanations that clarify the rationale behind offers. The energy questionnaire delves into key areas such as the accuracy-explainability trade-off, the value of explanations in forecasting, the role of what-if scenarios in understanding model outcomes, and the specific needs of facility managers for detailed explanations and visualization tools such as SHAP graphs, highlighting preferences for explanation frequency and level of detail.</p><p>All interviewed experts and five additional energy experts completed the questionnaire. Energy Questionnaire</p><p>The interviews explored the energy problem from various angles, each tailored to the interviewee's expertise. Discussions ranged from addressing market challenges in energy solutions and the importance of clear explanations for end-users to exploring energy consumption disaggregation and the role of genetic programming in enhancing analysis. Insights were also shared on leveraging machine learning for water consumption monitoring to optimize resource management and identify inefficiencies. Additionally, the design and usability of user interfaces for energy management systems were examined, emphasizing the need for intuitive and engaging interfaces to better manage energy consumption.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Medical scenario</head><p>Figure <ref type="figure" target="#fig_0">1</ref> summarizes key findings from the diabetes questionnaire.</p><p>Doctors prefer AI explanations that trade a slight decrease in accuracy for better clarity, find complex graphs challenging, and favor clear, intuitive formats such as protocols and SHAP graphs. Simplification and clarity were highlighted as essential for effectively conveying model logic, with counterfactual explanations particularly valued for their potential to improve patient understanding and therapy compliance. Feature importance graphs were most favored, followed by textual explanations and rule-based protocols. Graphs and coefficient tables were least preferred due to concerns about understandability.</p><p>Interview insights highlight the novelty of our paraganglioma models, for which no benchmarks exist against which to measure accuracy, the critical role of genetic data in personalized medicine, and the need for tools to monitor tumor growth. The value doctors place on model predictions for patient communication underlines the importance of accurate, explainable models for fostering trust and informed decisions. Initial tests of GP models for paraganglioma are documented in <ref type="bibr" target="#b6">[7]</ref>, which provides detailed outcomes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Retail use case</head><p>The decision-makers seek explanations across various dimensions: customer behavior, transportation costs, and strategies for maximizing profits. The questionnaire findings are summarized in Figure <ref type="figure" target="#fig_1">2</ref>. In their feedback on AI system explanations, decision-makers show an openness to sacrificing a portion of model performance for enhanced explainability, with preferences for detailed yet intuitive insights into model workings. This encompasses a broad interest in customer behavior, cost analysis, and profit strategies, highlighting a desire for interactive tools and visualizations that facilitate deeper understanding and strategic adjustments. There is a notable emphasis on practical application, with decision-makers valuing features such as counterfactual explanations and the ability to interpret and act upon complex information, all aimed at optimizing operational efficiency and customer engagement.</p><p>The interview highlighted a preference for explainability over accuracy, with caution advised due to limited machine learning expertise among users. Simple visual explanations and mathematical formulas are preferred to avoid complexity. Graphical dashboards are recommended for assessing operational efficiency and customer behavior, enhancing interpretability and interaction. Counterfactual explanations are valued for demonstrating the impact of decisions such as introducing new scheduling slots. Developing models that identify customer characteristics and behaviors by region is essential for deeper business insights.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Energy use case</head><p>The insights from operational and facility managers are summarized in Figure <ref type="figure" target="#fig_2">3</ref>. Operational managers favor a balance between accuracy and transparency, adjusting the trade-off based on the audience. They prefer visual and simple mathematical explanations suited to stakeholders' varying technical levels. Graphical dashboards are effective for insights into efficiency and customer behavior, with counterfactual explanations providing useful scenario analysis. Strategic analyses, such as regional behavior modeling and what-if scenarios, highlight the value of feature importance graphs and counterfactuals in delivering clear, actionable insights for decision-making and management.</p><p>Insights from the interviews demonstrate a preference for explanatory forecasting models over basic ones, with methods applicable across sectors such as gas and energy. Ease of use and interactive elements are advised for the graphical interface, alongside a smartphone component for energy applications to enable notifications. For detailed analyses of GP models in energy, see <ref type="bibr" target="#b7">[8]</ref> and <ref type="bibr" target="#b8">[9]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">General guidelines</head><p>Table 1 summarizes the overarching guidelines derived from the survey findings. Drawing on these insights, the design of the explanatory tool should incorporate two essential modules: a Counterfactual Module, which calculates the minimal changes required to shift the model's decision towards a desired outcome, thereby enabling "What-if" scenarios based on user queries, and a Global Importance Module, which visualizes the most significant feature contributions to the model's predictions, in line with the findings of the user studies. Both modules should be integrated within the tool, with well-defined inputs, outputs, and connections between them.</p></div>
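As a minimal sketch of how the two modules could be realized (assuming a model exposing a single-row predict method; the greedy one-feature counterfactual search and the permutation-based importance shown here are illustrative strategies, not the tool's actual implementation):

```python
import numpy as np

class CounterfactualModule:
    """Illustrative greedy search: nudge one feature at a time until the
    model's decision flips, keeping the smallest change found."""

    def __init__(self, model, step=1.0, max_steps=100):
        self.model, self.step, self.max_steps = model, step, max_steps

    def explain(self, x):
        original = self.model.predict(x)
        best = None  # (feature index, new value, |change|)
        for j in range(len(x)):
            for sign in (1.0, -1.0):
                cand = x.astype(float).copy()
                for _ in range(self.max_steps):
                    cand[j] += sign * self.step
                    if self.model.predict(cand) != original:
                        delta = abs(cand[j] - x[j])
                        if best is None or delta < best[2]:
                            best = (j, cand[j], delta)
                        break
        return best


class GlobalImportanceModule:
    """Permutation importance: the drop in accuracy when one feature's
    column is shuffled across the dataset."""

    def __init__(self, model):
        self.model = model

    def importances(self, X, y, seed=0):
        rng = np.random.default_rng(seed)
        acc = lambda M: np.mean([self.model.predict(r) == t for r, t in zip(M, y)])
        base = acc(X)
        scores = []
        for j in range(X.shape[1]):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffles the column in place
            scores.append(base - acc(Xp))
        return scores


class ThresholdModel:
    """Toy stand-in for a GP model: flags risk when the first feature
    (e.g. glucose) exceeds a fixed threshold."""

    def predict(self, x):
        return int(x[0] > 126.0)
```

For a patient vector [120.0, 30.0], the counterfactual search reports that raising feature 0 to 127.0 flips the toy model's prediction, which is exactly the kind of "What-if" answer the surveyed users favored; the importance scores confirm that only feature 0 drives the decision.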
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>Through comprehensive questionnaires and interviews with domain experts in three distinct use cases, this study identifies foundational components for an XAI framework intended for various applications. The envisioned XAI tool incorporates a Counterfactual Module to facilitate "What-if" scenarios, allowing users to see how minimal changes could lead to desired outcomes. Additionally, a Global Importance Module is designed to visually represent the most influential features in model predictions, resonating with the XAI literature that emphasizes the critical role of feature importance and counterfactual explanations. While aiming for broad applicability, the framework also acknowledges the unique requirements of each specific case, although a detailed exploration of these case-specific aspects was beyond this paper's scope. This approach informs the ongoing development of the AI tool, leveraging insights gathered from the user studies to ensure the tool's effectiveness across different domains. Our tool is now ready for evaluation by experts across the three fields, and we will integrate their feedback into an updated version. For future research, the interest shown in the online retail and energy sectors in customizable, user-specific explanations points to a growing trend toward integrating NLP interactivity into explanations, an area we are beginning to explore.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Insights into doctors' preferences for the medical scenario derived from the questionnaire.</figDesc><graphic coords="5,130.96,84.19,333.36,119.66" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Insights into online retail decision-makers preferences derived from the questionnaire.</figDesc><graphic coords="5,130.96,461.94,333.36,130.34" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: The insights from the energy questionnaire from operational and facility managers</figDesc><graphic coords="6,130.96,308.45,333.35,142.99" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Guidelines and Insights from User Studies on Explanatory Tool's Architecture</figDesc><table><row><cell>Domain</cell><cell>Insight</cell><cell>Recommendation</cell></row><row><cell>All</cell><cell>Preference for explainability over perfect</cell><cell>Balance explainability and accuracy,</cell></row><row><cell></cell><cell>accuracy, feature importance graphs as</cell><cell>utilize feature importance graphs,</cell></row><row><cell></cell><cell>effective communication tools, and value</cell><cell>and supplement counterfactuals for</cell></row><row><cell></cell><cell>of counterfactual explanations.</cell><cell>comprehensive understanding.</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This research was conducted under the Transparent, Reliable, and Unbiased Smart Tool for AI (Trust-AI) project, with Grant Agreement ID: 952060, funded by the EU Commission.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">What do we want from explainable artificial intelligence (xai)? -a stakeholder perspective on xai and a conceptual model guiding interdisciplinary xai research</title>
		<author>
			<persName><forename type="first">M</forename><surname>Langer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Oster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Speith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hermanns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kästner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sesing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Baum</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.artint.2021.103473</idno>
		<ptr target="https://doi.org/10.1016/j.artint.2021.103473" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">296</biblScope>
			<biblScope unit="page">103473</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Holzinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Carrington</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
		<idno>CoRR abs/1912.09024</idno>
		<ptr target="http://arxiv.org/abs/1912.09024" />
		<title level="m">Measuring the quality of explanations: The system causability scale (SCS). comparing human and machine explanations</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Explainable ai meets persuasiveness: Translating reasoning results into behavioral change advice</title>
		<author>
			<persName><forename type="first">M</forename><surname>Dragoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Donadello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Eccher</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.artmed.2020.101840</idno>
		<ptr target="https://doi.org/10.1016/j.artmed.2020.101840" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence in Medicine</title>
		<imprint>
			<biblScope unit="volume">105</biblScope>
			<biblScope unit="page">101840</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Development of a human-centred psychometric test for the evaluation of explanations produced by xai methods</title>
		<author>
			<persName><forename type="first">G</forename><surname>Vilone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Explainable Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</editor>
		<meeting><address><addrLine>Switzerland, Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="205" to="232" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">How to explain ai systems to end users: a systematic literature review and research agenda</title>
		<author>
			<persName><forename type="first">S</forename><surname>Laato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tiainen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Islam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mäntymäki</surname></persName>
		</author>
		<idno type="DOI">10.1108/INTR-08-2021-0600</idno>
		<ptr target="https://www.utupub.fi/handle/10024/151554" />
	</analytic>
	<monogr>
		<title level="m">To whom to explain and what?: Systematic literature review on empirical studies on Explainable Artificial Intelligence (XAI)</title>
				<editor>
			<persName><forename type="first">Samuli</forename><surname>Laato</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">K M</forename><surname>Miika Tiainen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Najmul</forename><surname>Islam</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Matti</forename><surname>Mäntymäki</surname></persName>
		</editor>
		<imprint>
			<publisher>Publisher Copyright</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="1" to="31" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Using the adap learning algorithm to forecast the onset of diabetes mellitus</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Everhart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">C</forename><surname>Dickson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">C</forename><surname>Knowler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Johannes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Annual Symposium on Computer Application in Medical Care</title>
				<meeting>the Annual Symposium on Computer Application in Medical Care</meeting>
		<imprint>
			<date type="published" when="1988">1988</date>
			<biblScope unit="page" from="261" to="265" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Function class learning with genetic programming: Towards explainable meta learning for tumor growth functionals</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M C</forename><surname>Sijben</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Jansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A N</forename><surname>Bosman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Alderliesten</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2402.12510</idno>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Explainable approaches for forecasting building electricity consumption</title>
		<author>
			<persName><forename type="first">N</forename><surname>Sakkas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yfanti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sakkas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chaniotakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Daskalakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Barbu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Domnich</surname></persName>
		</author>
		<idno type="DOI">10.3390/en16207210</idno>
		<ptr target="https://www.mdpi.com/1996-1073/16/20/7210" />
	</analytic>
	<monogr>
		<title level="j">Energies</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Interpretable forecasting of energy demand in the residential sector</title>
		<author>
			<persName><forename type="first">N</forename><surname>Sakkas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yfanti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Daskalakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Barbu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Domnich</surname></persName>
		</author>
		<idno type="DOI">10.3390/en14206568</idno>
		<ptr target="https://www.mdpi.com/1996-1073/14/20/6568" />
	</analytic>
	<monogr>
		<title level="j">Energies</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
