<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploring Commonalities in Explanation Frameworks: A Multi-Domain Survey Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Eduard Barbu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marharytha Domnich</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Raul Vicente</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nikos Sakkas</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>André Morim</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Apintech Ltd, POLIS-21 Group</institution>
          ,
          <addr-line>Limassol</addr-line>
          ,
          <country country="CY">Cyprus</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute Of Computer Science</institution>
          ,
          <addr-line>Tartu</addr-line>
          ,
          <country country="EE">Estonia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>LTPlabs, Avenida da Senhora da Hora</institution>
          ,
          <addr-line>459, Porto</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This study presents insights gathered from surveys and discussions with specialists in three domains, aiming to identify essential elements for an explanation framework that could be applied to these and possibly other use cases. The applications analyzed include a medical scenario (involving predictive ML), a retail use case (involving prescriptive ML), and an energy use case (also involving predictive ML). We interviewed professionals from each sector, transcribing their conversations for further analysis. Additionally, experts and non-experts in these fields filled out questionnaires designed to probe various dimensions of explanatory methods. The findings indicate a universal preference for sacrificing a degree of accuracy in favor of greater explainability. We also highlight the significance of feature importance and counterfactual explanations as critical components of such a framework. Our questionnaires are publicly available to facilitate the dissemination of knowledge in the field of XAI.</p>
      </abstract>
      <kwd-group>
        <kwd>machine learning</kwd>
        <kwd>expert surveys</kwd>
        <kwd>explainability framework</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction and Related Work</title>
      <p>This paper explores the role of AI in data-driven decision-making across sectors such as healthcare, retail, and energy, highlighting the challenges posed by the complexity and opacity of ML models. It focuses on improving the understandability of, and trust in, explanations through a study that gathers expert and layperson feedback on different explanation types. Although the study centers on developing a genetic programming (GP) tool to aid decision-making in these fields, the findings are relevant to any machine learning algorithm: identifying which explanation types users understand and trust enhances transparency across a wide range of ML models, providing applicable insights for AI applications.</p>
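      <p>To make the GP approach concrete, the sketch below evolves a human-readable formula with the open-source gplearn library. The library choice, hyperparameters, and synthetic data are illustrative assumptions, not the project’s actual tool or settings.</p>
      <preformat>
# Minimal sketch of an interpretable GP model; everything here is an
# illustrative assumption, not the project's actual tool or settings.
from gplearn.genetic import SymbolicRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=4, noise=0.1, random_state=0)

gp = SymbolicRegressor(
    population_size=500,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.01,  # penalize long formulas to keep them readable
    random_state=0,
)
gp.fit(X, y)
print(gp._program)  # the evolved formula, e.g. add(mul(X0, X1), X2)
      </preformat>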
      <p>
        Research in explainable AI (XAI) aligns AI system explanations with user expectations and
needs. Key studies, such as [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], highlight the identification of crucial stakeholders in AI explainability and
the development of a framework to meet their needs. Tools like the System Causability Scale
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and the System Usability Scale [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] have been introduced to assess ML explanation interfaces
and their effectiveness. Furthermore, a novel questionnaire leveraging psychometrics [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] aims
to reliably evaluate XAI method explanations, addressing explainability’s complex nature. This
body of work underpins our effort to craft AI tools that meet the diverse requirements of
professionals in fields such as medicine, retail, and energy, proposing a cross-disciplinary approach
to enhance user satisfaction and trust in AI applications. In their literature review, the authors
in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] define five primary goals for AI system interactions with end users: understandability,
trustworthiness, transparency, controllability, and fairness. They recommend designing XAI
systems to achieve these objectives and suggest guidelines for creating explanations focusing
on crucial system components. Additionally, they highlight the necessity for compromises in
AI explanations, underlining the absence of a one-size-fits-all solution.
      </p>
      <p>The paper is organized as follows: we begin with an overview of related work, followed by an introduction to the three distinct use cases and their unique characteristics. In Section 3, we elaborate on the methodology employed in conducting the surveys. The paper concludes with a discussion of our findings and presents conclusions, including recommendations for developing a GP tool to support practitioners across the three use cases. The developed questionnaires are publicly available to facilitate the dissemination of knowledge in the field of XAI.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The use cases</title>
      <p>
        <bold>Medical scenario.</bold> The medical scenario explores GP models for paraganglioma and diabetes, aiming to predict tumor progression and the presence of diabetes. The paraganglioma model seeks to guide physicians on treatment timing, enhancing shared decision-making, optimizing treatments, and reducing unnecessary interventions without substituting clinical judgment. For diabetes, the model uses a well-known dataset [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] to predict whether a patient has diabetes.
      </p>
      <p><bold>Retail use case.</bold> Grocery stores use Dynamic Timeslot Pricing to balance customer satisfaction with efficiency in home delivery. They offer flexible delivery times while keeping costs low. This AI-based approach sets fair and transparent prices by analyzing customer data and delivery logistics to estimate how much customers are willing to pay and the cost to serve. An algorithm then matches customer preferences with delivery efficiency to find the best times and prices.</p>
      <p>The method, which sets slot prices using a specific formula (Prescriptive Model), depends on
two support models—the Willingness to Pay (WTP) and Cost to Serve (CTS) models.</p>
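      <p>The pricing formula itself is not published here; the hypothetical sketch below merely illustrates how a prescriptive step could combine the two support models. The combination rule and the margin parameter are assumptions for illustration.</p>
      <preformat>
# Hypothetical sketch of the prescriptive pricing step: the WTP and CTS
# models are named in the paper, but this combination rule and the
# margin parameter are illustrative assumptions.
def price_slot(wtp_model, cts_model, features, margin=0.10):
    wtp = wtp_model.predict([features])[0]  # estimated willingness to pay
    cts = cts_model.predict([features])[0]  # estimated cost to serve
    floor = cts * (1.0 + margin)            # never price below cost plus margin
    return max(floor, min(wtp, 2.0 * floor))  # charge up to WTP, capped for fairness
      </preformat>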
      <p><bold>Energy use case.</bold> To recommend savings, the energy use case predicts household energy consumption by analyzing weather, historical usage, building dynamics, pricing, and indoor temperatures. It aims to offer users clear explanations to support informed decisions and to integrate these insights into business strategies for improved energy efficiency. Key considerations include weather conditions, past consumption patterns, building characteristics, pricing strategies for managing demand, and indoor temperature monitoring for energy conservation. The challenge is making these forecasts understandable and actionable, facilitating efficient energy use and decision-making in practical settings.</p>
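      <p>As an illustration of the inputs such a forecaster consumes, the toy sketch below arranges the five factors listed above into a feature table and fits a stand-in regressor; the column names, values, and model choice are assumptions, not the project’s pipeline.</p>
      <preformat>
# Toy sketch of the forecasting inputs described above; column names,
# values, and the gradient-boosting stand-in are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

features = pd.DataFrame({
    "outdoor_temp_c": [2.0, 5.5, 11.0, 16.5],    # weather conditions
    "kwh_prev_day": [31.0, 28.5, 22.0, 18.5],    # past consumption patterns
    "floor_area_m2": [90.0, 90.0, 90.0, 90.0],   # building characteristics
    "tariff_eur_kwh": [0.22, 0.22, 0.18, 0.18],  # pricing strategy in effect
    "indoor_temp_c": [20.5, 21.0, 21.5, 22.0],   # indoor temperature monitoring
})
kwh_next_day = [30.0, 27.0, 21.5, 19.0]          # target: next-day consumption

model = GradientBoostingRegressor(random_state=0).fit(features, kwh_next_day)
print(model.predict(features.head(1)))           # forecast for the first row
      </preformat>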
    </sec>
    <sec id="sec-3">
      <title>3. Survey methods</title>
      <p>This section outlines the survey methodologies applied to the three investigated use cases.
Our approach incorporated two methods: conducting interviews with domain experts and
distributing questionnaires to practitioners who may not have expert knowledge.</p>
      <p>Details of the surveyed experts are available at this link: Interviewed Experts Document.
Links to the questionnaires for each use case can be found in the following subsections. Three
medical doctors completed the medical use case questionnaires, while the retail questionnaires
were filled out by the interviewed expert and six additional respondents. For the energy case,
six respondents completed the questionnaires, four of whom were the experts interviewed.</p>
      <sec id="sec-3-1">
        <title>3.1. Survey methods for the Medical Scenario</title>
        <p>The questionnaire developed for the medical scenario focused on diabetes risk estimation and aimed to explore the types of AI model explanations doctors need. Key areas explored included the trade-off between accuracy and explainability, various presentation formats (such as symbolic regression graphs, genetic programming protocols, SHAP feature importance graphs, coefficient tables, and textual explanations), and their impact on understandability and decision-making effectiveness. Doctors were asked to rate each format’s interpretability and effectiveness on a 1 to 5 scale.</p>
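        <p>For readers unfamiliar with the SHAP format named above, the minimal sketch below produces a typical feature-importance bar chart; the random-forest stand-in and the scikit-learn diabetes data are assumptions, not the study’s GP models or the dataset of [<xref ref-type="bibr" rid="ref6">6</xref>].</p>
        <preformat>
# Minimal sketch of a SHAP feature-importance graph like those rated in
# the questionnaire; the model and data below are stand-in assumptions.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, plot_type="bar")  # mean |SHAP| per feature
        </preformat>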
        <p>Additionally, an interview focusing on the paraganglioma case collected insights on tumor
identification, statistical prediction models, genetic factors, training protocols for new doctors,
expectations from AI tools in managing paraganglioma, and the specific explanations needed
for comprehending this condition. The questionnaire and interview outcomes are intended
to guide the development of AI tools that effectively meet doctors’ informational needs and
preferences.</p>
        <p>The questionnaire for the medical scenario can be explored here: Diabetes Questionnaire</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Survey methods for the Retail Use Case</title>
        <p>The retail use case questionnaire was designed to delve into several key areas. First, it explored price breakdowns to gauge the significance of location and demand and how clear the explanations were to customers. Next, the questionnaire sought to identify which types of explanations customers preferred and how well they understood them. Lastly, there was a focus on summarization assessment to evaluate the need for summaries in conjunction with detailed pricing information. This part aimed to assess how these summaries affected clarity and influenced decision-making. Participants rated explanations on interpretability and effectiveness from 1 (lowest) to 5 (highest), aiming to understand the extent to which explanations helped in decision-making and their clarity to customers. For this use case, two questionnaires were devised for two categories of users.</p>
        <p>1. Decision-makers: they seek a comprehensive understanding of feature contributions to model predictions for system optimization. With their expert background, they prefer detailed, technical explanations to build trust and validate the model’s use based on its accuracy. Decision-Makers Questionnaire</p>
        <p>2. Customers: they favor straightforward, accessible explanations that still convey essential information, aiding in understanding the rationale behind received offers without overwhelming technical detail. Customers Questionnaire</p>
        <p>The interview, which was recorded as a video file, explored issues such as finding a balance between accuracy and explainability in e-commerce models, the incorporation of graphs and mathematical formulas into explanations, understanding customer behavior through the dynamic relationship between slot availability and pricing, and designing a dynamic dashboard to manage the interaction between operational efficiency and customer behavior effectively.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Survey methods for the Energy Use Case</title>
        <p>The questionnaire targets operational managers and customers, aiming to identify their preferred formats (tables, charts, interactive graphics, text) and types of explanations (causal, contrastive, counterfactual) for model predictions. Operational managers, the primary audience, are expected to provide detailed feedback based on their expertise. They focus on how model features affect predictions and on optimization opportunities, so accurate and detailed explanations are needed to earn their trust and endorsement of the model. In contrast, customers likely prefer simpler, straightforward explanations that clarify the rationale behind offers. The energy questionnaire delves into key areas such as the accuracy-explainability trade-off, the value of explanations in forecasting, the role of what-if scenarios in understanding model outcomes, and the specific needs of facility managers for detailed explanations and visualization tools such as SHAP graphs, highlighting preferences for explanation frequency and detail level.</p>
        <p>All interviewed experts and five additional energy experts have completed the questionnaire.
Energy Questionnaire</p>
        <p>The interviews explored the energy problem from various angles, each tailored to the
interviewee’s expertise. Discussions ranged from addressing market challenges in energy solutions
and the importance of clear explanations for end-users to exploring energy consumption
disaggregation and the role of genetic programming in enhancing analysis. Insights were also shared
on leveraging machine learning for water consumption monitoring to optimize resource
management and identify inefficiencies. Additionally, the design and usability of user interfaces for
energy management systems were examined, emphasizing the need for intuitive and engaging
interfaces to manage energy consumption better.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. Medical scenario</title>
        <p>Feature importance graphs were most favored, followed by textual explanations and rule-based protocols. Symbolic regression graphs and coefficient tables were least preferred due to concerns about understandability.</p>
        <p>
          Interview insights highlight the novelty of our paraganglioma models, for which no existing benchmarks are available to measure accuracy, the critical role of genetic data in personalized medicine, and the need for tools to monitor tumor growth. The value doctors place on model predictions for patient communication emphasizes the importance of accurate, explainable models to foster trust and informed decisions. Initial tests on GP models for paraganglioma are documented in [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], providing detailed outcomes.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Retail use case</title>
        <p>The decision-makers seek explanations across various dimensions: customer behavior, transportation costs, and strategies for maximizing profits. The questionnaire findings are summarized in Figure 2.</p>
        <p>In feedback from decision-makers on AI system explanations, there’s an openness to
sacrificing a portion of model performance for enhanced explainability, with preferences for detailed
yet intuitive insights into model workings. This encompasses a broad interest in customer
behavior, cost analysis, and profit strategies, highlighting a desire for interactive tools and
visualizations that facilitate deeper understanding and strategic adjustments. There’s a notable
emphasis on practical application, with decision-makers valuing features like counterfactual
explanations and the ability to interpret and act upon complex information, all aimed at optimizing
operational efficiency and customer engagement.</p>
        <p>The interview highlighted a preference for explainability over accuracy, with caution
advised due to limited machine learning expertise. Simple visual explanations and mathematical
formulas are preferred to avoid complexity. Graphical dashboards are recommended for
assessing operational efficiency and customer behavior, enhancing interpretability and interaction.
Counterfactual explanations are valued for demonstrating the impact of decisions such as new
scheduling slots. Developing models that identify customer characteristics and behaviors by
region is essential for deeper business insights.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Energy use case</title>
        <p>The insights from operational and facility managers are summarized in Figure 3.</p>
        <p>Operational managers favor a balance between accuracy and transparency, adjusting the trade-off based on the audience. They prefer visual and simple mathematical explanations to suit stakeholders of various technical levels. Graphical dashboards are effective for insights into efficiency and customer behavior, with counterfactual explanations providing useful scenario analysis. Strategic analyses, such as regional behavior modeling and what-if scenarios, highlight the value of feature importance graphs and counterfactuals in delivering clear, actionable insights for decision-making and management.</p>
        <p>
          Insights from the interviews demonstrate a preference for explanatory forecasting models
over basic ones, with methods applicable across sectors like gas and energy. Ease of use and
interactive elements are advised for the graphical interface, alongside a smartphone component
for energy applications to enable notifications. For detailed analyses of GP models in energy,
see [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. General guidelines</title>
        <p>Table 1 summarizes the overarching guidelines derived from the survey findings.</p>
        <p>Drawing from these insights, the design of the explanatory tool should incorporate two
essential modules: a Counterfactual Module, which calculates the minimal changes required
to shift the model’s decision towards a desired outcome, thereby enabling "What-if" scenarios
based on user queries, and a Global Importance Module, which provides visualization of the
significant feature contributions to the model’s predictions, in line with findings from the user
studies. Both modules should be integrated within the tool, ensuring that the inputs, outputs,
and connections between modules are well-defined.</p>
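        <p>As a concrete reading of the Counterfactual Module’s contract, the sketch below runs a greedy single-feature search for the smallest change that flips a classifier’s decision. The paper specifies the module’s purpose, not its algorithm, so this search strategy is an illustrative assumption; dedicated counterfactual methods typically search over several features at once.</p>
        <preformat>
# Illustrative sketch of the Counterfactual Module's core contract: find
# the smallest single-feature change that flips the model's decision.
# This greedy scan is an assumption, not the paper's algorithm.
import numpy as np

def minimal_counterfactual(model, x, desired_class, steps=50, max_shift=3.0):
    """Return (cost, feature_index, new_value) of the cheapest flip, or None."""
    # Try shifts in order of increasing magnitude so the first flip per
    # feature is also the cheapest flip for that feature.
    shifts = sorted(np.linspace(-max_shift, max_shift, steps), key=abs)
    best = None
    for i in range(len(x)):
        for step in shifts:
            candidate = np.array(x, dtype=float)
            candidate[i] += step
            if model.predict([candidate])[0] == desired_class:
                if best is None or best[0] > abs(step):
                    best = (abs(step), i, candidate[i])
                break
    return best
        </preformat>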
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>This study identifies foundational components for an XAI framework intended for various
applications through comprehensive questionnaires and interviews with domain experts in
three distinct use cases. The envisioned XAI tool incorporates a Counterfactual Module to
facilitate "What-if" scenarios, allowing users to see how minimal changes could lead to desired
outcomes. Additionally, a Global Importance Module is designed to visually represent the most
influential features in model predictions, resonating with the XAI literature emphasizing the
critical role of feature importance and counterfactual explanations. While aiming for shared
applicability, the framework also acknowledges the unique requirements of each specific case,
although the detailed exploration of these unique case aspects was beyond this paper’s scope.
This approach informs the ongoing development of the AI tool, leveraging insights gathered from user studies to ensure the tool’s effectiveness across different domains. Our tool is now ready for evaluation by experts across the three fields, and we will integrate their feedback into an updated version. Regarding future research, the interest in customizable, user-specific explanations in the online retail and energy sectors points to a growing trend towards integrating NLP-based interactivity into explanations, an area we are beginning to explore.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This research was conducted under the Transparent, Reliable, and Unbiased Smart Tool for AI
(Trust-AI) project, with Grant Agreement ID: 952060, funded by the EU Commission.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Langer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Oster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Speith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hermanns</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kästner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sesing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Baum</surname>
          </string-name>
          ,
          <article-title>What do we want from explainable artificial intelligence (xai)? - a stakeholder perspective on xai and a conceptual model guiding interdisciplinary xai research</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>296</volume>
          (
          <year>2021</year>
          )
          <article-title>103473</article-title>
          . URL: https://www.sciencedirect.com/science/article/pii/S0004370221000242. doi:https://doi.org/10.1016/j.artint.
          <year>2021</year>
          .
          <volume>103473</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Carrington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <article-title>Measuring the quality of explanations: The system causability scale (SCS). comparing human and machine explanations</article-title>
          , CoRR abs/
          <year>1912</year>
          .09024 (
          <year>2019</year>
          ). URL: http://arxiv.org/abs/
          <year>1912</year>
          .09024. arXiv:
          <year>1912</year>
          .09024.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Dragoni</surname>
          </string-name>
          , I. Donadello,
          <string-name>
            <given-names>C.</given-names>
            <surname>Eccher</surname>
          </string-name>
          ,
          <article-title>Explainable ai meets persuasiveness: Translating reasoning results into behavioral change advice</article-title>
          ,
          <source>Artificial Intelligence in Medicine</source>
          <volume>105</volume>
          (
          <year>2020</year>
          )
          <article-title>101840</article-title>
          . URL: https://www.sciencedirect.com/science/article/pii/S0933365719310140. doi:https://doi.org/10.1016/j.artmed.
          <year>2020</year>
          .
          <volume>101840</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Vilone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <article-title>Development of a human-centred psychometric test for the evaluation of explanations produced by xai methods</article-title>
          , in: L.
          <string-name>
            <surname>Longo</surname>
          </string-name>
          (Ed.),
          <source>Explainable Artificial Intelligence</source>
          , Springer Nature Switzerland, Cham,
          <year>2023</year>
          , pp.
          <fpage>205</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Laato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tiainen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Najmul</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mäntymäki</surname>
          </string-name>
          ,
          <article-title>How to explain ai systems to end users: a systematic literature review and research agenda</article-title>
          ,
          <source>INTERNET RESEARCH 32</source>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>31</lpage>
          . doi:
          <volume>10</volume>
          .1108/INTR-08-2021-0600, funding Information:
          <article-title>The initial literature search upon which this article develops was done for the following Master's thesis published</article-title>
          at the University of Turku: Tiainen,
          <string-name>
            <surname>M.</surname>
          </string-name>
          , (
          <year>2021</year>
          ),
          <article-title>To whom to explain and what?: Systematic literature review on empirical studies on Explainable Artificial Intelligence (XAI</article-title>
          ), available at: https://www.utupub.fi/handle/10024/151554, accessed
          <issue>April 2</issue>
          ,
          <year>2022</year>
          . Publisher Copyright: ©
          <year>2021</year>
          ,
          <string-name>
            <given-names>Samuli</given-names>
            <surname>Laato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Miika</given-names>
            <surname>Tiainen</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.K.M. Najmul Islam</surname>
            and
            <given-names>Matti</given-names>
          </string-name>
          <string-name>
            <surname>Mäntymäki</surname>
          </string-name>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Everhart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. C.</given-names>
            <surname>Dickson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. C.</given-names>
            <surname>Knowler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Johannes</surname>
          </string-name>
          ,
          <article-title>Using the adap learning algorithm to forecast the onset of diabetes mellitus</article-title>
          ,
          <source>in: Proceedings of the Annual Symposium on Computer Application in Medical Care</source>
          ,
          <year>1988</year>
          , pp.
          <fpage>261</fpage>
          -
          <lpage>265</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>E. M. C.</given-names>
            <surname>Sijben</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Jansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. A. N.</given-names>
            <surname>Bosman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Alderliesten</surname>
          </string-name>
          ,
          <article-title>Function class learning with genetic programming: Towards explainable meta learning for tumor growth functionals</article-title>
          ,
          <year>2024</year>
          . arXiv:
          <volume>2402</volume>
          .
          <fpage>12510</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Sakkas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yfanti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sakkas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Chaniotakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Daskalakis</surname>
          </string-name>
          , E. Barbu,
          <string-name>
            <given-names>M.</given-names>
            <surname>Domnich</surname>
          </string-name>
          ,
          <article-title>Explainable approaches for forecasting building electricity consumption</article-title>
          ,
          <source>Energies</source>
          <volume>16</volume>
          (
          <year>2023</year>
          ). URL: https://www.mdpi.com/1996-1073/16/20/7210. doi:
          <volume>10</volume>
          .3390/en16207210.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Sakkas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yfanti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Daskalakis</surname>
          </string-name>
          , E. Barbu,
          <string-name>
            <given-names>M.</given-names>
            <surname>Domnich</surname>
          </string-name>
          ,
          <article-title>Interpretable forecasting of energy demand in the residential sector</article-title>
          ,
          <source>Energies</source>
          <volume>14</volume>
          (
          <year>2021</year>
          ). URL: https://www.mdpi.com/ 1996-1073/14/20/6568. doi:
          <volume>10</volume>
          .3390/en14206568.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>