<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>XAI.it 2024 - Preface to the Fifth Italian Workshop on eXplainable Artificial Intelligence</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marco Polignano</string-name>
          <email>marco.polignano@uniba.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cataldo Musto</string-name>
          <email>cataldo.musto@uniba.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberto Pellungrini</string-name>
          <email>roberto.pellungrini@sns.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Erasmo Purificato</string-name>
          <email>erasmo.purificato@acm.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanni Semeraro</string-name>
          <email>giovanni.semeraro@uniba.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mattia Setzu</string-name>
          <email>mattia.setzu@unipi.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Joint Research Centre, European Commission</institution>
          ,
          <addr-line>Ispra</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Scuola Normale Superiore</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Bari Aldo Moro</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>As Artificial Intelligence (AI) systems become integral to daily life, ensuring transparency and interpretability in their decision-making processes is critical. The General Data Protection Regulation (GDPR) has underscored users' right to understand how AI-driven systems make decisions that affect them. However, the pursuit of model performance often compromises explainability, creating a tension between achieving high accuracy and maintaining transparency. Core research questions focus on reconciling the high performance of LLMs and other AI models with interpretability requirements. Emerging research focuses on designing transparent systems, understanding the effects of opaque models on users, developing explanation strategies, and enhancing user control over AI behaviors. The workshop on eXplainable AI (XAI.it) provides a platform for addressing these challenges, fostering collaboration within the XAI community to explore novel solutions and share insights across this evolving, multifaceted field.</p>
      </abstract>
      <kwd-group>
        <kwd>eXplainable AI</kwd>
        <kwd>Biases</kwd>
        <kwd>Trustworthiness</kwd>
        <kwd>Large Language Models</kwd>
        <kwd>LLMs</kwd>
        <kwd>XAI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Motivations and Scientific Relevance</title>
      <p>We are experiencing a new "AI summer" as artificial intelligence algorithms become widely adopted
across a diverse range of fields, from media and entertainment to healthcare, finance, and legal
decision-making. While early AI systems were relatively straightforward and interpretable, the rise of complex
models, particularly those based on Deep Neural Networks (DNNs), has led to powerful yet opaque
methodologies. These models’ effectiveness is offset by their complexity, characterized by deep layers
and a vast number of parameters, which often makes them difficult to understand or scrutinize. As
intelligent systems are increasingly used in sensitive domains, the adoption of black-box models without
clear interpretability mechanisms is both impractical and potentially risky.</p>
      <p>The advent of Large Language Models (LLMs) has further amplified the challenges of transparency
and interpretability. LLMs are highly effective at language-related tasks, such as text generation,
summarization, and translation, yet their internal decision-making processes are often difficult
to interpret due to the scale and complexity of their architectures. The opaque nature of LLMs can limit
user trust and raise concerns about the potential for bias, misinformation, and unintended consequences
in high-stakes applications. As LLMs increasingly impact areas such as education, healthcare, and
content moderation, the need for models that are not only accurate but also explainable becomes
paramount. Developing interpretability techniques for LLMs that enable users to understand and
control these models’ outputs is essential for ensuring responsible AI deployment. Conventional metrics
for evaluating AI performance often prioritize accuracy, inadvertently favoring these opaque models at
the expense of transparency. This trade-off has been highlighted by recent regulations and initiatives,
such as the General Data Protection Regulation (GDPR) and DARPA’s eXplainable AI Project, which
underscore a growing need for AI methodologies that are both effective and interpretable. These
initiatives have reinforced the user’s right to transparency, emphasizing that AI-driven decisions must
be comprehensible to build trust and ensure accountability.</p>
      <p>The motivation behind this workshop is to tackle the crucial question: how can we reconcile the
effectiveness of advanced AI systems with the imperative for transparency and interpretability? This
question opens up several important research avenues, including developing transparent and
interpretable models that retain high performance, enabling humans to better understand and trust AI-based
methods, and establishing ways to assess the transparency and explainability of AI systems as a whole.
The workshop provides a platform for the Italian research community to engage in these critical
discussions, share innovative approaches, and address the pressing challenges in explainable AI and LLM
transparency.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Accepted Papers</title>
      <p>We believe the program offers a well-balanced exploration of the diverse topics within the field of
eXplainable AI. This year’s program is further enhanced by a keynote presentation focused on the
evolving role of XAI in the era of Large Language Models (LLMs). The accepted papers cover a wide
array of contributions, from proposing novel methodologies to enhance the interpretability of AI
systems to developing new applications that embody eXplainable AI principles. A total of 8 submissions
were received for XAI.it 2024, with 6 selected for inclusion in the proceedings, plus a position paper by
the organizers:</p>
      <p>
• Daehyun Yoo and Caterina Giannetti. Ethical AI Systems and Shared Accountability: The Role of
Economic Incentives in Fairness and Explainability [<xref ref-type="bibr" rid="ref1">1</xref>].
• Silvia D'Amicantonio, Mishal Kizhakkam Kulangara, Het Darshan Mehta, Shalini Pal, Marco
Levantesi, Marco Polignano, Erasmo Purificato, and Ernesto William De Luca. A Comprehensive
Strategy to Bias and Mitigation in Human Resource Decision Systems [<xref ref-type="bibr" rid="ref2">2</xref>].
• Leonardo Dal Ronco and Erasmo Purificato. ExplainBattery: Enhancing Battery Capacity Estimation
with an Efficient LSTM Model and Explainability Features [<xref ref-type="bibr" rid="ref3">3</xref>].
• Ejdis Gjinika, Nicola Arici, Luca Putelli, Alfonso Emilio Gerevini, and Ivan Serina. An Analysis on
How Pre-Trained Language Models Learn Different Aspects [<xref ref-type="bibr" rid="ref4">4</xref>].
• Giovanna Castellano, Maria Grazia Miccoli, Raffaele Scaringi, Gennaro Vessio, and Gianluca Zaza.
Using LLMs to explain AI-generated art classification via Grad-CAM heatmaps [<xref ref-type="bibr" rid="ref5">5</xref>].
• Zhuofan Zhang and Herbert Wiklicky. Probabilistic Abstract Interpretation on Neural Networks
via Grids Approximation [<xref ref-type="bibr" rid="ref6">6</xref>].
• (Position Paper) Marco Polignano, Cataldo Musto, Roberto Pellungrini, Erasmo Purificato, Giovanni
Semeraro, and Mattia Setzu. XAI.it 2024: An Overview on the Future of eXplainable AI in the era
of Large Language Models [<xref ref-type="bibr" rid="ref7">7</xref>].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Program Committee</title>
      <p>As a final remark, the program co-chairs would like to thank all the members of the Program Committee
(listed below), as well as the organizers of the AIxIA 2024 Conference.
• Roberto Confalonieri, University of Padua
• Ruggero G. Pensa, University of Turin
• Antonio Rago, Imperial College London
• Giuseppe Sansonetti, Roma Tre University
• Valerio Basile, University of Turin
• Claudio Pomo, Politecnico di Bari
• Roberto Capobianco, Sapienza University of Rome
• Salvatore Ruggieri, Università di Pisa
• Ludovico Boratto, University of Cagliari
• Mirko Marras, University of Cagliari</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This research is partially funded by PNRR project FAIR - Future AI Research (PE00000013), Spoke 6
Symbiotic AI (CUP H97G22000210007) under the NRRP MUR program funded by the NextGenerationEU.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.-H.</given-names>
            <surname>Yoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Giannetti</surname>
          </string-name>
          ,
          <article-title>Ethical AI Systems and Shared Accountability: The Role of Economic Incentives in Fairness and Explainability</article-title>
          ,
          <source>in: Proceedings of 5th Italian Workshop on Explainable Artificial Intelligence, co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence</source>
          , Bolzano, Italy, November 25-28, 2024, CEUR.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>D'Amicantonio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Kulangara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. D.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Levantesi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Polignano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Purificato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. W.</given-names>
            <surname>De Luca</surname>
          </string-name>
          ,
          <article-title>A Comprehensive Strategy to Bias and Mitigation in Human Resource Decision Systems</article-title>
          ,
          <source>in: Proceedings of 5th Italian Workshop on Explainable Artificial Intelligence, co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence</source>
          , Bolzano, Italy, November 25-28, 2024, CEUR.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Dal Ronco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Purificato</surname>
          </string-name>
          ,
          <article-title>ExplainBattery: Enhancing Battery Capacity Estimation with an Efficient LSTM Model and Explainability Features</article-title>
          ,
          <source>in: Proceedings of 5th Italian Workshop on Explainable Artificial Intelligence, co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence</source>
          , Bolzano, Italy, November 25-28, 2024, CEUR.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Gjinika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Arici</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Putelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Gerevini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Serina</surname>
          </string-name>
          ,
          <article-title>An Analysis on How Pre-Trained Language Models Learn Different Aspects</article-title>
          ,
          <source>in: Proceedings of 5th Italian Workshop on Explainable Artificial Intelligence, co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence</source>
          , Bolzano, Italy, November 25-28, 2024, CEUR.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Castellano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Miccoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Scaringi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Vessio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zaza</surname>
          </string-name>
          ,
          <article-title>Using LLMs to explain AI-generated art classification via Grad-CAM heatmaps</article-title>
          ,
          <source>in: Proceedings of 5th Italian Workshop on Explainable Artificial Intelligence, co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence</source>
          , Bolzano, Italy, November 25-28, 2024, CEUR.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wiklicky</surname>
          </string-name>
          ,
          <article-title>Probabilistic Abstract Interpretation on Neural Networks via Grids Approximation</article-title>
          ,
          <source>in: Proceedings of 5th Italian Workshop on Explainable Artificial Intelligence, co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence</source>
          , Bolzano, Italy, November 25-28, 2024, CEUR.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Polignano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Musto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pellungrini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Purificato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Semeraro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Setzu</surname>
          </string-name>
          ,
          <article-title>XAI.it 2024: An Overview on the Future of eXplainable AI in the era of Large Language Models</article-title>
          ,
          <source>in: Proceedings of 5th Italian Workshop on Explainable Artificial Intelligence, co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence</source>
          , Bolzano, Italy, November 25-28, 2024, CEUR.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>