XAI.it 2022 - Preface to the Third Italian Workshop on
Explainable Artificial Intelligence
Cataldo Musto1 , Riccardo Guidotti2 , Anna Monreale2 and Giovanni Semeraro1
1 Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
2 Department of Computer Science, University of Pisa, Pisa, Italy


Abstract
Artificial Intelligence systems are playing an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms that guide these algorithms are as clear as possible. It is not by chance that the recent General Data Protection Regulation (GDPR) emphasized the users’ right to explanation when people face artificial intelligence-based technologies. Unfortunately, current research tends to go in the opposite direction, since most approaches try to maximize the effectiveness of the models (e.g., recommendation accuracy) at the expense of explainability and transparency. The main research question arising from this scenario is straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? Several research lines are triggered by this question: building transparent intelligent systems, analyzing the impact of opaque algorithms on final users, studying the role of explanation strategies, and investigating how to provide users with more control over the behavior of intelligent systems. XAI.it, the Italian workshop on Explainable AI, tries to address these research lines and aims to provide a forum for the Italian community to discuss problems, challenges and innovative approaches in the various sub-fields of XAI.




1. Background and Motivations
Nowadays we are witnessing a new summer of Artificial Intelligence, since AI-based
algorithms are being adopted in a growing number of contexts and application domains,
ranging from media and entertainment to medical, financial and legal decision-making. While
the very first AI systems were easily interpretable, the current trend has shown the rise of opaque
methodologies such as those based on Deep Neural Networks (DNN), whose (very good)
effectiveness is contrasted by the enormous complexity of the models, due to the huge
number of layers and parameters that characterize them.
   As intelligent systems become more and more widely applied (especially in very “sensitive”
domains), it is not possible to adopt opaque or inscrutable black-box models or to ignore the

XAI.it 2022 - Italian Workshop on Explainable Artificial Intelligence - November 30, 2022 - Udine, Italy
cataldo.musto@uniba.it (C. Musto); riccardo.guidotti@unipi.it (R. Guidotti); anna.monreale@unipi.it
(A. Monreale); giovanni.semeraro@uniba.it (G. Semeraro)
http://www.di.uniba.it/~swap/musto (C. Musto); https://kdd.isti.cnr.it/people/guidotti-riccardo (R. Guidotti);
https://kdd.isti.cnr.it/people/monreale-anna (A. Monreale); http://www.di.uniba.it/~swap/semeraro.html
(G. Semeraro)
ORCID: 0000-0001-6089-928X (C. Musto); 0000-0002-2827-7613 (R. Guidotti); 0000-0001-8541-0284 (A. Monreale);
0000-0001-6883-1853 (G. Semeraro)
                                       © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org
general rationale that guides the algorithms in the tasks they carry out. Moreover, the metrics
that are usually adopted to evaluate the effectiveness of the algorithms reward very opaque
methodologies that maximize the accuracy of the model at the expense of transparency and
explainability.
   This issue is felt even more strongly in the light of recent initiatives, such as the General Data
Protection Regulation (GDPR) and DARPA’s Explainable AI program, which further emphasized
the need for, and the right to, scrutable and transparent methodologies that can guide the user towards a
complete comprehension of the information held and managed by AI-based systems.
   Accordingly, the main research question addressed by the workshop is simple and straightforward: how can
we deal with such a dichotomy between the need for effective intelligent systems and the right to
transparency and interpretability?
   This question triggers several research lines that are particularly relevant for the current research
in AI. The workshop tries to address these research lines and aims to provide a forum for the
Italian community to discuss problems, challenges and innovative approaches in the area.


2. Accepted Papers
We believe that the program provides a good balance between the different topics related to
the area of Explainable AI. Moreover, the program will be further enriched by a keynote
given by Pasquale Minervini, from the University of Edinburgh and University College London.
   The accepted papers range from the definition of new methodologies to explain the behavior
of artificial intelligence systems to the development of new applications implementing the
principles of Explainable AI. In total, 11 contributions were accepted at XAI.it 2022 (9 of which
are included in these proceedings):

   1. Andrea Apicella, Francesco Isgro, Andrea Pollastro and Roberto Prevete - Toward the
      application of XAI methods in EEG-based systems
   2. Luca Putelli, Alfonso Emilio Gerevini, Alberto Lavelli, Tahir Mehmood and Ivan Serina -
      On the Behaviour of BERT’s Attention for the Classification of Medical Reports
   3. Nina Spreitzer, Hinda Haned and Ilse van der Linden - Evaluating the Practicality of
      Counterfactual Explanations
   4. Erasmo Purificato, Saijal Shahania and Ernesto William De Luca - Tell Me Why It’s Fake:
      Developing an Explainable User Interface for a Fake News Detection System
   5. Mario Alviano, Francesco Bartoli, Marco Botta, Roberto Esposito, Laura Giordano, Valentina
      Gliozzi and Daniele Theseider Dupre - Towards a Conditional and Multi-preferential Ap-
      proach to Explainability of Neural Network Models in Computational Logic
   6. Francesca Naretto, Francesco Bodria, Fosca Giannotti and Dino Pedreschi - Benchmark
      analysis of black box local explanation methods
   7. Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, Alessandro Celi, Ernesto
      Estevanell-Valladares and Daniel Alejandro Valdés-Pérez - Ensemble approaches for Graph
      Counterfactual Explanations
   8. Simona Colucci, Francesco M Donini and Eugenio Di Sciascio - A Human-readable Expla-
      nation for the Similarity of RDF Resources
   9. Josè Luis Corcuera Bárcena, Mattia Daole, Pietro Ducange, Francesco Marcelloni, Alessan-
      dro Renda, Fabrizio Ruffini and Alessio Schiavo - Fed-XAI: Federated Learning of Explain-
      able Artificial Intelligence Models


3. Program Committee
As a final remark, the program co-chairs would like to thank all the members of the Program
Committee (listed below), as well as the organizers of the AIxIA 2022 Conference1.
    • Davide Bacciu, Università di Pisa
    • Matteo Baldoni, Università di Torino
    • Valerio Basile, Università di Torino
    • Ludovico Boratto, Università di Cagliari
    • Roberta Calegari, Università di Bologna
    • Federica Cena, Università di Torino
    • Nicolò Cesa-Bianchi, Università di Milano
    • Roberto Confalonieri, Libera Università di Bozen-Bolzano
    • Rodolfo Delmonte, Università Ca’ Foscari
    • Stefano Ferilli, Università di Bari
    • Alessandro Giuliani, Università di Cagliari
    • Andrea Iovine, Università di Bari
    • Kyriaki Kalimeri, ISI Foundation
    • Antonio Lieto, Università di Torino
    • Francesca Alessandra Lisi, Università di Bari
    • Noemi Mauro, Università di Torino
    • Alessandro Mazzei, Università di Torino
    • Fabio Mercorio, Università di Milano-Bicocca
    • Andrea Omicini, Università di Bologna
    • Enea Parimbelli, Università di Pavia
    • Roberto Prevete, Università di Napoli Federico II
    • Erasmo Purificato, Otto-von-Guericke-Universität Magdeburg
    • Antonio Rago, Imperial College London
    • Amon Rapp, Università di Torino
    • Gaetano Rossiello, IBM Research
    • Salvatore Ruggieri, Università di Pisa
    • Giuseppe Sansonetti, Roma Tre University
    • Mattia Setzu, Università di Pisa
    • Stefano Teso, Università di Trento
    • Gabriele Tolomei, Università di Roma La Sapienza
    • Francesca Toni, Imperial College London
    • Autilia Vitiello, Università di Napoli Federico II
   1
       https://aixia2022.uniud.it//