XAI.it 2021 - Preface to the Second Italian Workshop
on Explainable Artificial Intelligence
Cataldo Musto1 , Riccardo Guidotti2 , Anna Monreale2 and Giovanni Semeraro1
1 Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
2 Department of Computer Science, University of Pisa, Pisa, Italy


Abstract
Artificial Intelligence systems play an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms are as clear as possible. It is not by chance that the recent General Data Protection Regulation (GDPR) emphasized the users’ right to explanation when people face artificial intelligence-based technologies. Unfortunately, current research tends to go in the opposite direction, since most approaches try to maximize the effectiveness of the models (e.g., recommendation accuracy) at the expense of explainability and transparency. The main research question that arises from this scenario is straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? Several research lines are triggered by this question: building transparent intelligent systems, analyzing the impact of opaque algorithms on final users, studying the role of explanation strategies, and investigating how to provide users with more control over the behavior of intelligent systems. XAI.it, the Italian workshop on Explainable AI, addresses these research lines and aims to provide a forum for the Italian community to discuss problems, challenges and innovative approaches in the various sub-fields of XAI.




1. Background and Motivations
Nowadays we are witnessing a new summer of Artificial Intelligence, since AI-based algorithms
are being adopted in a growing number of contexts and application domains, ranging from
media and entertainment to medical, financial and legal decision-making. While the very first
AI systems were easily interpretable, the current trend has shown the rise of opaque
methodologies such as those based on Deep Neural Networks (DNN), whose (very good)
effectiveness is contrasted by the enormous complexity of the models, due to the huge number
of layers and parameters that characterize them.
   As intelligent systems become more and more widely applied (especially in very “sensitive”
domains), it is not possible to adopt opaque or inscrutable black-box models, nor to ignore the
general rationale that guides the algorithms in the tasks they carry out.

XAI.it 2021 - Italian Workshop on Explainable Artificial Intelligence - November 30, 2021 - Online Event
Email: cataldo.musto@uniba.it (C. Musto); riccardo.guidotti@unipi.it (R. Guidotti); anna.monreale@unipi.it
(A. Monreale); giovanni.semeraro@uniba.it (G. Semeraro)
Web: http://www.di.uniba.it/~swap/musto (C. Musto); https://kdd.isti.cnr.it/people/guidotti-riccardo (R. Guidotti);
https://kdd.isti.cnr.it/people/monreale-anna (A. Monreale); http://www.di.uniba.it/~swap/semeraro.html
(G. Semeraro)
ORCID: 0000-0001-6089-928X (C. Musto); 0000-0002-2827-7613 (R. Guidotti); 0000-0001-8541-0284 (A. Monreale);
0000-0001-6883-1853 (G. Semeraro)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org
Moreover, the metrics that are usually adopted to evaluate the effectiveness of the algorithms
reward very opaque methodologies that maximize the accuracy of the model at the expense of
transparency and explainability.
   This issue is felt even more strongly in light of recent developments, such as the General Data
Protection Regulation (GDPR) and DARPA’s Explainable AI Project, which further emphasized
the need for, and the right to, scrutable and transparent methodologies that can guide the user
towards a complete comprehension of the information held and managed by AI-based systems.
   Accordingly, the main question motivating the workshop is simple and straightforward: how
can we deal with such a dichotomy between the need for effective intelligent systems and the
right to transparency and interpretability?
   This question triggers several research lines that are particularly relevant to current research
in AI. The workshop tries to address these research lines and aims to provide a forum for the
Italian community to discuss problems, challenges and innovative approaches in the area.


2. Accepted Papers
We believe that the program provides a good balance among the different topics related to
the area of Explainable AI. Moreover, the program will be further enriched by a keynote
given by Fabrizio Silvestri from La Sapienza University of Rome.
   The accepted papers range from the definition of new methodologies to explain the behavior
of artificial intelligence systems to the development of new applications implementing the
principles of Explainable AI. In total, 6 contributions were accepted at XAI.it 2021:

   1. Ivan Donadello and Mauro Dragoni - Bridging Signals to Natural Language Explanations
      With Explanation Graphs
   2. Stefano Teso, Andrea Bontempelli, Fausto Giunchiglia and Andrea Passerini - Interactive
      Label Cleaning with Example-based Explanations
   3. Martin Jullum, Annabelle Redelmeier and Kjersti Aas - Efficient and Simple Prediction
      Explanations with GroupShapley: A Practical Perspective
   4. Andrea Apicella, Salvatore Giugliano, Francesco Isgrò and Roberto Prevete - Explanations
      in terms of Hierarchically organised Middle Level Features
   5. Rodolfo Delmonte - What’s Wrong with Deep Learning for Meaning Understanding
   6. Purin Sukpanichnant, Antonio Rago, Piyawat Lertvittayakumjorn and Francesca Toni -
      LRP-Based Argumentative Explanations for Neural Networks


3. Program Committee
As a final remark, the program co-chairs would like to thank all the members of the Program
Committee (listed below), as well as the organizers of the AIxIA 2021 Conference¹.

    • Davide Bacciu, Università di Pisa
   ¹ https://aixia2021.disco.unimib.it/
• Matteo Baldoni, Università di Torino
• Valerio Basile, Università di Torino
• Federico Bianchi, Università Bocconi - Milano
• Ludovico Boratto, Università di Cagliari
• Roberta Calegari, Università di Bologna
• Federica Cena, Università di Torino
• Roberto Capobianco, Università di Roma La Sapienza
• Nicolò Cesa-Bianchi, Università di Milano
• Roberto Confalonieri, Libera Università di Bozen-Bolzano
• Luca Costabello, Accenture
• Rodolfo Delmonte, Università Ca’ Foscari
• Mauro Dragoni, Fondazione Bruno Kessler
• Stefano Ferilli, Università di Bari
• Fabio Gasparetti, Roma Tre University
• Alessandro Giuliani, Università di Cagliari
• Andrea Iovine, Università di Bari
• Antonio Lieto, Università di Torino
• Alessandro Mazzei, Università di Torino
• Stefania Montani, Università di Roma La Sapienza
• Daniele Nardi, Università di Roma La Sapienza
• Andrea Omicini, Università di Bologna
• Andrea Passerini, Università di Trento
• Roberto Prevete, Università di Napoli Federico II
• Antonio Rago, Imperial College London
• Amon Rapp, Università di Torino
• Salvatore Rinzivillo, ISTI - CNR
• Gaetano Rossiello, IBM Research
• Salvatore Ruggieri, Università di Pisa
• Giuseppe Sansonetti, Roma Tre University
• Lucio Davide Spano, Università di Cagliari
• Stefano Teso, Katholieke Universiteit Leuven
• Francesca Toni, Imperial College London