<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Editorial: The First Workshop on Explainable Artificial Intelligence for the medical domain - EXPLIMED@ECAI2024</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gianluca Zaza</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gabriella Casalino</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanna Castellano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department, University of Bari Aldo Moro, Bari</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED) held its inaugural edition in conjunction with the 27th European Conference on Artificial Intelligence (ECAI 2024) in Santiago de Compostela. The workshop brought together experts in Artificial Intelligence to examine the latest innovations and best practices in Explainable AI (XAI) within the medical field. Participants engaged in discussions covering recent trends, research initiatives, and emerging developments in XAI as they pertain to healthcare applications, emphasizing a multifaceted approach to understanding how these advancements can enhance medical practice and patient outcomes.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable Artificial Intelligence</kwd>
        <kwd>Transparent models</kwd>
        <kwd>Interpretable models</kwd>
        <kwd>e-health</kwd>
        <kwd>bioinformatics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The EXPLIMED workshop is a pivotal forum that integrates Explainable Artificial Intelligence
(XAI) within the medical field, focusing on the latest research, methodologies, and practical
case studies. In an era where AI is becoming increasingly central to healthcare decision-making,
the workshop aims to cultivate a collaborative platform for researchers, practitioners, and
policymakers to exchange insights on enhancing transparency, interpretability, and trust in
medical AI systems [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. The primary objective of EXPLIMED is to underscore the critical
importance of XAI in ensuring that medical professionals and patients can fully understand
and trust the outcomes generated by AI [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. With the opacity of AI models posing significant
concerns regarding accurate diagnoses, treatments, and patient care, this workshop emphasizes
XAI’s vital role in delivering understandable insights that empower healthcare professionals
and patients alike [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Furthermore, EXPLIMED highlights the necessity of transparently
communicating AI-generated outcomes to patients, enabling them to engage more actively
in their healthcare decisions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Addressing ethical considerations related to biases in AI
systems, the workshop advocates for fair and equitable healthcare practices by elucidating
the decision-making processes inherent in AI technologies [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Covering a broad spectrum
of topics—including post-hoc and ante-hoc methods for explainability, uncertainty modeling,
and applications of XAI in medical imaging and video—EXPLIMED is committed to fostering
interdisciplinary collaboration. By bringing together researchers, clinicians, and policymakers,
the workshop promotes the adoption of XAI in healthcare, ultimately contributing to developing
reliable and interpretable AI systems that enhance decision-making in clinical environments.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Workshop’s Contributions</title>
      <p>We received 24 submissions for the EXPLIMED workshop, and after a thorough review process,
18 were accepted for presentation. Researchers from 12 countries (among them Brazil, Germany, India,
Ireland, Israel, Italy, the Netherlands, Portugal, Spain, Taiwan, and Türkiye) attended the workshop,
enriching the exchange of ideas and insights in Explainable Artificial Intelligence in medicine.</p>
      <p>EXPLIMED featured several impactful presentations, opening with a keynote by Prof. Hani
Hagras (University of Essex) on "True Explainable Artificial Intelligence for Health
Applications." Prof. Hagras discussed how advances in computing power and the exponential growth
of data have renewed interest in AI, yet the complexity of many AI algorithms creates opacity,
often termed "black-box" models. He emphasized that for AI to earn trust and broad adoption,
especially in healthcare, it must offer transparency through Explainable AI (XAI). XAI aims
to build models that both explain individual decisions and help users understand a system’s
capabilities, predict behaviors, and identify improvements. Prof. Hagras highlighted XAI’s
transformative potential for healthcare, advocating for systems that are accessible, understandable,
and supportive of human augmentation in decision-making.</p>
      <p>The paper "Integrating Graph Neural Networks and Fuzzy Logic to Enhance Deep Learning
Interpretability" introduced a methodology integrating Graph Neural Networks (GNNs) with
Fuzzy Logic to enhance interpretability in deep learning models, demonstrating how this
combination can address the complexities of structured data while maintaining transparency
and reliability in AI systems. The paper "ProtoAL: Interpretable deep active learning with
prototypes for medical imaging" presented ProtoAL, a deep active learning model that
utilizes prototypes to foster interpretability in medical imaging, achieving commendable
accuracy while reducing data requirements. In a significant advancement for patient privacy,
the paper "Latent diffusion models for Privacy-preserving Medical Case-based Explanations"
discussed using Latent Diffusion Models to create privacy-preserving, case-based explanations
for medical diagnoses, effectively balancing visual quality with anonymity. Furthermore, the
paper "Explaining Bayesian Networks in Natural Language using Factor Arguments. Evaluation
in the medical domain" explored how Bayesian Networks can be explained in natural language
through factor arguments, presenting a novel approach that aids users in understanding the
reasoning process behind probabilistic inference in medical contexts. The paper "VISE: Validated
and Invalidated Symbolic Explanations for Knowledge Graph Integrity" proposed a hybrid
method that combines symbolic and numerical learning techniques for Knowledge Graphs,
ensuring integrity and improving predictive performance while generating meaningful insights.
In addition, the paper "Prediction of Continuous Targets by Explainable Imbalanced Regression
from Omics Data in Childhood Obesity" addressed the challenge of imbalanced regression
in predicting health metrics related to childhood obesity, employing explainable models that
improve prediction accuracy and elucidate meaningful biological relationships.</p>
      <p>The paper "Explainable skin lesion classification with multitask learning" introduced a
multitask learning framework for skin lesion classification that utilizes optical coherence tomography
to analyze cell nuclei and skin layers. This approach enhances model interpretability while
accurately identifying skin conditions. Following this, the paper entitled "An Explainable
Convolutional Neural Network for the Detection of Drug Abuse" utilized a convolutional neural
network (CNN) to analyze lateral-flow tests for drug abuse detection. The authors highlighted the
model’s ability to explain its predictions, providing crucial insights for real-world applications.
In addition, the paper "Towards Explainable Federated Learning in Healthcare: A Focus on Heart
Arrhythmia Detection" tackled the integration of explainable federated learning in detecting
heart arrhythmias, employing a temporal convolutional network with attention mechanisms
to ensure both privacy and interpretability. Another notable contribution, entitled "Towards
Explainable Deep Learning in Oncology: Integrating EfficientNet-B7 with XAI techniques
for Acute Lymphoblastic Leukaemia", proposed a framework combining EfficientNet-B7 with
various XAI techniques for diagnosing Acute Lymphoblastic Leukaemia, emphasizing the
importance of explainability to enhance trust in AI-driven diagnostics. A further study,
called "Explainability by Shapley attribution for electrocardiogram-based algorithmic
diagnosis under subtractive counterfactual reasoning setup", examined the explainability of
electrocardiogram-based algorithms using Shapley attribution under a counterfactual reasoning
setup, demonstrating the importance of understanding model predictions in cardiovascular
diagnostics. Lastly, the paper "AI Readiness in Healthcare through Storytelling XAI" aimed
to tailor explanations to diverse audience needs, enhancing user trust and understanding of
AI systems.</p>
      <p>The paper "Mechanistic Causal Models for Explainable AI in Medicine: Coupling Respiratory
and Immunological Systems for In Silico Medicine Simulations" proposed a novel approach
utilizing mechanistic causal models that integrate known physiological principles to enhance
understanding of complex medical conditions, mainly focusing on the dynamics of respiratory and
immunological systems during cytokine storms. Meanwhile, the paper "Identifying Candidates for
Protein-Protein Interaction: A Focus on NKp46’s Ligands" introduced a method for identifying
candidates for protein-protein interactions (PPIs) using a deep learning model called D-SCRIPT.
This study emphasized the model’s self-explanatory capabilities to streamline the screening
process for potential interacting proteins. Furthermore, the paper "Evaluating Machine Learning
Models against Clinical Protocols for Enhanced Interpretability and Continuity of Care"
examined how machine learning models can be evaluated against established clinical protocols to
enhance interpretability and continuity of care. The authors proposed metrics for comparing
model predictions with clinical rules, ensuring that AI tools align with existing medical
practices. In addition, the paper "Towards Explainable General Medication Planning" focused on a
framework for explainable medication planning, which aimed to clarify the decision-making
process in personalized medication administration by employing visualization techniques. The
paper entitled "Reliable central nervous system tumor diagnosis on MRI images with Deep
Neural Networks and Conformal Prediction" addressed the reliable differentiation of central
nervous system tumors using deep neural networks coupled with conformal prediction, which
provided not only accurate classifications but also quantifiable confidence measures. Finally,
the paper "Explaining Predictions of Hypertension Disease through Anchors" discussed the use
of the Anchors algorithm for explaining predictions in hypertension diagnosis, demonstrating
that optimal feature selection could enhance classification accuracy and explanation clarity.</p>
    </sec>
    <sec id="sec-3">
      <title>Organizing Committee</title>
      <p>Gianluca Zaza is an assistant professor at the University of Bari Aldo Moro
and is a member of the Computational Intelligence Laboratory (CILab). He
is working on "Understandability of AI systems" within the NRRP project
"FAIR - Future Artificial Intelligence Research," Spoke 6 - Symbiotic AI. He
is the project coordinator for the research project titled "Computational
Models based on Fuzzy Logic for eXplainable Artificial Intelligence," which
is funded for one year under the "Research Projects GNCS 2023" grant.</p>
      <p>He is a Guest Co-Editor of the Special Issue "Computational Intelligence
in Healthcare" in Bioengineering (MDPI) and an Associate Editor for the
Journal of Intelligent &amp; Fuzzy Systems (IOS Press). He is a reviewer for several international
journals published by leading publishers, including Elsevier and Springer.</p>
      <p>Gabriella Casalino is currently an Assistant Professor (Tenure Track)
at the Computational Intelligence Laboratory (CILab) of the Informatics
Department of the University of Bari. Her research is focused on
Computational Intelligence methods for interpretable data analysis. She is actively
involved in eHealth, Data Stream Mining, and eXplainable Artificial
Intelligence. Her work primarily focuses on the medical domain. She holds
membership in the IEEE Task Force on Explainable Fuzzy Systems and
the Interdepartmental Center for Telemedicine of the University of
Bari (CITEL). She is an active member of the computer science community, contributing to the
organizing committees of workshops and special sessions at prestigious international conferences
such as ECAI and IEEE WCCI. She is an Associate Editor for the international journals "IEEE
Transactions on Computational Social Systems" and "Soft Computing". She is a Guest Editor
for several special issues in IEEE SMC Magazine, IEEE Transactions on Computational Social
Systems, and IEEE Systems Journal. She is a Senior member of the IEEE Society and has received
several awards for her research, including the prestigious FUZZ-IEEE Best Paper award in 2022.</p>
      <p>Giovanna Castellano is an Associate Professor at the Department of
Computer Science, University of Bari Aldo Moro, where she coordinates
the Computational Intelligence Laboratory (CILab). Her research interests
are in the area of Computational Intelligence and Computer Vision. She
has been responsible for the local unit of several research projects and is
currently the Principal Investigator of the WP 6.4 "Understandability of
AI systems" in the NRRP "FAIR - Future Artificial Intelligence Research"
project, Spoke 6 - Symbiotic AI. She is an Associate Editor of several
international journals. She has been a Guest Editor of special issues and
participated in organizing scientific events. She is a reviewer for several
international journals published by leading publishers, including Elsevier, IEEE, and Springer,
and a member of the program committee of several international conferences. She is a member
of the IEEE Society, EUSFLAT Society, INDAM-GNCS Society, IAPR Technical Committee 19
(Computer Vision for Cultural Heritage Applications), CINI-AIIS laboratory, CINI-BIG DATA
laboratory, CITEL telemedicine research center, GRIN, MIR laboratories. She is also a member
of the IEEE CIS Task Force on Explainable Fuzzy Systems.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>The EXPLIMED organizers would like to thank the organizing committee of the 27th European
Conference on Artificial Intelligence (ECAI 2024) for hosting this first edition of the workshop.
The EXPLIMED workshop was organized with support from the CILAB (Computational
Intelligence Lab) at the Department of Computer Science, University of Bari, and held under the
patronage of Fondazione FAIR through the PNRR project FAIR - Future AI Research (PE00000013), Spoke 6
Symbiotic AI (CUP H97G22000210007), under the NRRP MUR program funded by
NextGenerationEU. The workshop organizers are members of the INdAM GNCS research group. Gabriella
Casalino and Giovanna Castellano are members of the CITEL - Centro Interdipartimentale della
ricerca in Telemedicina, of the University of Bari Aldo Moro.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Arrieta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Díaz-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Del</given-names>
            <surname>Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bennetot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tabik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barbado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gil-López</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Molina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Benjamins</surname>
          </string-name>
          , et al.,
          <article-title>Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai</article-title>
          ,
          <source>Information Fusion</source>
          <volume>58</volume>
          (
          <year>2020</year>
          )
          <fpage>82</fpage>
          -
          <lpage>115</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Albahri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Duhaim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Fadhel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Alnoor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. S.</given-names>
            <surname>Baqer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Alzubaidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. S.</given-names>
            <surname>Albahri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Alamoodi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Salhi</surname>
          </string-name>
          , et al.,
          <article-title>A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion</article-title>
          ,
          <source>Information Fusion</source>
          <volume>96</volume>
          (
          <year>2023</year>
          )
          <fpage>156</fpage>
          -
          <lpage>191</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H. W.</given-names>
            <surname>Loh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. P.</given-names>
            <surname>Ooi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Seoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. D.</given-names>
            <surname>Barua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Molinari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U. R.</given-names>
            <surname>Acharya</surname>
          </string-name>
          ,
          <article-title>Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022)</article-title>
          ,
          <source>Computer Methods and Programs in Biomedicine</source>
          <volume>226</volume>
          (
          <year>2022</year>
          )
          <fpage>107161</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Stepin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sufian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Catala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Alonso-Moral</surname>
          </string-name>
          ,
          <article-title>How to build self-explaining fuzzy systems: from interpretability to explainability [ai-explained]</article-title>
          ,
          <source>IEEE Computational Intelligence Magazine</source>
          <volume>19</volume>
          (
          <year>2024</year>
          )
          <fpage>81</fpage>
          -
          <lpage>82</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>McDermid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lawton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Habli</surname>
          </string-name>
          ,
          <article-title>The role of explainability in assuring safety of machine learning in healthcare</article-title>
          ,
          <source>IEEE Transactions on Emerging Topics in Computing</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>1746</fpage>
          -
          <lpage>1760</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bharati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R. H.</given-names>
            <surname>Mondal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Podder</surname>
          </string-name>
          ,
          <article-title>A review on explainable artificial intelligence for healthcare: why, how, and when?</article-title>
          ,
          <source>IEEE Transactions on Artificial Intelligence</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>