<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Advanced AI in Explainability and Ethics for the Sustainable Development Goals</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Preface of the Proceedings of the 1st Workshop on Advanced AI in Explainability and Ethics for the Sustainable Development Goals (ExplAI-2025)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tetiana Hovorushchenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksander Barmak</string-name>
          <email>barmako@khmnu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abdel-Badeeh M. Salem</string-name>
          <email>abmsalem@yahoo.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavlo Radiuk</string-name>
          <email>radiukp@khmnu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ain Shams University</institution>, <addr-line>El-Khalyfa El-Mamoun Street Abbasya, Cairo</addr-line>, <country country="EG">Egypt</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Khmelnytskyi National University</institution>, <addr-line>11, Instytuts'ka str., Khmelnytskyi, 29016</addr-line>, <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>07</volume>
      <issue>2025</issue>
      <abstract>
        <p>This volume collects the papers accepted for publication at the 1st Workshop on Advanced AI in Explainability and Ethics for the Sustainable Development Goals (ExplAI-2025). The event took place in Khmelnytskyi, Ukraine, on November 7, 2025. It contains 15 papers, all of which were presented at the workshop. The workshop aimed to address the growing need for transparency and trustworthiness in artificial intelligence systems, particularly those applied in high-stakes domains aligned with the United Nations Sustainable Development Goals (SDGs). By fostering interdisciplinary dialogue, ExplAI-2025 explored methodologies for explainable AI (XAI) that bridge the gap between complex algorithmic decision-making and human interpretability, ensuring that technological progress adheres to ethical governance frameworks.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The rapid proliferation of artificial intelligence in critical sectors, from healthcare and urban planning
to information security, has underscored the urgent need for systems that are not only accurate
but also transparent and ethically sound. ExplAI-2025 was established to bring together cutting-edge research
that fuses robust AI methodologies with ethical principles and demonstrable social impact.</p>
      <p>The workshop specifically focused on the intersection of these technological requirements with the
global imperative of the UN SDGs. Topics of interest ranged from explainable machine learning models
for medical diagnostics to ethical frameworks for countering disinformation. The event provided a
platform for exchanging ideas, forging collaborations, and charting a path toward responsible,
human-centred innovation.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Submission and Review Process</title>
      <p>The program committee received 44 submissions from authors representing 8 countries. To ensure high
quality and impartiality, each submission underwent a rigorous double-blind peer-review process. Every
paper was evaluated by at least two independent members of the international program committee,
consisting of experts from Ukraine, Poland, Slovakia, the USA, the UK, Canada, Estonia, the Czech
Republic, Egypt, Algeria, and Kazakhstan.</p>
      <p>The acceptance criteria were stringent: papers required at least one strong acceptance
recommendation and no rejections to be considered. In instances of disagreement between reviewers, a third expert
was invited to adjudicate. Based on this process, 15 papers were accepted for presentation and inclusion
in this volume, resulting in an acceptance rate of approximately 34%.</p>
      <p>The program was organized into four thematic sessions:
1. Explainable AI: Focused on foundational methods, visual embedders, and interpretable models
for forecasting and security.
2. AI Ethics and Governance: Addressed resilience to social engineering, ensemble strategies for
small data, and requirements engineering.
3. Sustainable Development Goals: Covered medical diagnostics, urban traffic optimization, and
knowledge engineering for cultural heritage.
4. Trustworthy AI: Explored vision transformers, satellite imagery analysis, and verifiable fake news
detection.</p>
    </sec>
    <sec id="sec-3">
      <title>Acknowledgements</title>
      <p>We would like to express our deep gratitude to all the authors who submitted their work to
ExplAI-2025, and to the speakers who presented their research. We are also indebted to the members of the
International Program Committee for their time, expertise, and dedication in reviewing the papers and
ensuring the high scientific standard of the workshop.</p>
      <p>We extend our thanks to Khmelnytskyi National University for hosting the event. We also
acknowledge the use of the Microsoft CMT service for managing the peer-review process; this service was
provided for free by Microsoft, which covered the costs of Azure cloud services and software support.</p>
      <p>November 2025</p>
      <sec id="sec-3-1">
        <title>Program Committee Chairs</title>
        <p>• Tetiana Hovorushchenko, Khmelnytskyi National University, Ukraine
• Oleksander Barmak, Khmelnytskyi National University, Ukraine
• Iurii Krak, Taras Shevchenko National University of Kyiv, Ukraine</p>
      </sec>
      <sec id="sec-3-2">
        <title>Organizing Committee</title>
        <p>• Pavlo Radiuk, Khmelnytskyi National University, Ukraine
• Oleksandr Mazurets, Khmelnytskyi National University, Ukraine
• Maryna Molchanova, Khmelnytskyi National University, Ukraine
• Olena Sobko, Khmelnytskyi National University, Ukraine</p>
      </sec>
      <sec id="sec-3-3">
        <title>International Program Committee</title>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Organization &amp; Sponsors</title>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>