<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Preface of the 2nd Workshop on Law, Society and Artificial Intelligence: Interdisciplinary perspectives on AI safety</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vincenzo Calderonio</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vittoria Caponecchia</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giuseppe Colavito</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Failla</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Federico Mazzoni</string-name>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Monique Munarini</string-name>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Sanchi</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giuseppe Pisano</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Billi</string-name>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marianna Molinari</string-name>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sara Tibidò</string-name>
          <xref ref-type="aff" rid="aff7">7</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Information Science and Technologies “A. Faedo” (ISTI), National Research Council (CNR)</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of Legal Informatics and Judicial Systems (IGSG), CNR</institution>
          ,
          <addr-line>Florence</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Sant'Anna School of Advanced Studies, LIDER-Lab, DIRPOLIS</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Bari, Department of Computer Science</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Bologna, CIRSFID - Alma AI Research Center</institution>
          ,
          <addr-line>Bologna</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>University of Pisa, Department of Computer Science</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff6">
          <label>6</label>
          <institution>Marco Billi, University of Bologna</institution>
          ,
          <addr-line>Bologna, Italy • Marianna Molinari</addr-line>
          ,
          <institution>University of Bologna</institution>
          ,
          <addr-line>Bologna</addr-line>
          ,
          <country>Italy • Sara Tibidò</country>
          ,
          <institution>IMT Scuola Alti Studi Lucca</institution>
          ,
          <addr-line>Lucca</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The second edition of the Workshop on Law, Society and Artificial Intelligence (LSAI) was held at the International Conference on Hybrid Human-Artificial Intelligence (HHAI 2025) in Pisa on June 10, 2025. This year's edition explored the interdisciplinary nature of AI safety, focusing on its socio-technical, ethical, and legal implications. It brought together researchers and practitioners from various fields to encourage diverse approaches to designing, deploying, and regulating AI systems that prioritise safety and align with societal values. In line with the HHAI conference goals, the workshop focused on AI systems designed to complement human abilities and interact with people. Key discussions included technical robustness, governance frameworks, ethical considerations, and human-AI collaboration, addressing challenges in sectors such as healthcare, finance, transportation, and governance.</p>
        <p>The workshop covered both theoretical and practical perspectives on AI safety, contextualised within the field of HHAI, including:</p>
        <list list-type="bullet">
          <list-item><p>Technical robustness of AI systems working alongside humans</p></list-item>
          <list-item><p>Governance and regulatory frameworks for hybrid human-AI systems</p></list-item>
          <list-item><p>Ethical considerations in the design of AI systems that interact with people</p></list-item>
          <list-item><p>Trust, acceptance, and societal implications of AI technologies in high-stakes domains</p></list-item>
          <list-item><p>Transparency, fairness, and accountability architectures in human-AI collaboration</p></list-item>
          <list-item><p>Challenges in enforcing safety standards across jurisdictions and technologies</p></list-item>
        </list>
        <p>Workshop website: <ext-link ext-link-type="uri" xlink:href="https://lsai2025.github.io/">https://lsai2025.github.io/</ext-link></p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>AI Safety</kwd>
        <kwd>AI Act</kwd>
        <kwd>Human-AI Interaction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Workshop output</title>
      <p>The workshop received 18 submissions (6 full papers and 12 abstracts), of which 4 papers and 9 abstracts were accepted and presented. The program was structured in two sessions: a morning session, which combined paper presentations on legal, ethical, and technical aspects of AI with a dedicated roundtable, and an afternoon session, which similarly alternated between research talks and collective discussion.</p>
      <p>The roundtables served as an open forum for interdisciplinary dialogue, complementing the structured
presentations. Key issues addressed included the challenges of establishing a shared vocabulary across
disciplines, the need for AI literacy to counter both fear and overreliance, the legitimacy of limiting or
certifying AI use in specific domains, and the question of whether AI is always necessary (“question
zero”). The discussions also explored the limits of “safety by design” approaches, the evolving scope
of the EU AI Act in light of new technologies, and the importance of balancing risk-awareness with
recognition of AI’s potential societal benefits. In the afternoon roundtable, attention shifted to broader
conceptual dimensions of safety, including its relation to trustworthiness, interpretability, digital identity,
and regulation. The debate underscored that AI safety is not only a technical matter but also a social,
cultural, and legal issue, requiring holistic perspectives that bridge governance, design, and societal
values.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Organization</title>
      <sec id="sec-3-1">
        <title>3.1. Workshop Chairs</title>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Program Committee</title>
        <p>This section lists the members of our program committee; we thank them for their contributions to the reviewing process.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>The 2nd LSAI Workshop confirmed the need for interdisciplinary perspectives in addressing AI safety.
While technical robustness and regulatory compliance remain central, discussions highlighted that
safety cannot be dissociated from social legitimacy, trust, and cultural understanding. The debates
made clear that AI safety should not be reduced to risk prevention alone, but must also encompass
opportunities for responsible innovation, democratic participation, and human-centred design. By
fostering dialogue between legal scholars, computer scientists, ethicists, and practitioners, the workshop
contributed to a more nuanced understanding of AI safety as a multidimensional challenge.</p>
    </sec>
  </body>
</article>