<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Imagining the AI Landscape after the AI Act</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Desara Dushi</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca Naretto</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cecilia Panigutti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca Pratesi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>European Commission - Joint Research Centre</institution>
          ,
          <addr-line>Via E. Fermi 2749, 21027 Ispra</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of Information Science and Technologies - National Research Council</institution>
          ,
          <addr-line>via G. Moruzzi 1, 56124 Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Scuola Normale Superiore</institution>
          ,
          <addr-line>Piazza dei Cavalieri 7, 56126 Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Vrije Universiteit Brussel</institution>
          ,
          <addr-line>Pleinlaan 2, 1050 Brussels</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <abstract>
        <p>We summarize the first Workshop on Imagining the AI Landscape after the AI Act (IAIL 2022), co-located with the 1st International Conference on Hybrid Human-Artificial Intelligence (HHAI 2022) and held on June 13, 2022. In April 2021, the European Commission published a proposal for regulating the use of AI systems and services in the Union market: the AI Act (AIA). However, the effects of EU digital regulations usually transcend the Union's confines. An example of what has been named the “Brussels effect” - the high impact of EU digital regulations around the world - is the General Data Protection Regulation (GDPR), which came into effect in May 2018 and rapidly became a world standard. The AIA seems to go in the same direction, having a clear extraterritorial scope, in that it applies to any AI system or service that has an impact on European citizens, regardless of where its provider or user is located. The AIA adopts a risk-based approach that bans certain technologies, proposes strict regulations for “high risk” ones, and imposes stringent transparency criteria for others. If adopted, the AIA will undoubtedly have a significant impact in the EU and beyond. A crucial question is whether we already have the technology to comply with the proposed regulation and to what extent its requirements are enforceable.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The workshop addressed questions such as how to validate the goodness of an AI system in terms of privacy, fairness, explainability, and so on,
and how the proposed AI Act will impact (non-)EU tech companies operating in the EU.</p>
      <p>Topics of interest include, but are not limited to:
• The AI Act and future technologies
• Applications of AI in the legal domain
• Ethical and legal issues of AI technology and its application
• Dataset quality evaluation
• AI and human oversight
• AI and human autonomy
• Accountability and Liability of AI
• Algorithmic bias, discrimination, and inequality
• Trust in practical applications of AI systems and data-driven decision-making
• Transparent AI
• AI and human rights
• The impact of AI and automatic decision-making on rule of law
• Explainable by design
• Privacy by design
• Fairness by design
• AI risk assessment
• Explainability metrics and evaluation</p>
      <p>Papers were intended to foster discussion and exchange of ideas. Submissions with an
interdisciplinary orientation were particularly welcome, e.g., works at the boundary between machine
learning, AI, human-computer interaction, law, digital philosophy, and ethics.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Organization</title>
      <sec id="sec-2-1">
        <title>2.1. Workshop Chairs</title>
        <p>• Desara Dushi, Vrije Universiteit Brussel, Belgium
• Francesca Naretto, Scuola Normale Superiore, Italy
• Cecilia Panigutti, European Commission – Joint Research Centre, Italy
• Francesca Pratesi, Institute of Information Science and Technologies - National Research Council, Italy</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Program Committee</title>
        <p>• Denise Amram - Scuola Superiore Sant’Anna
• Nertil Bërdufi - University College Beder
• Andrea Gadotti - Imperial College London
• Olga Gkotsopoulou - Vrije Universiteit Brussel
• Sarah De Nigris - JRC - European Commission
• Joanna Kulesza - University of Lodz
• Gianclaudio Malgieri - EDHEC Business School
• Anna Monreale - University of Pisa
• Yves-Alexandre de Montjoye - Imperial College London
• Rūta Liepiņa - Maastricht University
• Roberto Pellungrini - University of Pisa
• Giorgia Pozzi - TU Delft
• Giulia Schneider - Università Cattolica del Sacro Cuore
• Dennis Vetter - Goethe University Frankfurt</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Summary of the workshop</title>
      <p>The workshop was highly interdisciplinary and brought together researchers from different
backgrounds: computer science, law, philosophy, and social sciences. Participants expressed
their appreciation for such an interdisciplinary venue of discussion.</p>
      <p>The workshop consisted of two sessions of paper presentations with Q&amp;A, one keynote speech
from Virginia Dignum, one fireside chat with Mireille Hildebrandt, and one group activity.</p>
      <p>Both participants and the keynote speakers raised some concerns about the AI Act and had
a fruitful discussion about it. The main concerns raised were related to the enforcement of
the AI Act, the feasibility of the implementation of AIA technical requirements (for example
“Training, validation and testing data sets shall be relevant, representative, free of errors and
complete”, Article 10(3)) and the need to address the issues of power behind the development
and deployment of AI in a more fundamental way.</p>
      <p>The group activity used Design Fiction tools to perform a structured brainstorming around
how to implement a process/methodology to be compliant with Art. 14 on Human Oversight.
More specifically, participants were presented with a fictional narrative describing how postcode
bias might lead to discrimination against the poor. This type of bias is more subtle than
other types of biases, such as gender or race bias, so enabling human oversight is more difficult.
Overall, the discussions allowed participants to gain a deeper understanding of the implications
of the AI Act and EU digital policies.</p>
      <sec id="sec-3-1">
        <title>3.1. Submissions</title>
        <p>The Program Committee (PC) received a total of 17 submissions. Each paper was peer-reviewed
by at least three PC members following a double-blind reviewing process. The committee
decided to accept 11 contributions: 3 regular papers (i.e., 12+ pages), 6 short papers, and 2 abstracts.
Abstracts may contain preliminary or already published work, while papers must contain
original work.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Detailed Program</title>
        <p>The IAIL 2022 program was organized in two thematic sessions, two invited talks, and one
group activity.</p>
        <p>The thematic sessions followed a highly interactive format: they were structured as short
pitches, with ample room for questions and comments. Session Chairs introduced the sessions and
the participants, and moderated the discussions.</p>
        <p>Papers were grouped into two sessions:
Session 1 - Technical Aspects of AI Act
Session 2 - Ethical and Legal Aspects about AI Act</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This workshop was partially supported by the following:
• The TAILOR network (GA n. 952215) - Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization (https://tailor-network.eu)
• HumanE-AI-Net (GA n. 952026) - European network of Human-Centered Artificial Intelligence (https://www.humane-ai.eu/)
• SoBigData++ (GA n. 871042) - “European Integrated Infrastructure for Social Mining and Big Data Analytics” (https://plusplus.sobigdata.eu/)
• COHUBICOL (GA n. 788734) - “Counting as a Human Being in the Era of Computational Law” (https://www.cohubicol.com/)</p>
      <p>The authors would like to thank the HHAI 2022 workshop chairs and organization for providing
an excellent framework for IAIL 2022.</p>
    </sec>
  </body>
  <back>
  </back>
</article>