<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Imagining the AI Landscape after the AI Act, Third Edition</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Desara</forename><surname>Dushi</surname></persName>
							<email>desara.dushi@vub.be</email>
							<affiliation key="aff0">
								<orgName type="institution">Vrije Universiteit Brussel</orgName>
								<address>
									<country key="BE">Belgium</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Francesca</forename><surname>Naretto</surname></persName>
							<email>francesca.naretto@unipi.it</email>
							<affiliation key="aff1">
								<orgName type="department">Dept. of Computer Science</orgName>
								<orgName type="institution">University of Pisa</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Francesca</forename><surname>Pratesi</surname></persName>
							<email>francesca.pratesi@isti.cnr.it</email>
							<affiliation key="aff2">
								<orgName type="institution">CNR</orgName>
								<address>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Imagining the AI Landscape after the AI Act, Third Edition</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">CC74664D5250115463C2FF2221C2F12A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>IAIL</term>
					<term>AI Act</term>
					<term>Artificial Intelligence</term>
					<term>EU</term>
					<term>regulation</term>
					<term>technology</term>
					<term>law</term>
					<term>ethics</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We provide a summary of the third edition of the workshop "Imagining the AI Landscape after the AI Act" (IAIL 2024), covering its motivation, organization, and the main themes that emerged from the keynotes and paper presentations.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>After long debates and several amendments to the initial draft, the AI Act was finally adopted and published in the Official Journal on 12 July 2024, becoming the world's first general legislation on artificial intelligence. It aims to provide a framework for the development, placing on the market, and use of artificial intelligence (AI) systems that may pose risks to health, safety, and fundamental rights. The AI Act entered into force on 1 August 2024, and its provisions become applicable in stages. First, the rules on prohibited AI practices apply six months after entry into force, in February 2025. Second, the rules for general-purpose AI models apply 12 months after entry into force, in August 2025. Third, 24 months after entry into force, the rules on high-risk AI systems listed in Annex III become applicable (AI systems in the fields of biometrics, critical infrastructure, education, employment, access to essential private and public services, law enforcement, migration and border control management, democratic processes, and the administration of justice). Finally, 36 months after entry into force, the rules on high-risk AI systems listed in Annex I become applicable (toys, radio equipment, in vitro diagnostic medical devices, civil aviation safety, agricultural vehicles, etc.). The entry into application will be based on "harmonised standards" at European level, which must define precisely the requirements applicable to the AI systems concerned. 
The AI Act follows a risk-based approach, classifying AI systems into four levels: unacceptable risk, which led to a list of prohibited practices deemed contrary to the values and fundamental rights of the EU; high risk, subject to detailed requirements (conformity assessments, technical documentation, risk management mechanisms, fundamental rights impact assessment); specific transparency risk, subject to a set of transparency obligations; and minimal risk, covering all other AI systems, without any specific obligations. Moreover, the AI Act also provides a framework for a new category of so-called general-purpose AI models, in particular in the field of generative AI. These models are defined by their ability to serve a large number of tasks, which makes them difficult to classify in the previous categories. For this category, the AI Act provides several levels of obligations, ranging from minimum transparency and documentation measures to an in-depth assessment and the implementation of mitigation measures for the systemic risks that some of these models might entail, in particular because of their potential for major accidents or misuse and the spread of harmful biases and discriminatory effects against certain persons. The AI Act entails a two-level governance structure: European and national. At the European level, the AI Board, comprised of representatives from each Member State with the European Data Protection Supervisor (EDPS) as observer, will ensure consistent application of the AI Act. The AI Board will be informed in its choices by an advisory forum and a scientific panel of independent experts. In addition, an AI Office within the European Commission will supervise general-purpose AI models. At the national level, the AI Act provides for the designation of one or more competent authorities to assume the role of market surveillance authority. 
Despite being EU legislation, the AI Act has an extraterritorial scope, applying to all AI systems that have an impact on European citizens, regardless of the location of the AI system's provider and deployer. Such a broad application will undoubtedly have a significant impact in the EU and beyond. Almost in parallel with the AI Act, on 17 May 2024, after two years of drafting and negotiation, the Council of Europe (CoE) adopted its Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (the CoE Framework Convention), the world's first binding AI treaty. It will be open for signature from 5 September 2024. The EU and the CoE actively collaborated in the discussions on the draft texts of the two legal documents, ensuring consistency in terminology and principles between the two texts. The CoE Framework Convention and the EU AI Act complement each other and constitute an important step towards global AI regulation. The purpose of IAIL 2024 was to explore with young and established experts the effect of the AI Act on technological development in the EU and how it will affect non-EU-based developers operating in the EU. It also aimed at analyzing how the legal requirements of the AI Act can be operationalized, to what extent the AI Act protects the fundamental rights of end-users, and much more. Topics of interest included, but were not limited to:</p><p>• The AI Act and future technologies
• Applications of AI in the legal domain
• Ethical and legal issues of AI technology and its application
• Dataset quality evaluation
• AI and human oversight
• AI and human autonomy
• Accountability and liability of AI
• Algorithmic bias, discrimination, and inequality
• Fairness by design
• AI and trust
• Transparent AI
• Explainable by design
• Explainability metrics and evaluation
• AI and human rights
• The impact of AI and automatic decision-making on the rule of law
• Privacy by design
• AI risk assessment
• AI certification
• Safety, reliance and trust in human-AI interactions
• Human-in-the-loop paradigm
• Federated learning
• Contestability of AI output
• Generative AI</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Organization</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Workshop Chairs</head><p>• Desara Dushi, Vrije Universiteit Brussel (Belgium)
• Francesca Naretto, Scuola Normale Superiore (Italy) and Computer Science Department, University of Pisa (Italy)
• Francesca Pratesi, Institute of Information Science and Technologies, National Research Council (Italy)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Program Committee</head><p>• Costanza Alfieri, University of L'Aquila
• Denise Amram, Scuola Superiore Sant'Anna
• Valeria Caforio, Università Bocconi
• Federica Casarosa, Scuola Superiore Sant'Anna
• Olga Gkotsopoulou, Vrije Universiteit Brussel
• Rami Haffar, Universitat Rovira i Virgili
• Iulia Lefter, Delft University of Technology
• Irina Lishchuk, Leibniz Universität Hannover
• Giorgia Pozzi, Delft University of Technology
• Clara Punzi, Scuola Normale Superiore
• David van Putten, Erasmus University Rotterdam
• Giulia Schneider, Catholic University of the Sacred Heart
• Mattia Setsu, University of Pisa
• Francesco Spinnato, University of Pisa</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Summary of the workshop</head><p>The workshop was highly interdisciplinary and brought together researchers from different backgrounds. It consisted of two keynote speeches, one from Katja de Vries, Associate Professor in Public Law at Uppsala University, Sweden, and one from Yves-Alexandre de Montjoye, Associate Professor of Applied Mathematics and Computer Science at Imperial College London, as well as two sessions of paper presentations, each followed by a Q&amp;A session. Regarding the papers presented, we had a contribution from Miriam Doh (University of Mons and Université Libre de Bruxelles, Belgium) and Anastasia Karagianni (Vrije Universiteit Brussel, Belgium), titled "My kind of woman: Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law". Following that, Nimrod Mike (University of Budapest, Hungary) presented a paper titled "Global Perspectives on AI Governance: A Comparative Overview". Last, but not least, Roberta Savella (ISTI-CNR, Italy) presented a paper titled "The need for a new 'right to refuse' the results of emotion recognition AI".</p><p>Thanks to the diverse contributions and the two inspiring keynotes, we were able to explore various topics and engage in meaningful discussions, exchanging ideas across multiple research fields, in particular law, computer science, sociology, and philosophy. Katja de Vries analysed technological advancements in AI and data analytics from the point of view of legal rights, privacy implications, and the discriminatory processes implicitly at work in the generation of images and videos by generative AI. Yves-Alexandre de Montjoye discussed the topic of ethical AI with a focus on privacy, concerning the risks posed by privacy attacks on machine learning models. He explored the balance between privacy and the other ethical values required for trustworthy AI systems, as well as the legal implications of the AI Act and its relationship with the General Data Protection Regulation (GDPR). In particular, we discussed the fact that in recent years several regulatory frameworks have been proposed to regulate AI: not only the AI Act, which was the primary focus of our workshop, but also laws from the United States, China, Japan, and many others. Across all of these documents, the principles of transparency, fairness, and accountability are consistently emphasized as essential to achieving trustworthy AI systems. However, even if there is general acknowledgment of the good work done so far by the different countries, there are still open questions and problems related to some AI applications. For instance, Roberta Savella highlighted that the use of emotion recognition techniques raises concerns, as the AI Act does not offer sufficient safeguards for the rights and freedoms of individuals. The issue resides in the fact that, apart from educational institutions and workplaces, facial emotion recognition can be used for lawful purposes as long as it adheres to the obligations imposed on high-risk systems. However, this provision conflicts with Article 22 of the GDPR. Therefore, it is important to consider the possibility of granting individuals the right to refuse the use of these technologies when they result in legal consequences. While there are still unresolved questions about the AI Act and its implementation, it remains the only framework that adopts a risk-based approach. This is an important foundation for regulating AI systems without limiting innovation and research in the field. In contrast, the United States relies on a mix of federal and state laws, coupled with industry self-regulation, while China primarily focuses on administrative laws aimed at the deployment of efficient AI systems. These approaches are important and reflect the different interests of different countries, but it is necessary to find a shared framework for advancing trustworthy AI. In fact, even if values like transparency, fairness, and accountability are common across most regulations, they alone are not enough to establish a unified global standard. This is why international collaborations, such as the Global Partnership on AI, which aims to harmonize AI governance, are becoming increasingly important. As stated by Nimrod Mike, such initiatives are crucial for fostering the development of trustworthy AI systems. Another key point explored during our workshop, thanks to the presentation of Miriam Doh and Anastasia Karagianni, was the issue of bias in various contexts. The growing use of Large Language Models (LLMs), trained on massive datasets from diverse sources, has raised concerns for several reasons. Primarily, LLMs pose threats to user privacy and complicate accountability. However, one of the most pressing issues from our perspective is bias. In fact, since LLMs are trained on human data, they are susceptible to inheriting human biases, which can lead to undesirable outcomes. In particular, it has been demonstrated that AI systems, similar to humans, show variations in gender classification accuracy based on the perceived attractiveness of the individual, reflecting human biases. This is just one example of the many problems caused by bias in LLMs, but it highlights the severity of the issue. This challenge must be addressed more comprehensively in future legislation to mitigate the risks posed by biased AI systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Submissions</head><p>The Program Committee (PC) received a total of 7 submissions. Each paper was peer-reviewed by at least three PC members, following a double-blind reviewing process. The committee decided to accept 3 papers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Detailed Program</head><p>The IAIL 2024 program was organized into welcome and final remarks sessions, two invited talks, and two paper presentation sessions. The keynotes took place in the morning, each followed by an engaged discussion. The paper presentations took place in the afternoon and followed a highly interactive format, structured as short presentations with ample room for questions and comments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Summary of the presentations</head><p>The workshop had diverse contributions and two keynotes, allowing for discussions across law, computer science, sociology, and philosophy on the AI Act. Several regulatory frameworks, including the AI Act and laws from the U.S., China, and Japan, emphasize transparency, fairness, and accountability as crucial to trustworthy AI. However, concerns remain, such as the lack of sufficient safeguards for facial emotion recognition under the AI Act and conflicts with the GDPR, as highlighted by the contribution of Roberta Savella. While the AI Act adopts a risk-based approach, the U.S. relies on federal and state laws, and China on administrative rules. Global collaborations, like the Global Partnership on AI, are seen as essential to harmonizing AI governance, as reported in the work of Nimrod Mike. During the paper presentations, the topic of bias in Large Language Models (LLMs) was also examined. This topic is only marginally considered in the AI Act, but it is of utmost importance due to its many ethical implications. In particular, there is the problem of human biases potentially injected into LLMs during training. As an example, it has been shown that there are biases in gender classification related to attractiveness. This underlines the need for more robust regulatory measures to address bias in AI systems, as highlighted in the work of Miriam Doh and Anastasia Karagianni.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion and Remarks</head><p>From the discussions carried out in the IAIL 2024 workshop, it appears evident that multidisciplinarity is a key point for the effectiveness of the EU legal and ethical framework. The workshop itself is modest evidence of the productive results arising from the dialogue of scholars from different disciplines, with different approaches and motivations. Engaging in conversations and collaborations on human rights is the main goal that needs to be pursued in Europe and, hopefully, beyond. Other aspects are the importance of taking particular care of generative AI, the problem of many hands in dealing with the accountability principle, and the need for concrete steps to operationalize the AI Act. The papers highlighted the importance and strength of having a uniform EU legal and ethical framework, as well as the need for global collaboration to better shape the regulations for achieving trustworthy AI systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Acknowledgments</head><p>This workshop was partially supported by the following:</p><p>• SoBigData++ (GA n. 871042) - "European Integrated Infrastructure for Social Mining and Big Data Analytics" (https://plusplus.sobigdata.eu).
• The TAILOR network (GA n. 952215) - Foundations of Trustworthy AI: Integrating Reasoning, Learning and Optimization (https://tailor-network.eu).
• HumanE-AI-Net (GA n. 952026) - European network of Human-Centered Artificial Intelligence (https://www.humane-ai.eu/).
• ALTEP-DP (SRP54) - Articulating Law, Technology, Ethics and Politics (https://lsts.research.vub.be/ALTEP_DP).
• TANGO (101120763) - European Union's Horizon Europe research and innovation programme (https://tango-horizon.eu/category/news-events/page/2/).</p><p>The authors would like to thank the HHAI 2024 workshop chairs and organizers for providing an excellent framework for IAIL 2024.</p></div>
		</body>
		<back>
			<div type="references">

				<listBibl/>
			</div>
		</back>
	</text>
</TEI>
