<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The Ambiguous Risk-Based Approach of the Artificial Intelligence Act: Links and Discrepancies with Other Union Strategies</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Pietro</forename><surname>Dunn</surname></persName>
							<email>pietro.dunn2@unibo.it</email>
							<affiliation key="aff0">
								<orgName type="institution">Alma Mater Studiorum - Università di Bologna</orgName>
								<address>
									<addrLine>Via Zamboni 27/29</addrLine>
									<postCode>40126</postCode>
									<settlement>Bologna</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">University of Luxembourg</orgName>
								<address>
									<addrLine>4 Rue Alphonse Weicker</addrLine>
									<postCode>L-2721</postCode>
									<settlement>Luxembourg</settlement>
									<country key="LU">Luxembourg</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giovanni</forename><surname>De Gregorio</surname></persName>
							<email>giovanni.degregorio@csls.ox.ac.uk</email>
							<affiliation key="aff2">
								<orgName type="department">Centre for Socio-Legal Studies</orgName>
								<orgName type="institution">University of Oxford</orgName>
								<address>
									<addrLine>Manor Road</addrLine>
									<postCode>OX1 3UQ</postCode>
									<settlement>Oxford</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The Ambiguous Risk-Based Approach of the Artificial Intelligence Act: Links and Discrepancies with Other Union Strategies</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">11D18ECA57B251A49317F6024586395E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T23:34+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Risk-Based Regulation</term>
					<term>Artificial Intelligence Act</term>
					<term>Proportionality</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The AI Act regulation proposal adopts a risk-based approach to the regulation of artificial intelligence systems. As a matter of fact, the risk-based approach has become increasingly typical of Union strategies with respect to digital policies. However, the way such an approach has been implemented varies greatly: most notably, whereas the GDPR and, to a limited extent, the DSA regulation proposal adopt a bottom-up perspective, the AI Act rather reflects a top-down scheme, where the task of risk assessment is kept in the hands of the legislator. This position paper aims at highlighting the common features, as well as the differences, between the various legal acts discussed: in particular, by considering (optimal) proportionality and due diligence as characterizing features of the risk-based approach, the goal is to understand whether the AI Act does indeed reflect the typical principles of this developing legal model. Although noting that the role of due diligence is weaker within the regulation proposal, we argue that the central common point is represented by the (constitutionally relevant) goal of proportionality.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The advancement of technology always represents a challenge for regulators, who are called upon to strike a fair balance between the need to foster innovation and the often conflicting need to reduce the risk of collateral effects on individuals' lives and fundamental rights and freedoms. Such a tension between progress and risk is also typical of digital technologies <ref type="bibr" target="#b0">[1]</ref>: indeed, in the last few years, the Union has had to face the complex task of designing the appropriate regulatory strategy for the development of a digital single market competitive in the international landscape but respectful, at the same time, of human rights and democratic principles. <ref type="foot" target="#foot_0">1</ref> This task has become increasingly important vis-à-vis the rise of artificial intelligence and of the algorithmic society <ref type="bibr" target="#b1">[2]</ref>.</p><p>In its 2021 Communication on fostering a European approach to artificial intelligence,<ref type="foot" target="#foot_1">2</ref> accompanying the presentation of its proposal for an Artificial Intelligence Act (AI Act), <ref type="foot" target="#foot_2">3</ref> the Commission underscored the manifold potential benefits of AI: throughout the COVID-19 pandemic, for instance, AI was used to predict the geographical spread of the virus, as well as for diagnostic purposes and for developing new vaccines and drugs against it. However, algorithms and AI can also carry risks. A flaw in the design or in the training of AI, in some instances, could lead, for example, to personal injury or physical damage when such systems are used as safety components of a product. Moreover, when used for automated decision-making, algorithms can influence and sometimes affect individuals' exercise of fundamental rights <ref type="bibr" target="#b2">[3]</ref>. 
AI systems are particularly problematic since, in most cases, they lack transparency <ref type="bibr" target="#b3">[4]</ref>: this is worrying, for instance, vis-à-vis the risk of incorrect, biased and discriminatory results <ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref>.</p><p>To face the challenges raised by technological progress, Western countries have increasingly resorted to regulatory models based on the concept of risk <ref type="bibr" target="#b8">[9]</ref>, to be understood, technically, as the combination of the probability of a defined hazard occurring and the magnitude of the consequences that hazard may entail <ref type="bibr" target="#b9">[10]</ref>. Risk is thus used as a proxy for decision-making. Through the practices of risk analysis <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>, it is indeed possible to forecast, on the basis of a probabilistic logic, the future developments of a specific conduct or activity: based on this, the necessary mitigation strategies and tools may be identified.</p><p>All in all, risk-based regulation represents an attempt to face the new challenges of innovation through a rational and technocratic approach that fosters more efficient, objective, and fair governance, whilst fighting against "over-regulation, legalistic and prescriptive rules, and the high costs of regulation" <ref type="bibr" target="#b12">[13]</ref>. 
In particular, it uses risk as a tool to prioritize and target enforcement action in a manner that is proportionate to an actual hazard: regulation is thus calibrated to the actual needs of society vis-à-vis the risks connected to a product, service or activity <ref type="bibr" target="#b13">[14]</ref>.</p><p>The resort to risk-based regulation to face the new digital age is particularly evident when considering at least three fields: that of privacy and data protection; that of content moderation; and, finally, that of AI. As described elsewhere, indeed, the General Data Protection Regulation (GDPR) <ref type="foot" target="#foot_3">4</ref> , as well as the proposal for a Digital Services Act (DSA) <ref type="foot" target="#foot_4">5</ref> and the AI Act all adopt forms of risk-based approaches, although the perspective they take seems to shift progressively from a bottom-up to a top-down model. Because of such a different approach, doubts may arise with respect to the consistency of the legal framework governing digital technologies. In particular, as some researchers have already done <ref type="bibr" target="#b14">[15]</ref>, the question which may be posed is whether the AI Act actually entails a risk-based approach. The argument of the present position paper is that the link between the AI Act and previous legislative measures is based on the principle of (optimal) proportionality among conflicting constitutional interests: in this sense, risk-based regulation represents an expression of the developing digital constitutionalism in Europe <ref type="bibr" target="#b15">[16]</ref>.</p><p>Section 2 analyses the relationship of the risk-based regulatory model with the principles of proportionality and due diligence. Section 3 compares the GDPR, the DSA, and the AI Act to outline the progressive shift from a bottom-up to a top-down perspective. Section 4 highlights the roles of proportionality and due diligence in the AI Act. 
Finally, Section 5 draws some conclusions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Risk, "optimal" proportionality, and due diligence</head><p>Risk-based regulation is characterized by some typical features differentiating it from more traditional models of law. The present section focuses on two aspects which appear to be fundamental in the context of contemporary Union risk-based policies: the pursuit of an "optimal" balance of interests and the reliance on due diligence.</p><p>First of all, as mentioned above, the characteristic goal of the risk-based approach is that of creating a framework where legal obligations are tailored to the specific risks entailed by a particular activity or service, with a view to avoiding the overburdening of the regulated actors. The scheme of the risk-based approach differs from that of traditional "command-and-control" mechanisms, where the state, as the entity endowed with legal authority, sets the rules on a top-down basis to impose certain duties and obligations applicable indiscriminately to all natural and legal persons subject to its jurisdiction <ref type="bibr" target="#b10">[11]</ref>. In fact, risk-based regulation inherently seeks to operate a "discrimination" between the subjects of law, thus differentiating the legal regime governing them based, precisely, on the proxy of risk.</p><p>In this sense, risk-based regulation aims at pursuing goals similar to what Adrian Vermeule has defined as "optimizing constitutionalism", or the "mature position" on (constitutional) risk regulation <ref type="bibr" target="#b16">[17]</ref>. 
Vermeule, in fact, draws a distinction between "precautionary constitutionalism" and "optimizing constitutionalism":<ref type="foot" target="#foot_5">6</ref> whereas the former, in short, implies that "new instruments, technologies, and policies should be rejected unless and until they can be shown to be safe", the latter, instead of seeking "maximal precautions", aims to introduce "optimal precautions" in terms of costs and benefits. In other words, whereas the concern of precautionary constitutionalism is to prevent in toto the potential consequences of a risk, optimizing constitutionalism takes a more consequentialist view on the regulation of risk, and, taking into account the potential downsides and collateral effects of a "no-risk" policy, seeks to balance the need to contain risk and the need to avoid over-regulation. In this sense, the EU risk-based approach to digital technologies is somehow consistent with the notion of "optimizing constitutionalism", since its aim is to reduce the potential harms such technologies may entail for individuals and society, while at the same time ensuring the development of industry and the market.</p><p>Besides, within risk-based regulation, such a balancing operation is to some extent left directly to the discretion of the "regulatee", who retains some leeway as to the identification of the measures to be implemented to reduce and mitigate the risk of harms. As will be underscored below, this is especially true for the GDPR and, in part, for the DSA, whereas such a margin of discretion is much more limited within the AI Act.</p><p>Be that as it may, the reliance on the targets of regulation for the purpose of identifying the exact content of the measures to be put into place inherently implies the need for such actors to operate with due diligence. 
This should not come as a surprise: indeed, in the field of international law, the notion of "due diligence" has come to play an increasingly central role with respect to the duty of states to manage risks (for the environment, for the economy, for human rights, etc.) within their jurisdictions. As highlighted by Peters, Krieger, and Kreuzer, "due diligence is needed when a risk has to be controlled or contained, in order to prevent harm and damage done to another actor or to a public interest"; indeed, "the rise of the concept [of due diligence] is […] tied to the rise of the 'risk society' and the idea of risk management" <ref type="bibr" target="#b17">[18]</ref>. Risk-based regulation thus transposes the principle of due diligence from the framework of international law, that is, from the relations between states, to the framework of national law, translating it into a fundamental rule governing the behaviour of natural or legal persons acting within the state.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">The spectrum of the risk-based approach in EU digital policies</head><p>The risk-based approach towards digital policies has been developed over the last decade by EU law <ref type="bibr" target="#b18">[19]</ref>. Since the launch of the Digital Single Market Strategy, the Union has increasingly relied on a risk-based approach. Rather than just setting new rights and safeguards, the Union has tried to regulate risks by increasing the accountability of both public and private actors with respect to the risks and potential collateral effects resulting from their activities. The emergence of the risk-based approach within European digital policies is particularly evident when considering the recent legislative developments concerning the fields of data, online content, and artificial intelligence. Nonetheless, the way such an approach has been implemented varies significantly.</p><p>The General Data Protection Regulation (GDPR) follows a bottom-up perspective, in the sense that the evaluation of risk and the choice of mitigating measures are not defined by the law but are primarily left to the discretion of the targets of regulation themselves, i.e., to data controllers and processors: in this sense, the principle of accountability is the result of a legislative strategy aiming to greatly reduce the imposition of duties coming from "above". Quite the opposite, the proposed Artificial Intelligence Act (AI Act) takes a very different point of view, in that, although it provides for very different degrees of responsibility and imposes differentiated duties depending on the risk scores of regulated AI systems, it does not leave the task of evaluating such risk scores to the targets of regulation: in fact, it is the AI Act itself that, on a top-down basis, directly identifies the various categories of risk. 
Finally, in the field of online content, the Digital Services Act (DSA) aims at creating a hybrid system, which mixes the two opposite perspectives of the GDPR and the AI Act by identifying on a top-down basis four risk categories for providers of intermediary services while leaving them ample leeway to choose which measures to employ to reduce the negative externalities their activities entail.</p><p>The present section thus briefly describes the shift from a bottom-up perspective, characteristic of the GDPR, to the top-down one, typical of the AI Act.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">The risk-based approach in the GDPR and DSA</head><p>The bottom-up perspective of the GDPR emerges from the fact that data controllers themselves are entrusted with the duty to ensure that the processing of personal data is aligned with the general principles of the Regulation. In fact, data controllers must carry out a risk assessment with respect to the activities they conduct and develop the appropriate response to reduce any collateral effects affecting individuals' rights to privacy and data protection. It is from these duties that the concept of accountability arises, meaning that data controllers are held responsible for the decisions they make to minimize and mitigate damages: "the data holder […] is accountable for ensuring compliance with the principles (and rights of the data subject)" <ref type="bibr" target="#b19">[20]</ref>.</p><p>Accountability thus takes a dynamic form, since it varies depending on the nature, scope, context and purposes of processing as well as on the risks of varying likelihood and/or severity for the rights and freedoms of natural persons. In other words, the risk-based approach of the GDPR is inherently grounded upon a form of "responsibilisation of the regulatee" <ref type="bibr" target="#b9">[10]</ref> which translates, in turn, into the notion of accountability. It also translates into a model of "compliance 2.0", where the regulatee is not required to simply engage in a form of compliance consisting of "ticking boxes" but has to tailor the measures adopted to the situation at hand, with a view to respecting the rights and freedoms of data subjects <ref type="bibr" target="#b13">[14]</ref>. Thus, the binary logic of compliance/non-compliance, typical of the traditional rights-based approach of the European Union <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b20">21]</ref>, is overcome by the scalable logic of risk analysis. 
As a result, obligations may be "uneven" depending on the actors who are called to comply with the GDPR, but this different outcome is justified by the existence of a preliminary balancing test operated directly by data controllers.</p><p>This last aspect, which is precisely what characterizes the GDPR as a bottom-up risk-based regulation, emerges from a range of different provisions. For instance, apart from the provisions regulating in general the responsibility of data controllers 7 and introducing the principle of data protection by design and by default 8 <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b12">13]</ref>, the Regulation foresees a mandatory requirement that controllers carry out a data protection impact assessment (DPIA) whenever a specific type of processing is likely to result in a "high" risk to the rights and freedoms of natural persons. 9  Whereas the GDPR adopted a risk-based approach for the regulation of personal data in the EU, the DSA proposal features, with specific respect to content moderation practices, a "supervised risk management approach". 10 Indeed, presented together with the Digital Markets Act (DMA) in December 2020, the DSA aims inter alia at updating the intermediary liability regime established in 2000 by the e-Commerce Directive (ECD). 11 Though substantially maintaining the "safe harbor" approach developed by the ECD and inherited from the US <ref type="bibr" target="#b21">[22]</ref><ref type="bibr" target="#b22">[23]</ref><ref type="bibr" target="#b23">[24]</ref>, the Regulation proposal envisages a broad array of new duties and obligations for providers of intermediary services, with a view to guaranteeing a transparent and safe online environment <ref type="bibr" target="#b24">[25]</ref>. These duties and obligations, moreover, reveal the peculiar traits of the DSA's risk-based approach. 
In fact, said obligations are not applicable to all providers of intermediary services indiscriminately, but follow a pyramidal structure, based on which they are divided into four tiers. Indeed, on the basis of specific criteria concerning their size and the services they provide, providers are assigned to risk categories subject to different regimes. 12   7 Art. 24 GDPR. 8 Art. 25 GDPR. 9 Art. 35 GDPR. 10 Explanatory memorandum to the DSA proposal, p. 1. 11 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce'), O.J. 2000 L 178/1. 12 A small group of provisions thus applies to all providers of intermediary services, whereas the subsequent Articles have an increasingly narrow scope of application: hosting providers; online platforms; and "very large online platforms" (VLOPs). The obligations set by the DSA mainly move in two directions: first, that of fostering transparency concerning content moderation practices; second, that of making intermediaries, notably hosting providers and online platforms, more responsible for the content they host and contribute to disseminating. Therefore, as in the GDPR, the measures to be adopted by providers to face the risks arising from the services they offer are not horizontally equal but are directly calibrated based on varying risk assessment strategies. However, the DSA moves away from the pure bottom-up structure adopted by the GDPR, since decisions concerning the measures to adopt are not left entirely to the discretion of the targets of regulation. Indeed, the four categories for online intermediaries are established directly by the Regulation proposal and are regulated in a progressively more severe manner depending on a preliminary top-down risk assessment <ref type="bibr" target="#b25">[26]</ref>. 
The "responsibilisation of the regulatee" is thus weaker in the DSA than in the GDPR.</p><p>Besides, a certain margin of discretion is still left to the appreciation of the targets of regulation. In particular, in the case of very large online platforms (VLOPs), a particularly important duty is represented by the need to assess any significant risks entailed by their activities (including those concerning the dissemination of unlawful or harmful content and those potentially affecting the fundamental rights and freedoms of individuals) and to put in place the appropriate mitigation measures. 13 Such a provision shows how the gap between the DSA and the GDPR is only partial. Also, the establishment of an internal complaint-handling mechanism, 14 applicable to all online platforms, is another key example showing that these actors still retain a central role in defining which content items may or may not constitute unlawful or harmful content. All in all, the approach followed by the DSA, rather than being strictly top-down, seems to be hybrid. As such, both the GDPR and the DSA must necessarily rely, to a certain degree, on the due diligence of the targets of regulation: failure to develop mitigation strategies in a diligent manner will, inevitably, entail liability.</p><p>Moreover, both the GDPR and the DSA ultimately aim to establish an optimal balance between the goal of preventing harms deriving from digital technologies and the goal of guaranteeing an environment where the digital single market can fully flourish. Indeed, both acts incentivise the imposition of duties and obligations that are tailored as much as possible to each specific case. 
The GDPR's choice of delegating to data controllers and processors the decisions concerning the measures to be implemented, as well as the DSA's choice of creating an asymmetric legal regime for providers of intermediary services, are ultimately aimed at fostering a proportionate and optimal framework for actors in the digital market <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b18">19]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">The risk-based approach in the Artificial Intelligence Act</head><p>Within the AI Act, the trajectory from a bottom-up to a top-down perspective is seemingly complete. In fact, notwithstanding the explicit statement of the Commission, according to which the AI Act is fundamentally based upon a risk-based approach, some commentators have raised serious doubts concerning the possibility of actually recognising it as such <ref type="bibr" target="#b14">[15]</ref>. The Commission's intentions to adopt a balanced risk-based approach to the regulation of artificial intelligence already emerged within the 2020 White Paper on Artificial Intelligence. 15 The document highlighted the role that AI should play in the improvement of many aspects of our society, including healthcare, the mitigation of climate change, and efficiency in production. At the same time, it stressed the potential collateral impact of artificial intelligence systems on people's physical integrity as well as on their individual rights and liberties. According to the Union's strategy towards AI, the ultimate goal must be that of building an ecosystem of trust <ref type="bibr" target="#b26">[27]</ref> and excellence as a means to strike the correct balance between risk and innovation. 16  The AI Act proposal aims to build precisely that ecosystem of trust and excellence, thus representing a new critical step in the developing digital strategy of the Union. As is well known, the text of the proposal is structured upon four levels of risk, associated with certain AI systems and their use <ref type="bibr" target="#b27">[28]</ref>. This structure recalls, to a certain extent, that of the DSA: however, the AI Act leaves very little, if any, discretion to users and providers of AI. 
Rather than entrusting them with the task of assessing risks and developing the appropriate risk mitigation strategies, the choice of the AI Act is to set from above the rules of the game which must be complied with. In particular, all providers of hosting services will need to put in place a "notice and action" procedure: individuals or entities shall thus have the opportunity of flagging the presence of unlawful content, following which intermediaries will have to act expeditiously in order to avoid subsidiary liability for third-party content (Art. 14 DSA). 13 Artt. 26-27 DSA. 14 Art. 17 DSA. 15 COM/2020/65 final, "White Paper on Artificial Intelligence - A European approach to excellence and trust". 16 ibid., at 3.</p><p>What truly changes with the AI Act is how the assessment of risk is carried out and by whom: in the GDPR, such a task is in the hands of data controllers; in the DSA, the Union legislator sets a top-down framework applicable to all providers of intermediary services, while still leaving space for a certain margin of discretion as far as enforcement of the law is concerned (especially in the case of VLOPs). 
Within the AI Act, conversely, it is the legislator (together with the Commission) that is vested with the task of assessing risk: the leeway granted to providers and users is, in fact, minimal.</p><p>First, the AI Act proposal prohibits certain practices involving systems which are deemed "unacceptable" because considered a priori too dangerous 17 (these include applications that manipulate human behaviour to circumvent the free will of users; personal credit-based rating systems managed by governments; real-time biometric recognition systems in publicly accessible spaces for the purposes of law enforcement).</p><p>Second, the AI Act identifies a category of "high-risk" AI systems, 18 defined by the list contained in Annex III, which can be amended by the Commission based on a range of set criteria. 19 High-risk AI systems shall have to comply with an extensive series of requirements. Most interestingly, they seem to represent the only class where the legislator gives some leeway to the targets of regulation. Indeed, providers and users of those systems will have to establish, implement, document and maintain a risk management system, with a view to adopting suitable measures to face any known or foreseeable hazard. 20 Additionally, providers of high-risk AI systems are required to put in place a quality management system to ensure compliance with the entire Regulation. 21 Nonetheless, it must be stressed that the actual margin of discretion for providers and users of high-risk systems is still very residual.</p><p>Third, some AI applications are included in a category characterized by "limited risks" (systems intended to interact with natural persons; emotion recognition or biometric categorization systems; systems capable of generating "deep fake" content). 22 Providers and users of such tools shall comply with specific transparency requirements. 
Finally, a residual category of "minimal risk" is associated with AI applications that do not have the same invasiveness as those described above: since it is constructed as a residual category, it embraces an ample set of AI applications and systems. Minimal-risk AI applications are not subject to any specific duty or obligation, although the Commission and Member States should encourage and facilitate the drawing up of codes of conduct intended to foster on their part the voluntary application of the requirements set for high-risk systems. 23  In this case, the shift from a bottom-up to a top-down interpretation of risk-based regulation, already partially emerging from the DSA, reaches its apex. The categories of risk are defined directly by the EU Commission and set in stone within the law. The list of "unacceptable", and therefore prohibited, AI systems is directly set by the law and is independent of any a posteriori risk assessment by providers or users of those systems. The category of high-risk technologies is also defined directly by the law: in this case, the category is seemingly less rigid and more open to ex post change, since a procedure to amend Annex III is possible. However, it is once again up to the EU Commission to make the necessary adjustments. The AI Act sets a range of risk criteria: however, in this case, they are meant as a guide for the Commission itself, and not for the targets of regulation. Moreover, although it is true that a risk management system for high-risk AI systems is introduced, extensive top-down rules specify how to implement it, thus leaving a relatively limited margin of discretion to providers and users. Additionally, high-risk systems have to comply with a far-reaching set of duties and obligations which follow a binary compliance/non-compliance logic. 17 Art. 5 AI Act. 18 ibid., Art. 6. 19 ibid., Art. 7. 20 ibid., Art. 9. 21 ibid., Art. 17. 22 ibid., Art. 52. 23 ibid., Art. 69.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">(Optimal) proportionality and due diligence in the AI Act</head><p>Having outlined the peculiar perspective adopted by the AI Act with respect to the regulation of the risks posed by AI systems, it is important to focus on the role played by the features of proportionality and due diligence within the system created by the Regulation proposal, so as to understand what the link is between the AI Act and previous risk-based regulatory models devised by the Union with respect to matters concerning the digital field.</p><p>The goal of (optimal) proportionality within the AI Act emerges explicitly from the Explanatory Memorandum, where the European Commission stated that the proposal "puts in place a proportionate regulatory system centred on a well-defined risk-based regulatory approach that does not create unnecessary restrictions to trade", also adding that "legal intervention is tailored to those concrete situations where there is a justified cause for concern or where such concern can reasonably be anticipated in the near future". <ref type="foot" target="#foot_6">24</ref> These statements, focusing especially on the centrality of proportionality between regulation and risk, seem to resonate with the GDPR and the DSA. It is true, as a matter of fact, that the choice of resorting to a top-down structure makes the law much more rigid: if compared with the GDPR, the AI Act does not allow much space to tailor the measures to the specific risks. Nevertheless, the spirit of the law, as confirmed by the words of the Commission, is still that of implementing a legal framework where proportionality is the ultimate goal to be attained. 
Although the system is more rigid, the envisioning of a differentiated regulatory regime based on risk still embodies the core of the principle of proportionality characterizing the digital policies of the European Union.</p><p>Of course, the adoption of a more rigid scheme directly affects the principle of accountability, which, within the system developed by the GDPR, is tied to the freedom granted to data controllers and processors in choosing the measures to adopt to protect data subjects' rights to privacy and data protection. Accountability is a direct corollary of a regulatory system which, to a certain extent, delegates to its targets the power to decide how to balance their own interests with the need to protect, guarantee and foster the rights and liberties of individuals <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b18">19]</ref>. What changes in the AI Act, at a deeper level, is thus the relationship between regulator and regulatee: whereas in the GDPR the former delegated the duty of assessing risk to the latter, which was accordingly responsible for it, such delegation is almost absent from the AI Act.</p><p>As a result, the principle of due diligence is also much less prominent within the AI Act than within the GDPR and the DSA. Because regulatees are given less choice as to the means of complying with the law, due diligence mainly applies at the level of the implementation of the necessary measures, rather than at the level of their design. A few provisions, as mentioned above, leave leeway for minor customization in the choice of the mitigation system to adopt: such liberty is, however, quite limited.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>The bottom-up, hybrid, and top-down approaches to risk-based regulation</p><table><row role="label"><cell>Bottom-up (GDPR)</cell><cell>Hybrid (DSA)</cell><cell>Top-down (AI Act)</cell></row><row><cell>Risk assessment made by the targets of regulation</cell><cell>Risk assessment shared between the law maker and the targets of regulation</cell><cell>Risk assessment made by the law maker</cell></row><row><cell>Wide margin of discretion</cell><cell>Moderate margin of discretion</cell><cell>Limited margin of discretion</cell></row><row><cell>Goal: optimal balancing (proportionality)</cell><cell>Goal: optimal balancing (proportionality)</cell><cell>Goal: optimal balancing (proportionality)</cell></row></table></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>Risk regulation has gathered momentum across Western democracies and has become an increasingly popular regulatory tool to foster Union policies in a range of operative fields, including, lately, the governance of the Digital Single Market in the context of the algorithmic society.</p><p>Ultimately, the fil rouge connecting the AI Act with the GDPR and the DSA, and with the risk-based approach in general, is the goal of developing a legal framework for digital technologies that promotes an "optimal" balancing between the interests involved. If the European constitutional experience is characterized by the striving to strike an equal, and proportionate, balance between the interests of the various social parties, the common feature at the heart of the GDPR, the DSA, and the AI Act is precisely their aspiration to create a digital environment which embraces European constitutional values and principles.</p><p>Although due diligence still represents an important aspect of the AI Act, proportionality appears to be, ultimately, the common and central aspect unifying the strategies of the EU in this field. To this extent, the risk-based approach ultimately represents an instrument to develop a constitutionally sound environment. It is one of the expressions of European digital constitutionalism <ref type="bibr" target="#b28">[29]</ref>, where market interests and societal, democratic, and fundamental rights interests must be equally protected.</p></div>			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Cf. Commission's Communication on a Digital Single Market Strategy for Europe, COM(2015)192 final.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">COM(2021)205 final.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">COM(2021)206 final, "Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts".</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), O.J. 2016, L 119/1.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">COM(2020)825 final, "Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC".</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">In its analysis, Vermeule focuses on "political risks". Nonetheless, such a distinction may ultimately be applied to all types of risk.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="24" xml:id="foot_6">Explanatory memorandum to the AI Act proposal, at 3.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Digital risk society</title>
		<author>
			<persName><forename type="first">D</forename><surname>Lupton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Routledge Handbook of Risk Studies</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Burgess</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Alemanno</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">O</forename><surname>Zinn</surname></persName>
		</editor>
		<meeting><address><addrLine>London</addrLine></address></meeting>
		<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="301" to="309" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Balkin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">U.C.D. L. Rev</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page" from="1149" to="1210" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m">European Union Agency for Fundamental Rights (FRA), Getting the Future Right. Artificial Intelligence and Fundamental Rights</title>
				<meeting><address><addrLine>Luxembourg</addrLine></address></meeting>
		<imprint>
			<publisher>Publications Office of the European Union</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">How the machine &apos;thinks&apos;: Understanding opacity in machine learning algorithms</title>
		<author>
			<persName><forename type="first">J</forename><surname>Burrell</surname></persName>
		</author>
		<idno type="DOI">10.1177/2053951715622512</idno>
	</analytic>
	<monogr>
		<title level="j">Big Data &amp; Society</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Algorithms of oppression: how search engines reinforce racism</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">U</forename><surname>Noble</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>New York University Press</publisher>
			<pubPlace>New York, NY</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">New laws of robotics: defending human expertise in the age of AI</title>
		<author>
			<persName><forename type="first">F</forename><surname>Pasquale</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>The Belknap Press of Harvard University Press</publisher>
			<pubPlace>Cambridge, MA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m">European Commission, Algorithmic discrimination in Europe: Challenges and opportunities for gender equality and non-discrimination law</title>
				<meeting><address><addrLine>Luxembourg</addrLine></address></meeting>
		<imprint>
			<publisher>Publications Office of the European Union</publisher>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Bias Preservation in Machine Learning: The Legality of Fairness Metrics under EU Non-Discrimination Law</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wachter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mittelstadt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Russell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">W. Va. L. Rev</title>
		<imprint>
			<biblScope unit="volume">123</biblScope>
			<biblScope unit="page" from="735" to="790" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Risk as an Approach to Regulatory Governance: An Evidence Synthesis and Research Agenda</title>
		<author>
			<persName><forename type="first">J</forename><surname>Van Der Heijden</surname></persName>
		</author>
		<idno type="DOI">10.1177/21582440211032202</idno>
	</analytic>
	<monogr>
		<title level="j">SAGE Open</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Gellert</surname></persName>
		</author>
		<title level="m">The Risk-Based Approach to Data Protection</title>
				<meeting><address><addrLine>Oxford</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Risk, Regulation, and Management</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">M</forename><surname>Hutter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Risk in Social Science</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Taylor-Gooby</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">O</forename><surname>Zinn</surname></persName>
		</editor>
		<meeting><address><addrLine>Oxford</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="202" to="227" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Regulating the European Risk Society</title>
		<author>
			<persName><forename type="first">A</forename><surname>Alemanno</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-1-4614-4406-0_3</idno>
	</analytic>
	<monogr>
		<title level="m">Better Business Regulation in a Risk Society</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Alemanno</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Butter</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Nijsen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Torriti</surname></persName>
		</editor>
		<meeting><address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="37" to="56" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The &quot;Riskification&quot; of European Data Protection Law through a two-fold Shift</title>
		<author>
			<persName><forename type="first">M</forename><surname>Macenaite</surname></persName>
		</author>
		<idno type="DOI">10.1017/err.2017.40</idno>
	</analytic>
	<monogr>
		<title level="j">European Journal of Risk Regulation</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="506" to="540" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Enhancing Compliance under the General Data Protection Regulation: The Risky Upshot of the Accountability-and Risk-based Approach</title>
		<author>
			<persName><forename type="first">C</forename><surname>Quelle</surname></persName>
		</author>
		<idno type="DOI">10.1017/err.2018.47</idno>
		<ptr target="https://doi.org/10.1017/err.2018.47" />
	</analytic>
	<monogr>
		<title level="j">European Journal of Risk Regulation</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="502" to="526" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Edwards</surname></persName>
		</author>
		<ptr target="https://www.adalovelaceinstitute.org/wp-content/uploads/2022/03/Expert-opinion-Lilian-Edwards-Regulating-AI-in-Europe.pdf" />
		<title level="m">Regulating AI in Europe: four problems and four solutions</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note>Ada Lovelace Institute</note>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">De</forename><surname>Gregorio</surname></persName>
		</author>
		<title level="m">Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society</title>
				<meeting><address><addrLine>Cambridge</addrLine></address></meeting>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Vermeule</surname></persName>
		</author>
		<title level="m">The Constitution of Risk</title>
				<meeting><address><addrLine>Cambridge</addrLine></address></meeting>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Due Diligence in the International Legal Order: Dissecting the Leitmotif of Current Accountability Debates</title>
		<author>
			<persName><forename type="first">A</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Krieger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kreuzer</surname></persName>
		</author>
		<idno type="DOI">10.1093/oso/9780198869900.003.0001</idno>
	</analytic>
	<monogr>
		<title level="m">Due Diligence in the International Legal Order</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Krieger</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Peters</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Kreuzer</surname></persName>
		</editor>
		<meeting><address><addrLine>Oxford</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="19" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">The European risk-based approaches: Connecting constitutional dots in the digital age</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">De</forename><surname>Gregorio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dunn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CMLR</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="473" to="500" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Accountability of Algorithms in the GDPR and Beyond: A European Legal Framework on Automated Decision-Making</title>
		<author>
			<persName><forename type="first">C</forename><surname>Castets-Renard</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fordham Intellectual Property, Media and Entertainment Law Journal</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="91" to="137" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">O</forename><surname>Lynskey</surname></persName>
		</author>
		<title level="m">The Foundations of EU Data Protection Law</title>
				<meeting><address><addrLine>Oxford</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Edwards</surname></persName>
		</author>
		<title level="m">Articles 12-15 ECD: ISP liability. The problem of intermediary service provider liability</title>
				<editor>
			<persName><forename type="first">L</forename><surname>Edwards</surname></persName>
		</editor>
		<meeting><address><addrLine>Oxford</addrLine></address></meeting>
		<imprint>
			<publisher>Hart</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="93" to="136" />
		</imprint>
	</monogr>
	<note>The new legal framework for e-commerce in Europe</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">The Immunity of Internet Intermediaries Reconsidered?</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">N</forename><surname>Yannopoulos</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-47852-4_3</idno>
	</analytic>
	<monogr>
		<title level="m">The Responsibilities of Online Service Providers</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Taddeo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="43" to="59" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity</title>
		<author>
			<persName><forename type="first">D</forename><surname>Citron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wittes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fordham Law Review</title>
		<imprint>
			<biblScope unit="volume">86</biblScope>
			<biblScope unit="page" from="401" to="424" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">A New Order: The Digital Services Act and Consumer Protection</title>
		<author>
			<persName><forename type="first">C</forename><surname>Cauffman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Goanta</surname></persName>
		</author>
		<idno type="DOI">10.1017/err.2021.8</idno>
	</analytic>
	<monogr>
		<title level="j">European Journal of Risk Regulation</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="758" to="774" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<author>
			<persName><forename type="first">Z</forename><surname>Efroni</surname></persName>
		</author>
		<ptr target="https://policyreview.info/articles/news/digital-services-act-risk-based-regulation-online-platforms/1606" />
	</analytic>
	<monogr>
		<title level="m">The Digital Services Act: risk-based regulation of online platforms</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Establishing the rules for building trustworthy AI</title>
		<author>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</author>
		<idno type="DOI">10.1038/s42256-019-0055-y</idno>
	</analytic>
	<monogr>
		<title level="j">Nat Mach Intell</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="261" to="262" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Standardizing AI -The Case of the European Commission&apos;s Proposal for an Artificial Intelligence Act</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ebers</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics</title>
				<editor>
			<persName><forename type="first">L</forename><surname>Di Matteo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Poncibò</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Cannarsa</surname></persName>
		</editor>
		<meeting><address><addrLine>Cambridge</addrLine></address></meeting>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note>forthcoming</note>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">The rise of digital constitutionalism in the European Union</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">De</forename><surname>Gregorio</surname></persName>
		</author>
		<idno type="DOI">10.1093/icon/moab001</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Constitutional Law</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="41" to="70" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
