<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Ambiguous Risk-Based Approach of the Artificial Intelligence Act: Links and Discrepancies with Other Union Strategies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pietro Dunn</string-name>
          <email>pietro.dunn2@unibo.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanni De Gregorio</string-name>
          <email>giovanni.degregorio@csls.ox.ac.uk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Alma Mater Studiorum - Università di Bologna</institution>
          ,
          <addr-line>Via Zamboni 27/29, Bologna, 40126</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Centre for Socio-Legal Studies, University of Oxford</institution>
          ,
          <addr-line>Manor Road, Oxford, OX1 3UQ</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Luxembourg</institution>
          ,
          <addr-line>4 Rue Alphonse Weicker, Luxembourg, L-2721</addr-line>
          ,
          <country country="LU">Luxembourg</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <abstract>
        <p>The AI Act regulation proposal adopts a risk-based approach to the regulation of artificial intelligence systems. The risk-based approach has, in fact, become increasingly typical of Union strategies with respect to digital policies. However, the way such an approach has been articulated varies greatly: most notably, whereas the GDPR and, to a more limited extent, the DSA regulation proposal adopt a bottom-up perspective, the AI Act rather reflects a top-down scheme, where the task of risk assessment is kept within the hands of the legislator. This position paper aims at highlighting the common features, as well as the differences, between the various legal acts discussed: in particular, by considering (optimal) proportionality and due diligence as characterizing features of the risk-based approach, the goal is to understand whether the AI Act does indeed reflect the typical principles of this developing legal model. Although noting that the role of due diligence is feebler within the regulation proposal, we argue that the central common point is represented by the (constitutionally relevant) goal of proportionality.</p>
      </abstract>
      <kwd-group>
        <kwd>Risk-Based Regulation</kwd>
        <kwd>Artificial Intelligence Act</kwd>
        <kwd>Proportionality</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Technological progress has always represented a challenge for regulators, who are
called upon to strike a fair balance between the need to foster innovation and the often conflicting need
to reduce the risk of collateral effects on individuals’ lives and fundamental rights and freedoms. Such
a tension between progress and risk is also typical of digital technologies [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]: indeed, in the last few
years, the Union has had to face the complex task of designing the appropriate regulatory strategy for
the development of a digital single market competitive in the international landscape but respectful, at
the same time, of human rights and democratic principles.1 This task has become increasingly important
vis-à-vis the rise of artificial intelligence and of the algorithmic society [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>In its 2021 Communication on fostering a European approach to artificial intelligence,2
accompanying the presentation of its proposal for an Artificial Intelligence Act (AI Act),3 the
Commission underscored the manifold potential benefits of AI: throughout the COVID-19 pandemic,
for instance, AI was used to predict the geographical spread of the virus, as well as for diagnostic
purposes and for developing new vaccines and drugs against it. However, algorithms and AI can also
carry risks. A flaw in the design or in the training of an AI system could lead, for example, to
personal injury or physical damage when such systems are used as safety components of a product.</p>
      <p>
Moreover, when used for automated decision-making, algorithms can influence and sometimes affect
individuals’ exercise of fundamental rights [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. AI systems are particularly problematic since, in most
cases, they lack transparency [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]: this is worrying, for instance, vis-à-vis the risk of incorrect, biased
and discriminatory results [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5–8</xref>
        ].
      </p>
      <p>
        To face the challenges raised by technological progress, Western countries have resorted more and
more to regulatory models based on the concept of risk [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], to be understood, technically, as a combination of the probability of a defined hazard occurring and the magnitude of the consequences that
hazard may entail [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Risk is thus used as a proxy for decision-making. Through the practices of risk
analysis [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], it is indeed possible to forecast, on a probabilistic basis, the future developments of a
specific conduct or activity: based on this, the necessary mitigation strategies and tools may be
identified.
      </p>
      <p>
        All in all, risk-based regulation represents an attempt to face the new challenges of innovation
through a rational and technocratic approach that fosters more efficient, objective, and fair governance,
whilst fighting against “over-regulation, legalistic and prescriptive rules, and the high costs of
regulation” [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. In particular, it uses risk as a tool to prioritize and target enforcement action in a
manner that is proportionate to an actual hazard: regulation is thus calibrated to the actual needs of
society vis-à-vis the risks connected to a product, service or activity [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>
        The resort to risk-based regulation to face the new digital age is particularly evident when
considering at least three fields: that of private and data protection; that of content moderation; and,
finally, that of AI. As described elsewhere, indeed, the General Data Protection Regulation (GDPR)4,
as well as the proposal for a Digital Services Act (DSA)5 and the AI Act all adopt forms of risk-based
approaches, although the perspective they take seems to shift progressively from a bottom-up to a
top-down model. Because of these differing approaches, doubts may arise with respect to the consistency of
the legal framework governing digital technologies. In particular, as some researchers have
already done [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], the question may be posed whether the AI Act actually entails a risk-based
approach. The argument of the present position paper is that the link between the AI Act and previous
legislative measures lies in the principle of (optimal) proportionality among conflicting
constitutional interests: in this sense, risk-based regulation represents an expression of the developing
digital constitutionalism in Europe [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>Section 2 analyses the relationship of the risk-based regulatory model with the principles of
proportionality and due diligence. Section 3 compares the GDPR, the DSA, and the AI Act to outline
the progressive shift from a bottom-up to a top-down perspective. Section 4 highlights what the
roles of proportionality and due diligence are in the AI Act. Finally, Section 5 draws some conclusions.</p>
    </sec>
    <sec id="sec-1b">
      <title>2. Risk, “optimal” proportionality, and due diligence</title>
      <p>Risk-based regulation is characterized by some typical features differentiating it from more
traditional models of law. The present subsection focuses on two aspects which appear to be
fundamental in the context of contemporary Union risk-based policies: the pursuit of an “optimal”
balance of interests and the reliance on due diligence.</p>
      <p>
        First of all, as mentioned above, the characteristic goal of the risk-based approach is that of creating
a framework where legal obligations are tailored to the specific risks entailed by a particular activity or
service, with a view to avoiding the overburdening of the regulated actors. The scheme of the risk-based
approach differs from that of traditional “command-and-control” mechanisms, where the state, as the
entity endowed with legal authority, sets the rules on a top-down basis to impose certain duties and
obligations applicable indiscriminately to all natural and legal persons subject to its jurisdiction [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
In fact, risk-based regulation inherently seeks to operate a “discrimination” between the subjects of law,
thus differentiating the legal regime governing them based, precisely, on the proxy of risk.
4 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard
to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection
Regulation), O.J. 2016, L 119/1.
5 COM(2020)825 final, “Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services
(Digital Services Act) and amending Directive 2000/31/EC”.
      </p>
      <p>
        In this sense, risk-based regulation aims at pursuing goals similar to what Adrian Vermeule has
defined as “optimizing constitutionalism”, or the “mature position” to (constitutional) risk regulation
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Vermeule, in fact, draws a distinction between “precautionary constitutionalism” and
“optimizing constitutionalism”:6 whereas the former, in synthesis, implies that “new instruments,
technologies, and policies should be rejected unless and until they can be shown to be safe”, the latter,
instead of seeking “maximal precautions”, aims to introduce “optimal precautions” in terms of costs
and benefits. In other words, whereas the concern of precautionary constitutionalism is to prevent in
toto the potential consequences of a risk, optimizing constitutionalism takes a more consequentialist
view on the regulation of risk, and, taking into account the potential downsides and collateral effects of
a “no-risk” policy, seeks to balance the need to contain risk and the need to avoid over-regulation. In
this sense, the EU risk-based approach to digital technologies is somehow consistent with the notion of
“optimizing constitutionalism”, since its aim is to reduce the potential harms such technologies may
entail for individuals and society, while at the same time ensuring the development of industry and the
market.
      </p>
      <p>Moreover, within risk-based regulation, such a balancing operation is to some extent left directly to
the discretion of the “regulatee”, who retains some leeway as to the identification of the measures to be
implemented to reduce and mitigate the risk of harms. As will be underscored below, this is especially
true for the GDPR and, in part, for the DSA, whereas such a margin of discretion is much more limited
within the AI Act.</p>
      <p>
        Be that as it may, the reliance on the targets of regulation for the purpose of identifying the exact
content of the measures to be put into place inherently implies the need for such actors to operate with
due diligence. This should not come as a surprise: indeed, in the field of international law, the notion
of “due diligence” has come to play an increasingly central role with respect to the duty of states to
manage risks (for the environment, for economy, for human rights, etc.) within their jurisdictions. As
highlighted by Peters, Krieger, and Kreuzer, “due diligence is needed when a risk has to be controlled
or contained, in order to prevent harm and damage done to another actor or to a public interest”; indeed,
“the rise of the concept [of due diligence] is […] tied to the rise of the ‘risk society’ and the idea of risk
management” [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Risk-based regulation thus transposes the principle of due diligence from the
framework of international law, and thus from the relations between states, to the framework of national
law, translating it into a fundamental rule governing the behaviour of natural or legal persons acting
within the state.
      </p>
    </sec>
    <sec id="sec-2">
      <title>3. The spectrum of the risk-based approach in EU digital policies</title>
      <p>
        The risk-based approach towards digital policies has been developed through the last decade by EU
law [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Since the launch of the Digital Single Market Strategy, the Union has increasingly relied on a
risk-based approach. Rather than just setting new rights and safeguards, the Union has tried to regulate
risks by increasing the accountability of both public and private actors with respect to the risks and
potential collateral effects resulting from their activities. The emergence of the risk-based approach
within European digital policies is particularly evident when considering the recent legislative
developments concerning the fields of data, online content, and artificial intelligence. Nonetheless, the
way such an approach has been articulated varies significantly.
      </p>
      <p>The General Data Protection Regulation (GDPR) follows a bottom-up perspective, in the sense that
the evaluation of risk and the choice of mitigating measures are not defined by the law but are primarily
left to the discretion of the targets of regulation themselves, i.e., to data controllers and processors: in
this sense, the principle of accountability is the result of a legislative strategy aiming to greatly reduce
the imposition of duties coming from “above”. Quite the opposite, the proposed Artificial Intelligence
Act (AI Act) takes a very different point of view, in that, although it provides for varying degrees
of responsibility and imposes differentiated duties depending on the risk scores of regulated AI systems,
it does not leave the task of evaluating such risk scores to the targets of regulation: in fact, it is the AI
Act itself that, on a top-down basis, identifies directly the various categories of risk. Finally, in the field
of online content, the Digital Services Act (DSA) aims at creating a hybrid system, which mixes the
two opposite perspectives of the GDPR and the AI Act by identifying on a top-down basis four risk
categories for providers of intermediary services while leaving them ample leeway to choose which
measures to employ to reduce the negative externalities their activities entail.
6 In its analysis, Vermeule focuses on “political risks”. Nonetheless, such a distinction may ultimately be applied to all types of risk.</p>
      <p>The present section thus briefly describes the shift from a bottom-up perspective, characteristic of
the GDPR, to the top-down one, typical of the AI Act.</p>
    </sec>
    <sec id="sec-3">
      <title>3.1. The risk-based approach in the GDPR and DSA</title>
      <p>
        The bottom-up perspective of the GDPR emerges from the fact that data controllers are themselves
entrusted with the duty to ensure that the processing of personal data is aligned with the general
principles of the Regulation. In fact, data controllers must operate a risk assessment with respect to the
activities they conduct and develop the appropriate response to reduce any collateral effects affecting
individuals’ rights to privacy and data protection. It is from these duties that the concept of
accountability arises, meaning that data controllers are held responsible for the decisions they make to
minimize and mitigate damages: “the data holder […] is accountable for ensuring compliance with the
principles (and rights of the data subject)” [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>
        Accountability thus takes a dynamic form, since it varies depending on the nature, scope, context
and purposes of processing as well as on the risks of varying likelihood and/or severity for the rights
and freedoms of natural persons. In other words, the risk-based approach of the GDPR is inherently
grounded upon a form of “responsibilisation of the regulatee” [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] which translates, in turn, into the
notion of accountability. It also translates into a model of “compliance 2.0”, where the regulatee is not
required to simply engage in a form of compliance consisting of “ticking boxes” but has to tailor the
measures adopted to the situation at hand, with a view to respecting the rights and freedoms of data
subjects [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. In other words, the binary logic of compliance/non-compliance, typical of the traditional
rights-based approach of the European Union [
        <xref ref-type="bibr" rid="ref10 ref21">10, 21</xref>
        ], is overcome by the scalable logic of risk analysis.
As a result, obligations may be “uneven” depending on the actors who are called to comply with the
GDPR, but this different outcome is justified by the existence of a preliminary balancing test operated
directly by data controllers.
      </p>
      <p>
        This last aspect, which is precisely what characterizes the GDPR as a bottom-up risk-based
regulation, emerges from a range of different provisions. For instance, apart from the provisions
regulating in general the responsibility of data controllers7 and introducing the principle of data
protection by design and by default8 [
        <xref ref-type="bibr" rid="ref10 ref13">10, 13</xref>
        ], the Regulation foresees a mandatory requirement that
controllers carry out a data protection impact assessment (DPIA) whenever a specific type of processing
is likely to result in a “high” risk to the rights and freedoms of natural persons.9
      </p>
      <p>
        Whereas the GDPR adopted a risk-based approach for the regulation of personal data in the EU, the
DSA proposal features, with specific respect to content moderation practices, a “supervised risk
management approach”.10 Indeed, presented together with the Digital Markets Act (DMA) in December
2020, the DSA aims inter alia at updating the intermediary liability regime established in 2000 by the
e-Commerce Directive (ECD).11 Though maintaining substantially unaltered the “safe harbor” approach
developed by the ECD and inherited from the US [
        <xref ref-type="bibr" rid="ref22 ref23 ref24">22–24</xref>
        ], the Regulation proposal envisages a broad
array of new duties and obligations for providers of intermediary services, with a view to guaranteeing
a transparent and safe online environment [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. These duties and obligations, moreover, reveal the
peculiar traits of the DSA’s risk-based approach. In fact, said obligations are not applicable to all
providers of intermediary services indiscriminately, but follow a pyramidal structure, based on which
they are divided into four tiers. Indeed, on the basis of specific criteria concerning their size and
the services they provide, providers are assigned to risk categories subject to differentiated rules.12
7 Art. 24 GDPR.
8 Art. 25 GDPR.
9 Art. 35 GDPR.
10 Explanatory memorandum to the DSA proposal, p. 1.
11 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services,
in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’), O.J. 2000 L 178/1.
12 A small group of provisions thus applies to all providers of intermediary services, whereas the subsequent Articles have an increasingly
narrow scope of application: hosting providers; online platforms; and “very large online platforms” (VLOPs). The obligations set by the DSA
mainly move in two directions: first, that of fostering transparency concerning content moderation practices; second, that of making
intermediaries, notably hosting providers and online platforms, more responsible for the content they host and contribute to disseminating. In
particular, all providers of hosting services will need to put in place a “notice and action” procedure: individuals or entities shall thus have the
opportunity of flagging the presence of unlawful content, following which intermediaries will have to act expeditiously in order to avoid
subsidiary liability for third-party content (Art. 14 DSA).
      </p>
      <p>
        Therefore, as in the GDPR, the measures to be adopted by providers to face the risks arising from
the services they offer are not horizontally equal but are directly calibrated based on varying risk
assessment strategies. However, the DSA moves away from the pure bottom-up structure adopted by
the GDPR, since decisions concerning the measures to adopt are not left entirely to the discretion of the
targets of regulation. Indeed, the four categories for online intermediaries are established directly by
the Regulation proposal and are disciplined in a progressively more severe manner depending on a
preliminary top-down risk assessment [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. The “responsibilisation of the regulatee” is thus feebler
in the DSA than in the GDPR.
      </p>
      <p>Nevertheless, a certain margin of discretion is still left to the appreciation of the targets of regulation. In
particular, in the case of very large online platforms (VLOPs), an especially important duty is
represented by the need to assess any significant risks entailed by their activities (including those
concerning the dissemination of unlawful or harmful content and those potentially affecting the
fundamental rights and freedoms of individuals) and to put in place the appropriate mitigation
measures.13 Such a provision shows how the gap between the DSA and the GDPR is only partial. Also,
the establishment of an internal complaint-handling mechanism,14 applicable to all online platforms, is
another key example showing that these actors still retain a central role in defining which content items
may or may not represent unlawful or harmful content. All in all, the approach followed by the DSA,
rather than being strictly top-down, seems to be hybrid. As such, both the GDPR and the DSA must
necessarily rely, to a certain degree, on the due diligence of the targets of regulation: failure to develop
mitigation strategies in a diligent manner will, inevitably, entail liability.</p>
      <p>
        Moreover, both the GDPR and the DSA ultimately aim to establish an optimal balance between the
goal of preventing harms deriving from digital technologies and the goal of guaranteeing an
environment where the digital single market can fully flourish. Indeed, both acts incentivise the
imposition of duties and obligations that are tailored as much as possible to each specific case.
The GDPR’s choice of delegating to data controllers and processors the decisions concerning the
measures to be implemented, as well as the DSA’s choice of creating an asymmetric legal regime for
providers of intermediary services, are ultimately aimed at fostering a proportionate and optimal
framework for actors in the digital market [
        <xref ref-type="bibr" rid="ref17 ref19">17, 19</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>3.2. The risk-based approach in the Artificial Intelligence Act</title>
      <p>
        Within the AI Act, the trajectory from a bottom-up to a top-down perspective is seemingly complete.
In fact, notwithstanding the explicit statement of the Commission, according to which the AI Act is
fundamentally based upon a risk-based approach, some commentators have raised serious doubts
concerning the possibility of actually recognising it as such [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        The Commission’s intentions to adopt a balanced risk-based approach to the regulation of artificial
intelligence already emerged within the 2020 White Paper on Artificial Intelligence.15 The document
highlighted the role that AI should play in the improvement of many aspects of our society, including
healthcare, the mitigation of climate change, and efficiency in production. At the same time, it stressed
the potential collateral impact of artificial intelligence systems on people’s physical integrity as well as
on their individual rights and liberties. According to the Union’s strategy towards AI, the ultimate goal
must be that of building an ecosystem of trust [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] and excellence as a means to strike the correct
balance between risk and innovation.16
      </p>
      <p>
        The AI Act proposal aims to build precisely that ecosystem of trust and excellence, thus representing
a new critical step in the developing digital strategy of the Union. As is well known, the text of the
proposal is structured upon four levels of risk, associated with certain AI systems and their use [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ].
This structure recalls, to a certain extent, that of the DSA: however, the AI Act leaves very little, if any,
discretion to users and providers of AI. Rather than entrusting them with the task of assessing risks and
developing the appropriate risk mitigation strategies, the choice of the AI Act is to set from above the
rules of the game which must be complied with.
13 Artt. 26-27 DSA.
14 Art. 17 DSA.
15 COM/2020/65 final, “White Paper on Artificial Intelligence – A European approach to excellence and trust”.
16 ibid., at 3.
      </p>
      <p>What truly changes with the AI Act is how the assessment of risk is carried out and by whom: in the
GDPR, such a task is in the hands of data controllers; in the DSA, the Union legislator sets a top-down
framework applicable to all providers of intermediary services, while still leaving space for a certain
margin of discretion as far as enforcement of the law is concerned (especially in the case of VLOPs).
Within the AI Act, conversely, it is the legislator (together with the Commission) that is vested with the
task of assessing risk: the leeway granted to providers and users is, in fact, minimal.</p>
      <p>First, the AI Act proposal prohibits certain practices involving systems whose risk is deemed
“unacceptable”, because considered a priori too dangerous17 (these include
applications that manipulate human behaviour to circumvent the free will of users; personal
credit-based rating systems managed by governments; real-time biometric recognition systems in publicly
accessible spaces for the purposes of law enforcement).</p>
      <p>Second, the Commission identifies a “high-risk” threshold for AI systems,18 most of which are
identified by the list which is contained within Annex III and can be amended by the Commission based
on a range of set criteria.19 High-risk AI systems shall have to comply with a long and extensive series
of requirements. Most interestingly, they seem to represent the only class where the legislator gives
some leeway to the targets of regulation. Indeed, providers and users of those systems will have to
establish, implement, document and maintain a risk management system, with a view to adopting
suitable measures to face any known or foreseeable hazard.20 Additionally, providers of high-risk AI
systems are required to put in place a quality management system to ensure compliance with the entire
Regulation.21 Nonetheless, it must be stressed that the actual margin of discretion left to providers and
users of high-risk systems remains very limited.</p>
      <p>Third, some AI applications are included in a category characterized by “limited risks” (systems
intended to interact with natural persons; emotion recognition or biometric categorization systems;
systems capable of generating “deep fake” content).22 Providers and users of such tools shall comply
with specific transparency requirements. Finally, a residual category of “minimal risk” is associated
with AI applications that do not have the same invasiveness as those described above: since it is
constructed as a residual category, it embraces an ample set of AI applications and systems. Minimal
risk AI applications are not subject to any specific duty or obligation, although the Commission and
Member States should encourage and facilitate the drawing up of codes of conduct intended to foster
on their part the voluntary application of the requirements set for high-risk systems.23</p>
      <p>In this case, the shift from a bottom-up to a top-down interpretation of risk-based regulation, already
partially emerging in the DSA, reaches its apex. The categories of risk are defined directly by the
EU Commission and set in stone within the law. The list of “unacceptable”, and therefore prohibited,
AI systems is directly set by the law and is independent of any a posteriori risk assessment by providers
or users of those systems. The set of high-risk technologies is likewise defined directly by the law:
in this case, the category is seemingly less rigid and more open to ex post change, since a procedure to
amend Annex III is possible. However, it is once again up to the EU Commission to make the
necessary adjustments. The AI Act sets a range of risk criteria: however, in this case, they are meant as
a guide for the Commission itself, and not for the targets of regulation. Moreover, although it is true
that a risk management system for high-risk AI systems is introduced, extensive top-down rules specify
how to implement it, thus leaving a relatively limited margin of discretion to providers and users.
Additionally, high-risk systems have to comply with a far-reaching set of duties and obligations which
follow a binary compliance/non-compliance logic.
17 Art. 5 AI Act.
18 ibid., Art. 6.
19 ibid., Art. 7.
20 ibid., Art. 9.
21 ibid., Art. 17.
22 ibid., Art. 52.
23 ibid., Art. 69.</p>
    </sec>
    <sec id="sec-5">
      <title>4. (Optimal) proportionality and due diligence in the AI Act</title>
      <p>Having outlined the peculiar perspective adopted by the AI Act with respect to the regulation of the risks posed by AI systems, it is important to focus on the role played by proportionality and due diligence within the system created by the proposed Regulation, so as to understand the link between the AI Act and the risk-based regulatory models previously devised by the Union for the digital field.</p>
      <p>The goal of (optimal) proportionality within the AI Act emerges explicitly from the Explanatory
Memorandum, where the European Commission stated that the proposal “puts in place a proportionate
regulatory system centred on a well-defined risk-based regulatory approach that does not create
unnecessary restrictions to trade”, also adding that “legal intervention is tailored to those concrete
situations where there is a justified cause for concern or where such concern can reasonably be
anticipated in the near future”.24</p>
      <p>These statements, focusing especially on the centrality of proportionality between regulation and risk, resonate with the GDPR and the DSA. It is true that the choice of a top-down structure makes the law much more rigid: compared with the GDPR, the AI Act leaves little space to tailor measures to the specific risks at hand. Nevertheless, the spirit of the law, as confirmed by the words of the Commission, is still that of implementing a legal framework in which proportionality is the ultimate goal to be attained. Although the system is more rigid, the envisioning of a differentiated regulatory regime based on risk remains the core expression of the principle of proportionality characterizing the digital policies of the European Union.</p>
      <p>
        Of course, the adoption of a more rigid scheme directly affects the principle of accountability, which, within the system developed by the GDPR, is directly related to the freedom given to data controllers and processors with respect to the measures to adopt to protect data subjects’ rights to privacy and data protection. Accountability is a direct corollary of a regulatory system which, to a certain extent, delegates to its targets the power to decide how to balance their own interests with the need to protect, guarantee and foster the rights and liberties of individuals [
        <xref ref-type="bibr" rid="ref10 ref19">10, 19</xref>
        ]. What changes in the AI Act, at a deeper level, is thus the relationship between regulator and regulatee: whereas in the GDPR the former delegated to the latter the duty of assessing risk, and the latter was thus responsible for that duty, such delegation is almost absent from the AI Act.
      </p>
      <p>As a result, the principle of due diligence is also much less present in the AI Act than in the GDPR and the DSA. Because regulatees are given less choice as to the means of complying with the law, due diligence mainly applies at the level of implementing the necessary measures, and not so much at the level of their selection. A few provisions, as mentioned above, leave leeway for minor customization in the choice of the mitigation system to adopt; such liberty, however, is quite limited.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusions</title>
      <p>Risk regulation has gathered increasing momentum across Western democracies and has become an increasingly popular regulatory tool for fostering Union policies in a range of operative fields, including, lately, the governance of the Digital Single Market in the context of the algorithmic society.</p>
      <p>Ultimately, the fil rouge connecting the AI Act with the GDPR and the DSA, and with the risk-based approach in general, is the goal of developing a legal framework for digital technologies that promotes an “optimal” balancing of the interests involved. If the European constitutional experience is characterized by the effort to strike an equal, and proportionate, balance between the various interests of social parties, the common feature at the heart of the GDPR, the DSA, and the AI Act is precisely their aspiration to create a digital environment which embraces European constitutional values and principles.</p>
      <p>
        Although due diligence still represents an important aspect of the AI Act, it appears that proportionality is, ultimately, the common and central element unifying the EU’s strategies in this field. To this extent, the risk-based approach ultimately represents an instrument for developing a constitutionally sound environment. It is one of the expressions of European digital constitutionalism [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ], where the interests of the market and societal, democratic, and fundamental rights interests must be equally protected.
      </p>
    </sec>
    <sec id="sec-7">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] <string-name><given-names>D.</given-names> <surname>Lupton</surname></string-name>, <article-title>Digital risk society</article-title>, in: <string-name><given-names>A.</given-names> <surname>Burgess</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Alemanno</surname></string-name>, <string-name><given-names>J.O.</given-names> <surname>Zinn</surname></string-name> (Eds.), <source>Routledge Handbook of Risk Studies</source>, Routledge, London, <year>2016</year>, pp. <fpage>301</fpage>-<lpage>309</lpage>.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] <string-name><given-names>J.M.</given-names> <surname>Balkin</surname></string-name>, <article-title>Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation</article-title>, <source>U.C.D. L. Rev.</source> <volume>51</volume> (<year>2018</year>) <fpage>1149</fpage>-<lpage>1210</lpage>.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] <collab>European Union Agency for Fundamental Rights (FRA)</collab>, <source>Getting the Future Right. Artificial Intelligence and Fundamental Rights</source>, Publications Office of the European Union, Luxembourg, <year>2020</year>.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] <string-name><given-names>J.</given-names> <surname>Burrell</surname></string-name>, <article-title>How the machine 'thinks': Understanding opacity in machine learning algorithms</article-title>, <source>Big Data &amp; Society</source> <volume>3</volume> (<year>2016</year>). doi:10.1177/2053951715622512.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] <string-name><given-names>S.U.</given-names> <surname>Noble</surname></string-name>, <source>Algorithms of oppression: how search engines reinforce racism</source>, New York University Press, New York, NY, <year>2018</year>.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] <string-name><given-names>F.</given-names> <surname>Pasquale</surname></string-name>, <source>New laws of robotics: defending human expertise in the age of AI</source>, The Belknap Press of Harvard University Press, Cambridge, MA, <year>2020</year>.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] <collab>European Commission</collab>, <source>Algorithmic discrimination in Europe: Challenges and opportunities for gender equality and non-discrimination law</source>, Publications Office of the European Union, Luxembourg, <year>2021</year>.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] <string-name><given-names>S.</given-names> <surname>Wachter</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Mittelstadt</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Russell</surname></string-name>, <article-title>Bias Preservation in Machine Learning: The Legality of Fairness Metrics under EU Non-Discrimination Law</article-title>, <source>W. Va. L. Rev.</source> <volume>123</volume> (<year>2020</year>) <fpage>735</fpage>-<lpage>790</lpage>.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] <string-name><given-names>J.</given-names> <surname>van der Heijden</surname></string-name>, <article-title>Risk as an Approach to Regulatory Governance: An Evidence Synthesis and Research Agenda</article-title>, <source>SAGE Open</source> <volume>11</volume> (<year>2021</year>). doi:10.1177/21582440211032202.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] <string-name><given-names>R.</given-names> <surname>Gellert</surname></string-name>, <source>The Risk-Based Approach to Data Protection</source>, Oxford University Press, Oxford, <year>2020</year>.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] <string-name><given-names>B.M.</given-names> <surname>Hutter</surname></string-name>, <article-title>Risk, Regulation, and Management</article-title>, in: <string-name><given-names>P.</given-names> <surname>Taylor-Gooby</surname></string-name>, <string-name><given-names>J.O.</given-names> <surname>Zinn</surname></string-name> (Eds.), <source>Risk in Social Science</source>, Oxford University Press, Oxford, <year>2006</year>, pp. <fpage>202</fpage>-<lpage>227</lpage>.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] <string-name><given-names>A.</given-names> <surname>Alemanno</surname></string-name>, <article-title>Regulating the European Risk Society</article-title>, in: <string-name><given-names>A.</given-names> <surname>Alemanno</surname></string-name>, <string-name><given-names>F.</given-names> <surname>den Butter</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Nijsen</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Torriti</surname></string-name> (Eds.), <source>Better Business Regulation in a Risk Society</source>, Springer, New York, NY, <year>2013</year>, pp. <fpage>37</fpage>-<lpage>56</lpage>. doi:10.1007/978-1-4614-4406-0_3.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] <string-name><given-names>M.</given-names> <surname>Macenaite</surname></string-name>, <article-title>The “Riskification” of European Data Protection Law through a two-fold Shift</article-title>, <source>European Journal of Risk Regulation</source> <volume>8</volume> (<year>2017</year>) <fpage>506</fpage>-<lpage>540</lpage>. doi:10.1017/err.2017.40.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] <string-name><given-names>C.</given-names> <surname>Quelle</surname></string-name>, <article-title>Enhancing Compliance under the General Data Protection Regulation: The Risky Upshot of the Accountability- and Risk-based Approach</article-title>, <source>European Journal of Risk Regulation</source> <volume>9</volume> (<year>2018</year>) <fpage>502</fpage>-<lpage>526</lpage>. doi:10.1017/err.2018.47.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] <string-name><given-names>L.</given-names> <surname>Edwards</surname></string-name>, <article-title>Regulating AI in Europe: four problems and four solutions</article-title>, <source>Ada Lovelace Institute</source>, <year>2022</year>. URL: https://www.adalovelaceinstitute.org/wp-content/uploads/2022/03/Expert-opinionLilian-Edwards-Regulating-AI-in-Europe.pdf.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] <string-name><given-names>G.</given-names> <surname>De Gregorio</surname></string-name>, <source>Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society</source>, Cambridge University Press, Cambridge, <year>2022</year>.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vermeule</surname>
          </string-name>
          ,
          <source>The Constitution of Risk</source>
          , Cambridge University Press, Cambridge,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] <string-name><given-names>A.</given-names> <surname>Peters</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Krieger</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Kreuzer</surname></string-name>, <article-title>Due Diligence in the International Legal Order: Dissecting the Leitmotif of Current Accountability Debates</article-title>, in: <string-name><given-names>H.</given-names> <surname>Krieger</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Peters</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Kreuzer</surname></string-name> (Eds.), <source>Due Diligence in the International Legal Order</source>, Oxford University Press, Oxford, <year>2020</year>, pp. <fpage>1</fpage>-<lpage>19</lpage>. doi:10.1093/oso/9780198869900.003.0001.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] <string-name><given-names>G.</given-names> <surname>De Gregorio</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Dunn</surname></string-name>, <article-title>The European risk-based approaches: Connecting constitutional dots in the digital age</article-title>, <source>CMLR</source> <volume>59</volume> (<year>2022</year>) <fpage>473</fpage>-<lpage>500</lpage>.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] <string-name><given-names>C.</given-names> <surname>Castets-Renard</surname></string-name>, <article-title>Accountability of Algorithms in the GDPR and Beyond: A European Legal Framework on Automated Decision-Making</article-title>, <source>Fordham Intellectual Property, Media and Entertainment Law Journal</source> <volume>30</volume> (<year>2019</year>) <fpage>91</fpage>-<lpage>137</lpage>.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>O.</given-names>
            <surname>Lynskey</surname>
          </string-name>
          ,
          <source>The Foundations of EU Data Protection Law</source>
          , Oxford University Press, Oxford,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] <string-name><given-names>L.</given-names> <surname>Edwards</surname></string-name>, <article-title>Articles 12-15 ECD: ISP liability. The problem of intermediary service provider liability</article-title>, in: <string-name><given-names>L.</given-names> <surname>Edwards</surname></string-name> (Ed.), <source>The new legal framework for e-commerce in Europe</source>, Hart, Oxford, <year>2005</year>, pp. <fpage>93</fpage>-<lpage>136</lpage>.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] <string-name><given-names>G.N.</given-names> <surname>Yannopoulos</surname></string-name>, <article-title>The Immunity of Internet Intermediaries Reconsidered?</article-title>, in: <string-name><given-names>M.</given-names> <surname>Taddeo</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Floridi</surname></string-name> (Eds.), <source>The Responsibilities of Online Service Providers</source>, Springer, Cham, <year>2017</year>, pp. <fpage>43</fpage>-<lpage>59</lpage>. doi:10.1007/978-3-319-47852-4_3.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] <string-name><given-names>D.</given-names> <surname>Citron</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Wittes</surname></string-name>, <article-title>The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity</article-title>, <source>Fordham Law Review</source> <volume>86</volume> (<year>2017</year>) <fpage>401</fpage>-<lpage>424</lpage>.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] <string-name><given-names>C.</given-names> <surname>Cauffman</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Goanta</surname></string-name>, <article-title>A New Order: The Digital Services Act and Consumer Protection</article-title>, <source>European Journal of Risk Regulation</source> <volume>12</volume> (<year>2021</year>) <fpage>758</fpage>-<lpage>774</lpage>. doi:10.1017/err.2021.8.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] <string-name><given-names>Z.</given-names> <surname>Efroni</surname></string-name>, <article-title>The Digital Services Act: risk-based regulation of online platforms</article-title>, <source>Internet Policy Review</source>, <year>2021</year>. URL: https://policyreview.info/articles/news/digital-services-act-risk-basedregulation-online-platforms/1606.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] <string-name><given-names>L.</given-names> <surname>Floridi</surname></string-name>, <article-title>Establishing the rules for building trustworthy AI</article-title>, <source>Nat Mach Intell</source>. <volume>1</volume> (<year>2019</year>) <fpage>261</fpage>-<lpage>262</lpage>. doi:10.1038/s42256-019-0055-y.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] <string-name><given-names>M.</given-names> <surname>Ebers</surname></string-name>, <article-title>Standardizing AI - The Case of the European Commission's Proposal for an Artificial Intelligence Act</article-title>, in: <string-name><given-names>L.</given-names> <surname>Di Matteo</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Poncibò</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Cannarsa</surname></string-name> (Eds.), <source>The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics</source>, Cambridge University Press, Cambridge (<year>2022</year>, forthcoming).</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[29] <string-name><given-names>G.</given-names> <surname>De Gregorio</surname></string-name>, <article-title>The rise of digital constitutionalism in the European Union</article-title>, <source>International Journal of Constitutional Law</source> <volume>19</volume> (<year>2021</year>) <fpage>41</fpage>-<lpage>70</lpage>. doi:10.1093/icon/moab001.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>