Regulating Generative AI towards the future¹
                                Giovanna De Minico* - Michela Tuozzo*

                                University of Naples Federico II, via Marina 33, Naples, 80125, Italy


                                                    Abstract
This intervention focuses on two issues. The first considers the current regulatory framework for Generative Artificial Intelligence systems, with specific attention to the obligations of providers and deployers and to system governance as dictated by the AI Act.
The second explores points of intersection with other regulations applicable to AI systems within the European digital ecosystem.

                                                    Keywords
                                                    Generative AI systems, AI Act, Governance



1. Introduction

Generative intelligence, a cutting-edge development in the realm of artificial intelligence, has significantly influenced the legislative trajectory of the European Regulation on Artificial Intelligence - 2021/0106(COD).

AI systems utilizing Large Language Models have the capacity to generate a wide range of outputs, including texts, translations, images, sounds, videos, and more. The prospect of these systems harmoniously integrating with other AI systems amplifies their usefulness for users, both professionals and non-professionals, as well as for public and/or judicial authorities. The latter can leverage them for forecasting, adopting recommendations, or making informed decisions.

The unique characteristics of generative AI have raised questions about the applicability of the traditional risk management approach, which forms the foundation of European technological regulation, and about how to effectively categorize this new form of AI.

This intervention aims to highlight the status of generative AI under the AI Act and its governance. Critical reflections will then be elaborated regarding these elements and their potential implications for a crucial issue: hate speech and online misinformation.

2. GPAIs' classification

The critical characteristics of general-purpose AI (GPAI) models include their large size, opacity, and potential to develop unexpected capabilities beyond those intended by their creators. According to Article 3(63), a general-purpose AI model is an AI model trained with a large amount of data using self-supervision at scale, which displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and which can be integrated into a variety of downstream systems or applications.

On December 6, 2022, the General Secretariat of the Council classified GPAIs as high-risk systems. This classification enforces specific compliance obligations (Articles 10-15), requires a risk impact assessment on fundamental rights (Article 27), mandates auditing prior to market entry (Article 43), and necessitates registration in the EU database, along with post-market surveillance obligations.



1 The article reflects collective thoughts; however, the paragraphs can be attributed as follows: paragraphs 1, 2, 2.1, and 3 to Dr. Michela Tuozzo; paragraphs 2.2 and 4 to Prof. Giovanna De Minico.

Ital-IA 2024: 4th National Conference on Artificial Intelligence, organized by CINI, May 29-30, 2024, Naples, Italy
∗ Corresponding author.
giovanna.deminico@unina.it (G. De Minico); michela.tuozzo@unina.it (M. Tuozzo)

© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).




The high-risk classification of GPAIs has sparked two types of criticism: the failure to adhere to a precautionary approach and the imposition of obligations that are perceived as difficult to achieve.

Through its amendments of June 14, 2023, these critical aspects led the European Parliament to develop an autonomous classification of GPAIs as foundation models. Article 28 ter outlined three categories of obligations for the provider: risk identification and mitigation, testing and evaluation, and documentation.

However, the regulatory model was significantly altered in the final version of the adopted AI Act, a result of the compromise reached among the European institutions during the trilogue phase. This change was influenced by lobbying efforts (Bareis), underscoring the political dynamics at play in the regulatory process.

2.1. Tiered approach

The regulation of generative intelligence became its own category in the final version of the AI Act approved on March 13, 2024. Within this category, generative AI is classified into three types: general-purpose AI model, general-purpose AI model with systemic risk, and open-source general-purpose AI model (Articles 51-55). As a result, we have different rules for different GPAIs.

In essence, the Spanish Presidency of the EU Council aimed to strike a balance between the Council's hands-off approach and the Parliament's earlier stance of establishing uniform rules for all generative AI systems.

Specifically, providers of "standard" GPAIs, when generating synthetic audio, image, video, or text content, must ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (Article 50, paragraph 2). In addition, providers shall (Article 53): draw up and keep up-to-date the technical documentation; comply with Union copyright law; and make a sufficiently detailed summary of the content used for training publicly available.

Conversely, providers of open-source GPAIs must only comply with the copyright rules and those regarding the summary of the content used for training.

The obligations for providers of GPAIs with systemic risk are more extensive than those for "standard" GPAIs. In addition to those already stipulated for "standard" GPAIs, providers shall: perform model evaluation to identify and mitigate systemic risk; assess and mitigate possible systemic risks at the Union level; report without undue delay to the AI Office and, as appropriate, to national competent authorities, relevant information about severe incidents and possible corrective measures to address them; and ensure an adequate level of cybersecurity protection.
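Before turning to governance, the transparency duty of Article 50(2) can be made concrete with a minimal sketch. The envelope format, field names, and model name below are our own illustrative assumptions: the Act requires machine-readable marking and detectability but prescribes no particular technique, and real deployments rely on provenance standards or watermarking rather than a plain JSON wrapper.

```python
import json
from datetime import datetime, timezone

def mark_as_ai_generated(text: str, model_name: str) -> str:
    # Wrap generated text in a machine-readable envelope declaring its
    # artificial origin. The field names are illustrative assumptions;
    # Article 50(2) AI Act fixes the goal, not the format.
    envelope = {
        "content": text,
        "ai_generated": True,
        "generator": model_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope)

# Hypothetical usage by a provider shipping output downstream.
print(mark_as_ai_generated("Synthetic paragraph ...", "example-gpai"))
```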
2.2. Governance²

The governance of the artificial intelligence market is complex due to AI's diverse applications in both public and private sectors, across national and EU levels. It is important to note that responsibility for actions such as the fundamental rights impact assessment and the conformity evaluation rests on the entrepreneur's initiative, reflecting a confusing blend, on the Commission's part, of individual centrality and a trend toward privatization of the system.

The governance system could have taken three forms: complete decentralisation, relying on national oversight systems as in telecommunications; completely centralised supervision, disregarding national variations; or a mixed system, with tasks entrusted to the Commission in a designated Directorate-General and to European agencies. The governance of the AI Act takes different forms depending on the level considered and even on the type of intelligence, as shown by the case of GPAIs. Examples of entirely European Independent Authorities, independent from national governments and the Commission, were established in 2010 with the European Banking Authority, the European Insurance and Occupational Pensions Authority, and the European Securities and Markets Authority.

While traditional AI systems have an intermediate governance form balancing decentralisation and centralisation, generative AI systems follow a fully centralised approach. This led to a complex governance structure: the original proposal involved three authorities (Commission, National Authorities, and AI Board), but the final act expanded to at least five (including the Advisory Forum and the Scientific Panel).

The entire AI system obediently follows a government-centric approach, justified solely by the fact that AI will become the engine of public policies, which must remain firmly in the hands of the current political majorities.



2 This part is due to Professor Giovanna De Minico.
As for the Commission, it has the power of the final and definitive say on corrective measures proposed by national supervisory authorities, confirming an approach centred on the Community executive.

As for the AI Board, only its functional independence is ensured; genetic independence from the Commission and the Member States is not required. Evidence of this is the freedom of each State to send whomever it wants and the presence of the Commission on the Board. Consequently, even functional independence is at risk, as already evidenced by the comparison between the European Parliament's amendments (Article 56) and the final text of Article 65. The latter states: "The Board shall be organised and operated to safeguard the objectivity and impartiality of its activities". The original formulation of Article 56 stated that "The 'European Artificial Intelligence Office' (…) shall be an independent body of the Union. It shall have legal personality" and that the Office shall "act independently when carrying out its tasks or exercising its powers" (Article 56 quater). Currently, in terms of its structure, the Office is integrated within the administrative framework of the Directorate-General for Communications Networks, Content and Technology (DG-CNECT) of the Commission, and it does not have operational autonomy from DG-CNECT. Furthermore, unlike national authorities, it is provided with no dedicated infrastructure or technical, financial, or human resources.

Regarding National Supervisory Authorities, it is permissible for the State to designate them within an entity affiliated with the Government (as emphasised in the Privacy Commissioner's letter dated March 25, 2024).

In Italy, the agencification approach is confirmed by the legislative initiative on a delegated law regarding artificial intelligence approved by the Council of Ministers on April 23, 2024. In the draft, Article 18 designates the two Authorities: the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN). In both cases, these government agencies achieve functional independence only with respect to the regulated entities, not to the representative political body. The Government's choice seems clear: to maintain control over "intelligent policies" in its own hands, rejecting the model of independent authorities.

In addition to functional independence, organisational independence was expressly requested in the Parliament's amendments (Article 59, par. 1, EP). Article 70 of the adopted AI Act recognises only functional independence, which seems a superficial rule, because it is unclear how to ensure functional independence without first guaranteeing organisational independence, known as genetic independence. Additionally, Article 70 does not specify which entities should respect this independence. Previously, Article 59, par. 4, stated that "members of each national supervisory authority, (…), shall neither seek nor take instructions from anybody and shall refrain from any action incompatible with their duties". Removing this part of the rule suggests that independence is aimed only at the regulated, not at political representatives, and that the AI Act has accepted a partial risk of capture, because it guards only against solid regulated entities.

The National Authority possesses regulatory powers, allowing it to mandate actions such as suspensions, corrective measures, or the removal of AI systems from the market. Could this ablative measure be construed as a complex administrative action with unequal powers? We answer affirmatively, because the National Authority has the power to propose the action, with the Commission having the final say. Therefore, two authorities, the national and the Community one, intervene in the same decision but at different times and with different contributions: one proposes, and the other finalises the procedure. This procedural collaboration achieves coordination between authorities operating simultaneously within the national-Union network.

In a summary overview, the governance of the AI Act reserves central political authority for the Commission, whose verbum is communicated downstream to the National Authorities and then returns to the Commission itself, with the AI Board intervening occasionally to address gaps in the discourse.

In contrast to the system described above, the governance of generative systems is centralised between the AI Office and the Commission. The Commission has exclusive powers to supervise general-purpose AI models and to request measures, and it entrusts the implementation of these tasks to the AI Office, a European agency of the Commission. The AI Office plays a central role in developing a Code of Practice and monitoring its application, while the Scientific Panel serves as a qualified advisory body to the AI Office.

Unlike for traditional AI models, governance for generative models is fully Eurocentric, to the extent that the Commission fulfils its administrative role and possesses law-enforcement powers. The centralisation of governance is here at its maximum.
While centralisation under the Commission is justified by the sector's sensitivity and the need for a unified implementation approach, it raises concerns about the deviation from the model of independent European authorities established in 2010.

3. Boundaries

3.1. Disadvantages of the AI Act

The qualification of so-called 'systemic risk' will initially depend on capability, based either on a quantitative threshold of the cumulative amount of compute used for training, measured in floating point operations (FLOPs, set at 10^25), or on a decision of the Commission, ex officio or following a qualified alert from the Scientific Panel. It is presumed that a model trained with large amounts of data and of advanced complexity has foreseeable adverse effects on public health, safety, public security, fundamental rights, or society as a whole, effects that can be propagated at scale across the value chain.

Upon closer examination, this definition of systemic risk considers the combination of the probability of a harmful event occurring and the severity of that harm, as well as the values and assets of constitutional relevance. This appears consistent with the framework of the AI Act, which prohibits all AI systems whose use is deemed unacceptable because it contradicts Union values. However, there needs to be coherence between the means, represented by numerical indicators such as FLOPs, and the end, which is the protection of common constitutional values. These parameters «describe the foundational model but not its impact on society, safety or fundamental rights» (Helberger et al.).
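The compute threshold can be made tangible with a back-of-the-envelope calculation, sketched below. The "6 × parameters × tokens" estimate of training FLOPs is a common heuristic from the scaling-law literature, not part of the AI Act, and the example figures are invented for illustration; the Act fixes only the 10^25 threshold in Article 51.

```python
# A minimal sketch of the Article 51 presumption, under the assumption
# that training compute is roughly 6 FLOPs per parameter per token.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 AI Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    # Common heuristic, not prescribed by the Act.
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical figures: a 100-billion-parameter model, 20 trillion tokens.
flops = estimated_training_flops(100e9, 20e12)
print(f"{flops:.2e} FLOPs -> presumed systemic risk: {presumed_systemic_risk(100e9, 20e12)}")
# 1.20e+25 FLOPs -> presumed systemic risk: True
```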
Another criticism concerns the quantitative and qualitative reduction of provider obligations, even for GPAIs with systemic risk. It is surprising that, for such models, the activity of demonstrating compliance before placing the AI system on the market or into service is accompanied neither by the guarantee of a prior conformity assessment for the provider nor by a fundamental rights impact assessment for the deployer. Compliance can be demonstrated by relying on codes of practice within the meaning of Article 56 until a harmonised standard is published.

The AI Office, in collaboration with the Board, encourages the drafting of the Code, aiming to ensure that the Codes of Practice comprehensively address the obligations in Articles 53 and 55. However, all providers of general-purpose AI models and the relevant national competent authorities will intervene; civil society organisations, industry, academia, and other relevant stakeholders, such as downstream providers and independent experts, may also support the process (Article 56). Co-regulation in the technology sector has shown various reasons for fallibility, the primary concern being the risk of capture by solid regulated entities.

Lastly, there is an emphasis on the responsible behaviour of providers but a lack of a proper division of responsibility with 'downstream' users. It should be considered that the distribution of responsibilities along the value chain should involve multiple parties, each with different responsibilities, particularly considering the user's role, which depends on whether the output is used for professional purposes (Hacker, Engel, Mauer).

3.2. Addressing Challenges Within and Beyond the AI Act

Beyond the regulatory aspects addressed in the AI Act, the proliferation of this new type of AI also presents interpreters with the issue of the rapid pace of technological transformation. From a constitutional law perspective, it is essential to clarify the categories involved in order to assess whether the discipline outlined above respects fundamental liberties.

Consider that the prerogative of the most widespread GPAIs, such as ChatGPT, is communication. We therefore need to raise the following questions: Does generative AI produce ideas? Is it a new form of media? Or a digital private communication?

Article 15 of the Italian Constitution protects freedom of communication from limitation through the guarantees of legal reservation and jurisdiction, but with a significant difference compared to Article 21. The express limits (good conduct) and the unexpressed ones (the protection of personality rights such as reputation and privacy) would not apply to communications as they do under Article 21. Furthermore, the protection of the freedom and secrecy of correspondence from undue interference would extend to both the recipient and the sender (in our case, OpenAI, Google, and others).

The constitutional coverage of Article 21 would instead imply extending the guarantees of the press medium to ChatGPT as well: the prohibition of censorship, the possibility of adopting inhibitory acts only with the guarantees of legal reservation and jurisdiction, and, finally, the possibility of limiting its contents.
Finally, similarly to the issue of advertising information, it could be argued that informational content has an economic purpose. It would therefore be more appropriate to adopt the limits of Article 41: the activity cannot be carried out in contrast with social utility or in a way that harms health, the environment, security, freedom, or human dignity. Upon closer examination, the protection here is also twofold: towards the end-user citizen and towards other commercial operators.

Providing a definitive answer to the question of constitutional coverage requires further examination. It is essential to consider the communicative context and the communicating subject: we are within the protective sphere of Article 15 when the intention is to maintain the secrecy of the content of virtual correspondence, the recipients are specific and immutable, and the means are suitable for achieving secrecy. It is precisely this last requirement that leans towards the category of Article 21.

When considering the fine line with Article 41, we must examine the purpose of the expressive freedom. If it serves an economic aim, such as profiling, then the broader protection of Article 21 may not apply (Ruffolo).

4. Next steps: Implementing the Regulatory Framework in the Digital Ecosystem³

Upon closer examination, we found that the adoption of the AI Act addresses only some of the issues raised by generative AI systems. This could be a reason for supplementing the general discipline of the AI Act with some particular disciplines, such as the Digital Services Act (Regulation 2022/2065) and the Digital Markets Act (Regulation 2022/1925), which pursue a common goal: to create a single digital ecosystem. This cumulation cannot happen automatically; it should be guided by some questions showing us how to integrate the disciplines.

1) Is there a difference in the passive subjects of the two disciplines? By passive subjects, we mean the recipients of the rules of the Digital Services Act and of the AI Act. The former are the platforms, which are identified in that discipline by a quantity: "a number of average monthly active recipients of the service in the Union equal to or higher than 45 million" (Article 33, DSA). The criterion for identifying GPAIs at systemic risk relates instead to computational capacity: "when the cumulative amount of computation used for its training measured in floating point operations is greater than 10^25" (Article 51 AI Act). Unlike under the DSA, in this case the number of clients is disregarded, because the taxable entity is identified by its capacity to input data and to occupy the data world.
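Restated as two independent predicates, the contrast becomes plain, as in the minimal sketch below. The thresholds come from Article 33 DSA and Article 51 AI Act; the function names and sample figures are our own illustrative assumptions.

```python
# The two designation tests side by side (sketch, not a legal test).

VLOP_USER_THRESHOLD = 45_000_000   # average monthly active recipients in the Union
GPAI_FLOP_THRESHOLD = 1e25         # cumulative training compute in FLOPs

def is_very_large_online_platform(monthly_active_recipients_eu: int) -> bool:
    # DSA criterion: audience size, i.e. the reach of hosted content.
    return monthly_active_recipients_eu >= VLOP_USER_THRESHOLD

def is_systemic_risk_gpai(training_flops: float) -> bool:
    # AI Act criterion: computational capacity, regardless of audience.
    return training_flops > GPAI_FLOP_THRESHOLD

# The tests are independent: a service may cross one threshold and not the other.
print(is_very_large_online_platform(50_000_000))  # True
print(is_systemic_risk_gpai(2.0e24))              # False
```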
The second difference is an objective one concerning the service. Platforms render an intermediary service: they connect those who generate information with those who receive it, and this encounter between supply and demand happens on the platform. The platform does not put its hand on the information; it hosts it, rationalizes it, organizes it, and categorizes it, but someone else generates the idea. Generative AI, on the other hand, does precisely what the platform does not do: it occupies the space vacated by the platform, because the AI is not a host; it is the author of an idea of its own. One can argue about whether ChatGPT creates the idea out of thin air or whether the idea is generated by fishing around the network, about how it articulates it, and so on, but it remains something that involves creative energy. Even though ChatGPT, unlike the human mind, does not create from nothing but from a background, it still makes a vital, active, innovative contribution that is absent in the platform.

There is thus a subjective and an objective difference, and because of this difference, strictly speaking, the discipline of the DSA cannot be applied.

2) If we combined all the regulations, they would still fall under the guidelines of the AI Act, which has deliberately exempted GPAI. GPAI was only included in the final negotiations as a last resort, with less strict rules compared to other forms of intelligence. Essentially, GPAI has been shielded from excessive regulation, and those advocating additional rules go against this original purpose. Since EU laws are meant to be interpreted beyond their mere words, this combination of regulations would contradict the intentio legis of the AI Act, burdening GPAI with regulations it was meant to avoid. While we might regret that more regulation was not imposed, it is worth acknowledging that GPAI has been protected as intended.

Finally, let us ask what the purpose of the DSA is. The DSA is meant to keep the net clean of blatant malfeasance: misleading content, hate speech, fake news, and so on.



3 This part is due to Professor Giovanna De Minico.
Provided that the DSA does not intend to derogate from the general principle that the platform is not the editor-in-chief of a newspaper, the platform is exempt from a prior and general control obligation, but the DSA does put in place a punctual control obligation: not generalized, and not ex ante but ex post. This obligation signifies that the ultimate goal is to reconcile a control that does not impose generalized vigilance, to which no platform would ever want to submit, with the fundamental freedoms at stake, namely the right to speak, and to have their say, of those who put news on the platforms.

Suppose we give a positive answer on the DSA's purpose. In that case, could it be extended to ChatGPT as well? And why not, if this policy of cleaning up the network is so positive, even if some (De Minico d) see it as a form of censorship entrusted to private entities? When the Commission designates platforms as providers of "very large online platforms" (Articles 15 and 33 DSA), they become subject to a timely and subsequent obligation to control the information stored and transmitted on their platform, so as to ensure a transparent and secure digital environment.

The fulfilment of this selective cleanup duty, which reconfirms the absence of a generalized duty of control, by object and over time, in line with the philosophy of the e-Commerce Directive now re-proposed in Art. 8 DSA, should equalize the asymmetrical relationship between the platform owner and the author of the hosted content, just as it should put the author of the content and its recipient, the end user of the information flow, on the same level.

Without prejudice to my doubts about the suitability of this asymmetrical measure to place misaligned social partners on an equal footing, I would call attention to a possible effect on platforms. In order not to incur a liability judgment for keeping illicit content online, they will be inclined to delete rather than preserve the ideas of others, based, moreover, on summary evaluations pronounced inaudita altera parte. A further consideration: the lack of an abstract and general definition of the concept of "false" leaves platforms free to confuse false news with politically inappropriate news, or with news that does not conform to dominant thinking. No more complete is the prohibition of hate speech, which lacks a prior typification of hateful speech, even as regards the necessary causal link between the saying and the activating effect it seeks to solicit in the recipients of the prohibited conduct. This normative gap in the DSA points back to the tautological reasoning that hateful conduct is prohibited because it is prohibited. Worse, tautology can conceal dangerous liberticidal theses and easy slides toward the only permissible speech: state speech. Thus, the control of permissibility conceals insidious forms of merit-based scrutiny of the manifestation of thought, opening the door to digital censorship on the Web.

The Web thereby undergoes a radical change: from a place unscathed by heteronomous interventions and free from information intermediaries to a space supervised by private individuals who hold the keys to open and close the information agora. This risk cannot be avoided, owing to the absence of a normative definition of falsehood, with the paradoxical consequence of a caesura between the offline environment, where the dissemination of false ideas does not constitute a crime unless it attacks goods and interests other than the truth, and the virtual one, where the false idea ceases to be the exercise of a right and becomes an illicit fact.

Therefore, I believe that the DSA has aggravated the limit of lawfulness, making certain conduct that is lawful offline unlawful once the playing field changes.

The most severe aspect of this blank endorsement to platforms of the power to control the merit of others' ideas is the emergence of an unprecedented function assigned to private individuals, one that in the real world does not even exist as a power conferred on a public authority. I prefer to trust the beneficent virtues of the marketplace of ideas which, in allowing the coexistence of the false with the true, lets citizens distinguish between the two, because it believes in their ability to mature a responsible idea without being pre-directed by those who claim to know for them what the objective truth is.

This failure of the DSA to predetermine the concept of falsehood degrades abstractness into concreteness and generality into particularity. The law is resolved into the ordinal provision of the private strongman, while the equality of citizens before the law is attenuated into a concrete provision ad certam personam.
While maintaining a negative judgment on the content of the DSA, and adopting instead the perspective of those who consider it positive and desirable to extend its contents to the AI Act, such an operation cannot be carried out by automatically adding disciplines (Hacker, Engel, Mauer; Botero Arcila); it can only be applied interpretively.

3) Who does the interpretation? The Commission, which has governance over GPAIs and could therefore impose on them obligations arising from the DSA, not through automatic acquisition but through analogical legal reasoning (analogia legis), admitting that between the two cases, the regulated digital platform and the unregulated GPAI, there is a factual identity. At the end of this discussion, we would have partially applied the rules of the DSA to ChatGPT, but only interpretatively and provided that careful reasoning is conducted.

Similarly, one should reason about the extension of the contents of the DMA.

In conclusion, the Regulations in the digital environment provide solid foundations for addressing the risks posed by the rapid evolution of generative AI. Additionally, it is crucial for the European Commission to fully utilise its implementing powers during the execution phase (Articles 290 and 291 TFEU), adapting the pace of technology to existing rules in order to protect fundamental rights.

Acknowledgements

The initiative falls within the scope of the FAIR Project (Future Artificial Intelligence Research), WP 3.8 - Resilient AI, Ethical-Legal and Societal Issues in Resilient AI Systems, coordinated by Professor Giovanna De Minico.

References

[1] J. Bareis. "BigTech's Efforts to Derail the AI Act". VerfBlog, 2023/12/05. DOI: 10.59704/265f1afff8b3d2df
[2] B. Botero Arcila. "Is it a Platform? Is it a Search Engine? It's Chat GPT! The European Liability Regime for Large Language Models". Journal of Free Speech Law 3 (2023), pp. 455-488. Available at SSRN: https://ssrn.com/abstract=4539452
[3] A. Cefaliello, M. Kullmann. "Offering False Security. How the Draft Artificial Intelligence Act Undermines Fundamental Workers Rights". European Labour Law Journal 4 (2022), pp. 542-562. https://doi.org/10.1177/20319525221114474
[4] G. De Minico a. "Too many rules or zero rules for the ChatGPT?". BioLaw Journal 2 (2023), pp. 491-501. https://doi.org/10.15168/2284-4503-2723
[5] G. De Minico b. "Una norma unica che tenga insieme ChatGpt e privacy". Il Sole 24 Ore, June 14, 2023.
[6] G. De Minico c. "The Challenge of the Virtual World for Independent Authorities". European Public Law 29:1 (2023), pp. 1-26.
[7] G. De Minico d. "Internet e le sue regole". In: Atti del Convegno Processi politici e nuove tecnologie, Università degli Studi di Bari - Aldo Moro, June 22, 2023, edited by M. Calamo Specchia, Giappichelli, Torino, forthcoming.
[8] G. De Minico e. "Nuova tecnica per nuove diseguaglianze. Case law: Disciplina Telecomunicazioni, Digital Services Act e Neurodiritti". Federalismi.it 6 (2024).
[9] L. Floridi, M. Chiriatti. "GPT-3: Its Nature, Scope, Limits, and Consequences". Minds & Machines 3 (2020), pp. 681-694. https://doi.org/10.1007/s11023-020-09548-1
[10] P. Hacker. "AI Regulation in Europe: From the AI Act to Future Regulatory Challenges". In: Oxford Handbook of Algorithmic Governance and the Law, edited by I. Ajunwa, J. Adams-Prassl, Oxford University Press, 2024.
[11] P. Hacker, A. Engel, M. Mauer. "Regulating ChatGPT and other Large Generative AI Models". FAccT (2023), pp. 1112-1123. https://doi.org/10.48550/arXiv.2302.02337
[12] N. Helberger, N. Diakopoulos. "ChatGPT and the AI Act". Internet Policy Review 12:1 (2023). https://doi.org/10.14763/2023.1.1682
[13] C. Novelli, P. Hacker, J. Morley, J. Trondal, L. Floridi. "A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities". May 5, 2024. Available at SSRN: https://ssrn.com/abstract=4817755 or http://dx.doi.org/10.2139/ssrn.4817755
[14] U. Ruffolo. "Piattaforme, A.I. generativa e libertà di (formazione e) manifestazione del pensiero. Il caso ChatGPT". Giurisprudenza italiana (2024), pp. 472-480.