<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>LLMs and the Public Arena: a Threat to Democracy? Insights from Italian Journalism</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Riccardo Corsi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Gran Sasso Science Institute</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Pisa</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>This paper investigates the adoption and governance of Generative AI (GenAI) in the Italian journalistic field, examining its implications for democracy and the public arena. Drawing on 13 in-depth interviews and one collective interview with journalists, union representatives, and ethics board members, the study explores how GenAI is perceived, used, and contested in Italian newsrooms. Findings show a widespread yet unstructured use of GenAI, characterized by both instrumental and critical attitudes. While GenAI is generally framed as a supporting tool for editorial work, concerns emerge around job displacement, editorial independence, opacity, and the commodification of journalistic content. By applying the framework of sociotechnical imaginaries, the study identifies four dominant risk narratives (substitution, mistrusted information, machine-driven editorial logic, and surveillance), reflecting deeper anxieties about power asymmetries in the information ecosystem. The Italian case highlights the need for journalist-led governance strategies and contextualized AI adoption models, as shown by Il Manifesto. Overall, the paper argues that integrating LLMs in journalism is not only a technical matter but a profoundly political issue, demanding participatory and ethically grounded regulation to protect democratic values.</p>
      </abstract>
      <kwd-group>
<kwd>AI and democracy</kwd>
        <kwd>journalism</kwd>
        <kwd>responsible generative AI</kwd>
        <kwd>public arena</kwd>
        <kwd>sociotechnical imaginaries</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Journalism plays a key role in Western democracies and is undergoing rapid changes due to technological
advancements, profoundly transforming both journalistic processes and products, thus necessitating
innovation in traditional business models[
        <xref ref-type="bibr" rid="ref1">1</xref>
]. The rise of Generative AI (GenAI), a subset of AI capable of
generating original content such as articles, images, and videos, has further transformed the journalistic
landscape and is sparking concern and public debate across countries [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2, 3, 4, 5</xref>
        ].
      </p>
      <p>
        The potential consequences, particularly for journalism, are considered both positive and negative.
The promise of artificial intelligence (AI), and especially Generative AI, lies in its ability to rationalize
work processes, aligning with ideals of efficiency and speed, but its adoption is constrained by factors
such as professional norms, regulatory frameworks, audience preferences, and existing technological
infrastructures [
        <xref ref-type="bibr" rid="ref6">6</xref>
]. Beckett and Yaseen [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] report that while many stakeholders recognize generative
AI’s accessibility and ease of use as an advantage, opinions diverge on its broader implications. Some
view it as a tool for enhancing content production and workflow automation, while others express
concerns about misinformation, bias, and ethical challenges.
      </p>
      <p>
        In 2023, significant events highlighted the growing impact of these tools on journalism. A notable
example is the historic agreement between Springer and OpenAI, whereby content produced by major
outlets such as Bild, Die Welt, Politico, and Business Insider would be used to train AI models. Some
media organizations, such as the Associated Press, have also signed agreements with OpenAI, recognizing
the potential of GenAI to enhance content production and distribution, while others, such as The New
York Times and Getty Images [
        <xref ref-type="bibr" rid="ref8">8</xref>
] initiated legal cases against OpenAI and Stability AI for unauthorized use of their
content for training purposes. Recently, in Italy, agreements between OpenAI and major publishing
groups such as GEDI and RCS have been disclosed, generating protests and discontent among
professionals. The Italian national data protection authority, the Garante della Privacy, has fined OpenAI,
the company behind ChatGPT, for violating transparency requirements, failing to meet its
obligations towards users, and lacking safeguards concerning the age of minors [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
In this regard, the Italian government is beginning to transpose the European AI Act regulations,
especially from an editorial point of view, focusing on a standardised certification (a marking) for
the traceability of AI-generated content, the valorisation of copyright, the defence of employment
profiles and the profession, vigilance over competition, and attention to Retrieval-Augmented
Generation (RAG) techniques for indicating the sources of the outputs of generative models, whose
training content must be kept in a special register [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The professional code of ethics has been updated
by the relevant regulatory body, establishing in Article 19 the principle of non-substitution of humans
and the instrumental role of AI in support of human labor[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
This scenario underscores the divide between different views on the role AI systems, notably
Generative tools such as LLMs, should play in the future of journalism and the consequences this
could have for our democratic societies. Indeed, it has been recognized that AI-based systems are
affecting the public arena, understood as the ensemble of “media infrastructures” which organize
the production, distribution of, and access to information while also mediating relations between different
actors [
        <xref ref-type="bibr" rid="ref12">12</xref>
], with particularly relevant consequences in terms of the epistemic agency of the public [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        This study aims to advance the understanding of the adoption of generative AI in newsrooms and its
potential critical consequences for democracy, conceived in the light of the notion of communication[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ],
emphasising the co-construction of technology [
        <xref ref-type="bibr" rid="ref15 ref16 ref17">15, 16, 17</xref>
], as well as the negotiation of professional
agency and the human-in-the-loop aspects of AI applications in journalism [
        <xref ref-type="bibr" rid="ref4 ref6">4, 6</xref>
]. The study focuses on
the Italian context, recognized in the literature as a “polarised pluralist” media ecosystem with its own
peculiarities [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], in two ways.
      </p>
      <p>
First, the study investigates how journalists are effectively using AI and Generative AI in Italian
newsrooms through in-depth interviews with professionals who engage with AI in their routine
workflow or because of ethical and professional concerns (some interviewees are also members of labor
unions or of the deontological body). The study then examines how professionals assess the risks
and broader implications of Generative models in order to promote their responsible integration within a complex
negotiation of their autonomy and agency. Risks and concerns are analyzed through the lens of
sociotechnical imaginaries, an emerging sociological framework that captures diverse perspectives
on AI narratives and discourses [
        <xref ref-type="bibr" rid="ref19">19</xref>
]. By exploring the responsible adoption of generative AI in the
information ecosystem, an issue widely recognized in the literature [
        <xref ref-type="bibr" rid="ref20 ref4">20, 4</xref>
], this study provides a
deeper understanding of the challenges specific to Italy, examines their wider implications, and highlights
the inherently political nature of AI adoption within journalism. Thus, to critically engage with the
political implications of generative AI in journalism, this paper adopts an approach that emphasizes
the co-construction of technological systems and professional practices in a specific national context.
Specifically, this study explores: i) How is Generative AI being employed by Italian journalists? ii)
How do professionals perceive and envision the role of Generative AI in their workflows, and how do
they negotiate their own agency in shaping or resisting the adoption of AI in newsrooms? iii)
In what ways could this affect an evolving public arena?
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background. The widespread adoption of AI in newsrooms and why that matters for democracy</title>
      <p>
        The adoption of artificial intelligence (AI) in journalism has sparked widespread discussion over recent
years regarding its implications for both the profession and society. AI, particularly in its earlier forms,
has been deployed across various domains, from news gathering to content production and data analysis,
from identifying relevant news stories to personalizing content distribution via recommendation systems
[
        <xref ref-type="bibr" rid="ref21">21</xref>
]. From its initial applications, the use of AI in journalism has been marked by a growing emphasis
on “robot journalism” [22] and “data journalism”, as well as “algorithmic journalism”, where algorithms are
used to produce news from structured data and to automate story generation [23, 24]. These technologies
have facilitated faster content production, enhanced news recommendation and distribution systems
[25], and improved source verification techniques [26]. While the increased productivity and
efficiency brought by automation may allow journalists to dedicate more time to in-depth reporting and investigative work,
there are also significant risks, ranging from bias to the replacement of human labour and threats to editorial
independence, in addition to content ownership, copyright violations and transparency regarding the
use of data in training the models [
        <xref ref-type="bibr" rid="ref7">27, 28, 7, 29</xref>
]. Now, LLMs are reshaping the landscape once again,
and the introduction of Generative AI tools could also compromise the accuracy, authenticity and
credibility of journalistic work [30]. While initially celebrated, their potential has been tempered
by challenges such as hallucinations in generated content and controversies at the regulatory, editorial, and
business levels [
        <xref ref-type="bibr" rid="ref6">6</xref>
]. In short, AI calls into question the very nature, role and workflow of journalism, as
well as its goals and future [
        <xref ref-type="bibr" rid="ref20">20, 31</xref>
        ].
      </p>
      <p>
The impacts of these technologies on media and journalism point to broader implications for
democratic societies, affecting fundamental values of the journalistic profession as outlined in Articles 3, 8,
and 9 of the Global Charter of Ethics for Journalists, which respectively address the reliability of journalistic
sources, respect for privacy and human dignity, and the duty to disseminate information without
promoting hatred or discrimination [32]. Moreover, by affecting the information ecosystem [33], and
by consequence the public arena, envisaged as the “media infrastructures that enable and constrain the
publication, distribution, reception, and contestation of information that allow people to exercise their
rights and duties” [34], AI could have a transformative effect on democracies. The study in [
        <xref ref-type="bibr" rid="ref13">13</xref>
] stresses
the challenges these tools pose to the production and recognition of truth, with potentially severe
consequences for the public’s ability to be properly informed and to act in our digital societies.
In fact, AI systems are reshaping how news is produced and under which economic and
organizational conditions [
        <xref ref-type="bibr" rid="ref12">12, 35</xref>
        ]. According to some, democracy will have to be reimagined in the
new communication paradigm [36].
      </p>
      <p>
        These challenges underscore the importance of identifying mechanisms for properly integrating
technology into journalistic work, while also recognizing contextual and organizational factors that
shape the outcomes of AI adoption [
        <xref ref-type="bibr" rid="ref4 ref6">4, 29, 6</xref>
]. Thus, which values should guide journalism, how to effectively
govern the integration of these systems in the newsroom, and which actors are shaping this fast-evolving
information ecosystem are crucial questions at stake [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. International examples also reveal diverse
approaches to AI integration. In Sweden, collaboration among media organizations has prioritized
transparency and collective learning about AI applications [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Analysing the Dutch and Danish context,
Cools and Diakopoulos[
        <xref ref-type="bibr" rid="ref5">5</xref>
] assessed the perils, possibilities and conditions for the responsible use of GenAI
tools from journalists’ perspectives, highlighting the strong importance of ethical guidelines
and “AI Task Forces” composed of journalists and technologists. Simon [31], drawing on
the classical work of Hallin and Mancini [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], in his study of “liberal” information ecosystems such
as those in the USA and UK, as well as the “democratic corporatist” model in Germany, observed a
rationalization of news production and a shift in power and control within the information ecosystem
facilitated by AI. These studies highlight how the adoption of AI in journalistic work is influenced by
the tradition that shapes the information ecosystem in a complex and context-specific way.
      </p>
      <p>
Thus, to understand the implications of AI adoption in Italy, it is essential to consider the broader
features of the national media system, which aligns with the “Polarised Pluralist” model described by
Hallin and Mancini [
        <xref ref-type="bibr" rid="ref18">18</xref>
]. In this context, the media system is characterized by high levels of political
parallelism, strong ties between media outlets and political elites, and a tradition of commentary-driven
journalism over impartial reporting [37, 38]. This model is historically rooted in the delayed
liberalization of the media market, leading to a landscape where economic sustainability often relies on state
subsidies or party affiliations, with recurring criticisms of journalistic autonomy [39]. Media instrumentalisation
and partisan editorial lines persist, shaping how innovation, including AI, is interpreted through
political, and not only technical, logics. As De Blasio et al. [40] argue, the platformization of news further
exacerbates political polarization, limiting pluralism and increasing ethical concerns over technological
mediation. Furthermore, underlining the fragility of this context, data on working conditions highlight
notable disparities based on professional status: professionals earned €67,621, publicists €29,430
and trainees €19,215, with a gender pay gap of 16% favoring men¹. A significant imbalance was
observed in employment contracts: salaried journalists had longer contracts and higher pay, while
70% of freelancers earned less than €25,000 annually. Freelancers outnumbered employees across all
age groups². While research on Italian newsrooms indicates hesitancy toward AI, driven by concerns
about costs, job displacement, and the lack of a clear editorial strategy for its integration, AI
is still seen as a complementary tool to improve efficiency and automate routine tasks rather
than replace human labor [41]. However, Italian public discourse on AI has shifted significantly
with the advent of tools like ChatGPT. Before ChatGPT, AI narratives were predominantly optimistic,
emphasizing productivity, well-being, and security; its introduction, however, spurred concerns
about job displacement, human obsolescence, and ethical dilemmas [42]. Degli Esposti and Tirabassi [43]
indicate that the salient issues arising from interviews conducted in 2022 with Italian journalists are the
recognition of the need for training and common guidelines on the proper use of AI, as well as the risk
of increasing the spread of fake news. Overall, the widespread adoption of new technologies such as
LLMs is once again reshaping the landscape and calls for dedicated analysis. This evolution underscores
ChatGPT’s role as a pivotal moment in Italian AI discourse, fostering a more critical understanding
of its societal implications and the need to further investigate it from journalists’ perspective.
This structural backdrop is key to interpreting how AI is integrated, negotiated, and socially shaped by
Italian journalists, and how their imaginaries about AI adoption take form: these do not emerge in a vacuum but are
deeply entangled with the history of media-politics entanglements, the weak autonomy of editorial
institutions, and ongoing legitimacy crises.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Method</title>
      <p>
The study adopts a qualitative research approach drawn from the realm of interpretive sociology, which
emphasizes the subjective meanings that individuals and groups ascribe to their social worlds [44, 45]. This
study is informed by the frame of social constructionism in technology, particularly through the work of
Langdon Winner [
        <xref ref-type="bibr" rid="ref15">15</xref>
], Bijker and Pinch [
        <xref ref-type="bibr" rid="ref16">16</xref>
] and MacKenzie and Wajcman [
        <xref ref-type="bibr" rid="ref17">17</xref>
], which offer an important
analytical perspective on how technologies are socially shaped and constructed through interactions
between various social, political, and cultural actors. In this view, technologies like AI are not just tools;
they are social products that emerge from collective decisions and negotiations. The introduction of
AI in journalism is therefore not only a matter of technological development but also one of social
shaping, in which journalists, tech developers, regulators, and audiences interact to influence the direction
and use of AI systems.
      </p>
<p>Another cornerstone of this research stems from studies on sociotechnical imaginaries, defined by
Jasanoff and Kim [46] as the visions that societies hold about the future of technology, whether positive
and desirable or negative and dystopian, and how these visions influence policy and governance decisions.
Starting from this intuition of the co-constitutive relationship between techno-scientific activity and the
political order, scholars argue that imaginaries are contested and differentiated, and that a good part of digital
governance appears to occur around technology and its rhetorics [47], also entering high-level arenas, as
in the negotiation of the AI Act, contributing to the development of policy frames [48].</p>
      <p>Imaginaries are thus interactional sense-making activities, in the broader political economic,
organisational, situational and technological contexts (...) and can be analysed with a
methodology that pays attention to interactions, discourses and technologies in the situations
of interest[49].</p>
      <p>
Sociotechnical imaginaries emerge from narratives and discourses focused on the potential
development of technology and its contextual factors [49], and they shape and organize how society interprets
technology [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Bringing forward imaginaries could be crucial in order to ‘mind the gaps’ between
different visions, and to recognize the perspectives of users and actors beyond technology developers [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
¹ https://www.ilsole24ore.com/art/tra-giornalisti-dipendenti-prevale-contratto-tempo-indeterminato-AGjViGdD?refresh_ce=1
² Ibidem.
      </p>
<p>The study integrates semi-structured interviews [50] to gather rich, context-sensitive data, capturing
both individual reflections and collective insights. A total of 17 professionals were interviewed: 13 through
semi-structured individual interviews and 4 in one collective semi-structured interview. Respondents were
selected on the basis of their direct experience with AI technologies, whether through their newsroom
practices, ethical concerns, or governance-related roles as members of labor unions (FNSI, Stampa
Romana) or of the deontological body (Ordine dei Giornalisti). The average duration of the individual
interviews was 45 minutes. The collective interview, lasting one hour, was conducted with members of
Il Manifesto’s newsroom, and the interviewees discussed with the interviewer their distinctive approach
to technological development in journalistic practices. The data collected were analysed according
to the thematic analysis framework [51].</p>
<p>Insert Tab. 1.1 and Tab. 1.2 for the sampling of interviewees (see Appendix).</p>
<p>The interview guide was designed to explore the uses of AI and Generative AI in the Italian
information system; the perspectives of journalists, expressed through their sociotechnical imaginaries,
on the positive and negative implications for the profession; governance issues; and implications
for the public arena, focusing in particular on the most relevant events in the Italian context as
described above.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. The emerging role of LLMs in the Italian information ecosystem</title>
<p>All Italian newsrooms are experimenting with AI and equipping themselves with teams to understand
its implications and guide the digital transition. As we will see in more detail later on, the adoption of AI,
and in particular of more sophisticated technologies such as LLMs, may entail varying degrees of conflict
and resistance on the part of newsrooms. Before going into the specifics, it is worth contextualising the
use of AI in the Italian information ecosystem. This ecosystem tends to lag behind the newsrooms of
other Western countries such as Germany, Great Britain, France or the United States, and its use of AI
is still unstructured and uncodified. As one interviewee states:</p>
<p>Abroad it has been used for a decade or so in all sectors [...] AP started in the financial
sector, automating texts about listed companies [...] The BBC in 2016 was using AI to
make its journalists speak in other languages they do not know. [...] These are structured
interventions built into the organisation’s workflow. [...] There is an important delay in
Italy due to the culture of innovation on the one hand; on the other, the editorial offices
are now decimated, with few journalists, crushed by routine, who have no space or time to
dedicate to researching or studying these systems. The study of the workflow, of the problems
that are time-consuming [...] is somewhat lacking in this structural and headline-driven
application. (N2)</p>
<p>Another interviewee defined the news industry as “short-sighted”, due to the fact that “before being
publishers they are building contractors, car manufacturers, fuel traders, or they are banks, insurance
companies” (N4). (N6) discloses: “the bottom line is this: the newspaper is still seen as an instrument of
power to interface with politics”.</p>
<p>This is coupled with awareness of an ever-changing information landscape, with the emergence of
players producing and distributing content on the market in direct competition with traditional newspapers,
which further complicates the scenario for outlets already battered by the advent of the web. Moreover,
the distribution of articles on the web now has to deal with the distribution services of the large
platforms, which promote content according to the audience in ways that are not transparent to
journalists, who can only “indirectly guess the criteria by which one of our pieces is shown to the various
users”, as a reporter recognizes (N9). This respondent considers this new complexity for traditional
newspapers by lamenting how the importance of SEO (Search Engine Optimisation) techniques
borders on click-baiting, with negative effects on the quality of the content and the expectations of
the audience. The overall scenario seems destined to become even more complex, with the emergence
of generative search engines, such as SearchGPT, which challenge a part of the market that was
traditionally occupied by others and once again change the criteria for the visibility of content, without
providing clear guidance on the matter.</p>
<p>What everyone seems to agree on is the role that tools such as ChatGPT, Copilot, Claude
or Gemini have in the journalistic workflow: each is a helper, but a feared one, because its future
capabilities are in doubt. In a scarcely structured and codified framework, with editorial offices reduced
to the bone, there are those who use it to make up for the scarcity of editorial resources, for the
“inauspicious work, that of filling the gaps”, allowing for cuts in working time (N9); those who use it to
gather ideas and materials for a piece, to “analyse databases or extract information from them”, but
always with “the human in the loop” (N12). It also serves as a supportive tool for translating or
summarizing content, as well as for optimizing SEO, with recognized possibilities for fostering
distribution across different countries (N6). The promise of efficiency is described by N2: “the reduction
of working time could open possibilities for the qualitative part of the work, such as the writing, the
research of sources, analysing, investigating”. Still, the impact is uneven and “sector dependent”: “we
who do local reporting are more on the street. I think the greater impact is for those working on foreign
affairs, because of language issues and the constant research you have to do” (N9). The names journalists
use to characterise LLMs indicate a hierarchy: “a valet” (N4), an “assistant” (N12), or a “colleague” who
does not have the same skills (N9). There are also some who simply refuse to employ these systems:
“I’m ideological in this, I don’t want to use it, but that’s just how I am. I know my colleagues use it as a
facilitator of their work” (N13). Also contributing to the subordinate position of LLMs is the recognition
of various limits which, together with the rules of deontology, always impose on the journalist
verification and control of the output provided by Generative AI. (N7) is emblematic:</p>
<p>I have dealt with the Mafia for some time. I had written an article on Matteo Messina Denaro.
Out of curiosity, I gave prompts to GPT to see how it would do such an article. It is full of
gaps. In specific fields, the expectation is never the same as the final realisation.</p>
<p>Others complain about its inability to work properly with quotations from an interview, or with
the analysis of themes within a body of text: “you have to review everything it gives you” (N9). In
addition, hallucinations, bias and explicability are considered problematic dimensions, both in terms of
the reliability of the content produced and the understanding of how it is produced. (N1) makes it clear
that there is a risk that the algorithm expresses value judgements that are “totally arbitrary”, with the
consequence that “even on a seemingly non-dangerous article, considerable damage can be done”. The
fallout, given that for the Order of Journalists it is always the journalist who is responsible, is the risk
of lawsuits over carelessly generated titles and subtitles. Overall, in this unstructured context, the use
of generative AI affects, even if just as an “assistant”, all the stages of the workflow.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Sociotechnical imaginaries of risks and dystopian scenarios</title>
<p>Journalists were prompted to elaborate their visions of the adoption of LLMs and AI in their
workflow, focusing especially on critical aspects, risks, and concerns. Four different imaginaries
emerged about the potential broader effects of this technology on the information ecosystem and
society.</p>
        <p>LLMs: augmenting or substituting?</p>
<p>If, on the one hand, everyone seemed to agree on the auxiliary, subordinate role of Generative AI, which
feeds the idea of the irreplaceability of the journalist, mainly due to the limits of the state of the art of
current technology, on the other hand, the fear of losing control of the tool, of its ungovernability and
the impossibility of understanding it, caused by its power, emerges strongly. As (N4) argues, “news
gathering will continue to be done by the reporter, the correspondent”. “[LLMs] don’t go and talk to the
policeman, the prosecutor, the deputies” (N9) adds. As we have seen in the previous sub-section, LLMs
are seen as helpers within a hierarchical perspective which places humans at the top. This idea of
the uniqueness of the human being in news gathering is reinforced by (N7), vice-director of a national
news agency, who states that the automation of the information flow from the internal desk to the
external editorial offices, that is, as a primary source, will never happen, because “it would compromise
the transparency of information and its quality upstream”. Nevertheless, job cuts are strongly feared,
mostly “for the desk workers, the graphics, the designers” (N9). The reduction of the workforce could
be facilitated by “a cost-cutting approach” (N1). What raises concern is not the tool in itself, but the
approach of publishing companies in the economically fragile context they are experiencing (N10), as
(N4) clearly considers:</p>
        <p>Publishers have in their hands a technology with low cost but huge operational impact [...]
and it is a technology that has entered into daily use. The risk that some publishers, even
the most emblazoned ones, might want to use AI [instead of journalists] one day is real. [...]
This is the industrial revolution of our time and it may produce job cuts. (N4)</p>
        <p>This idea of substitution, recognized as a broader phenomenon - “ICT is replacing professions that
seemed unassailable” (N3) - is accompanied by the fear of the loss of control of the systems themselves.
It recalls the theme of explicability, but declined in relation to the power of the tool: “This type of
technology is so powerful and potentially powerful that caution on the part of those who know it, test
it and develop it is inevitable, the risk is that it will get out of control” (N8).</p>
        <p>Mistrusted Information Ecosystem</p>
        <p>The relationship with readers is also a hot topic. “Trust [in journalism] is at stake, but also the
reputation of journalists” (N1). Readers’ reactions to machine-generated texts - given that their use must
be declared - are perceived as strongly uncertain: journalists frame this as an issue of trust. As (N12)
complains, “If I say this piece was 20 percent done by AI, what does it look like?”. There is uncertainty
about readers’ reactions to this phenomenon, given how many socio-economic and cultural factors
contribute to technological acceptance. Related to this is the risk of fuelling phenomena such as filter
bubbles or echo chambers, with a tendency, exacerbated by SEO and its generative implementations,
“to profile users” so that “unconfirmed prejudices, ideas, beliefs are reinforced” (N1). Of course, the
issue of trust also involves disinformation, with the idea that the journalist’s job increasingly
becomes that of intervening in the information ecosystem to cleanse it of falsehood, distinguishing
themselves through “the ability to certify the trustworthiness of a source and content” (N8).</p>
        <p>Information Ecosystem Driven by Machine Interests</p>
        <p>Related to the centrality of AI systems in dictating news visibility and editorial authority is the
gatekeeping function played by the holders of digital distribution channels, such as Google Discover,
with OpenAI’s role on the rise. As one head of digital at an Italian newspaper made clear, the problem of
visibility on these platforms, as well as on systems such as OpenAI’s ChatGPT following commercial
agreements with publishing groups, raises further questions:</p>
        <p>How are those sources put in order? Who ranks first? Who pays the most, as is already
the case for some products on Google? Or on the basis of authoritativeness? And how do
we define authoritativeness? These are business questions but above all social implications
because they concern access to information. (N12).</p>
        <p>Access to information is increasingly mediated and decided by algorithms and AI systems, with
Big Tech companies needing to feed LLMs with news content. As one respondent
stated, “The Gedi-OpenAI agreements are the extreme consequence of following the machine’s interests
instead of those of readers” (N10). In this scenario, journalistic independence is challenged by the
prioritization of profit over the public interest (conceived as the “citizens’ right to be informed”, N4), the
ranking of sources based on payments or biased measures of “authoritativeness”, and an amplification effect of disintermediation.</p>
        <p>Your archive, your data are the newsroom’s oil well, and you are handing them over to a
third party [...] If GPT provides me with the answers I seek, I will lose interest in subscribing.</p>
        <p>This becomes an existential risk for journalism (N2).</p>
        <p>Surveillance and Control</p>
        <p>Confronted with gigantic private enterprises, journalists are aware of the growing asymmetries of
power, of the risk that publishers lose their independence, and of disintermediation. These elements
create a scenario of control and surveillance in which democracy is challenged. In this dystopia,
information flows are strictly controlled and true pluralism eliminated, creating an environment where
dissent is suppressed and democratic debate is stifled. As one interviewee clearly expresses:
Information without debate, made up of controlled flows, controlled sources of news, does
eliminate dissent, true pluralism, any form of true antagonism of constituted power. This is
not a political discourse but an institutional one, about the functioning of democracy. This
is the real point. The information on which we have been moulded over the last twenty
years is a highly controlled information (N3).</p>
        <p>But professionals show a strong awareness of the need not to give in to the temptation of easy
technological determinism. As one editor-in-chief of a national newspaper well explains:
I wouldn’t give technology the ability to change these principles [the democratic ones], it
is a tool, it helps, it can be used well or badly but it doesn’t have that kind of function in
itself. You have to be aware that misuse can produce damage, and that there can be some
malicious people [...]. In the power of technology as a thaumaturgic power, as a power to
change democracy I do not believe in it (N8).</p>
        <p>Instead, there is a strong awareness that social, economic, and cultural dynamics will determine these
possible outcomes. In the midst of this complexity, journalists also reflect on how to address these
challenges by integrating technologies in a responsible way.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Towards Responsible LLMs in the Information Ecosystem: Alternative Governance Strategies for AI</title>
        <p>In order to face the uncertainty surrounding the integration of AI and GenAI in the information
ecosystem, journalists claim a major role in this technological shift and call for support at the national and
international level, because “technological innovation, if not properly governed, exacerbates the problems of
journalism” (N4). Therefore, there is the necessity both of regulating AI’s learning processes and ensuring fair
compensation for the knowledge it absorbs, and of considering how AI-generated content should be
controlled and verified. As two prominent members of the national union state, “journalists must be part
of the strategic development of newsrooms [...] and not having to undergo these choices” (N5). A central
issue emerging from the interviews is the economic exploitation of journalistic content by AI systems.
As one journalist, member of a regional union, highlights:</p>
        <p>AI is fed on content that is not its own. Inside AI there is our work. If the alienation
of value once occurred through hours spent working, today intellectual labor is absorbed
through agreements, ending up in an AI that appropriates it forever for uses we do not even
understand (N3).</p>
        <p>This raises concerns about fair compensation and demands a reevaluation of remuneration
mechanisms to ensure that the value extracted from journalistic work benefits newsrooms, not just technology
companies. N4 underscores the imbalance by pointing out that AI models not only retrieve journalistic
content but also train on it, meaning that “the mechanisms of value must be redesigned”. Copyright is a
central battleground in this sense, with N10 noting that “copyright is currently the only tool available
even if it is not ideal” after years and years of “looting” by OpenAI, which has also left room open for
privacy concerns as well.</p>
        <p>The issue of editorial control and accountability emerges as another major challenge. The inability
to oversee or amend AI-generated content can create legal and ethical problems, which could severely
undermine trust in newsrooms. N14 notes that “what OpenAI does with that data is difficult to
understand. Modifying already published information is crucial for newsrooms due to legal implications,
AI errors, and factual updates”. This underscores the necessity of mechanisms that allow journalists to
track, amend, and verify AI-generated content, with amendability emerging as a fundamental value in the
puzzle. Furthermore, this demand for transparency regarding how AI systems process, store, and utilize
data is considered critical to protect both journalists and audiences.</p>
        <p>Thus, even though Italian newsrooms and institutional bodies have not yet published comprehensive guidelines,
the preservation of journalistic ethics remains paramount, as N1, president of the deontological order,
states: “The principles of professional ethics remain the same. Journalism has evolved over decades and
centuries with these principles, which must now adapt to new production methods and tools”. However,
he notes that AI-specific ethical guidelines remain underdeveloped and that tracing the origins of
AI-generated content is crucial, as he argues that “journalistic content must show readers how it was
constructed. From there, you touch all sector regulations.” Legal and regulatory frameworks play a
fundamental role, and Italy is working through an ad hoc commission and a law under discussion in the Senate,
which proposes a standardised certification for the traceability of AI-generated content, the defence
of employment profiles and the profession, vigilance over competition, and techniques to
properly understand the outputs of GenAI, whose training content must be kept in a special register
[52]. N12 is concerned about the “professionalistic approach” of the proposal, which could make it
difficult to adapt it to courts, for example. Others pointed out the difficulty of effectively tracking all
content produced with AI and, once again, the uncertain effects on readers’ trust.</p>
        <p>Investing in research and development is considered a core strategy for the responsible integration
of Generative AI into journalistic workflows. N2 highlights the need for “experimentation and transparency,
identifying problems to be solved in order to integrate AI structurally within news organizations
rather than relying on individual initiatives”. He also underscores the necessity of “new professional
competencies to mitigate risks”. N10 also emphasizes the importance of algorithmic accountability,
advocating for “reverse engineering techniques - a mix of computational journalism, coding, and
investigative reporting - to uncover algorithmic biases and resist opaque AI systems”.</p>
        <p>Training and education are also crucial. At Il Manifesto, as we shall see later on, the editorial board
is engaged with technologists to build an in-house AI model with the ambition of being an
AI open to the community of readers. A sort of “AI Task” has been formed at La Repubblica, with
some journalists attending courses at Oxford, with Reuters and international colleagues, to come back
equipped with the necessary knowledge to navigate this technological transition and transfer skills
in-house. The need for education is not limited to journalists but, according to N12, extends to users,
because these tools “determine their access to a whole range of services, information, products”. Overall,
governance of AI in journalism requires a multidimensional approach that balances technological
innovation with editorial autonomy, fair remuneration, and accountability. From legal measures and
professional training to ethical codes and transparency mechanisms, structured interventions are
necessary. Ultimately, a responsible approach in integrating AI and LLMs in journalism is not just a
technical issue but a political one, demanding active engagement from journalists, policymakers, and
media organizations.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.4. Beyond Subordination: Journalism and Control over Innovation</title>
        <p>In September 2024, during the Italian Tech Week, John Elkann, president of the Gedi Group, announced
an agreement between Italy’s largest media company and OpenAI. The deal was framed as part of Gedi’s
digital transformation strategy, leveraging AI for content translation and improving search results
for Italian users of ChatGPT, granting OpenAI access to Gedi’s archives 3. Journalists were informed
about this decision hastily on the same day as its public announcement, during a strike organized
by the newsroom. As one journalist of La Repubblica recalls, the contents of the agreements were
announced “at a general level without going into specifics” (N13). An editor-in-chief of La
Repubblica reconstructs:</p>
        <p>“Mindful of past experience [...] I am talking about social media, about the changes in the
flow of information... With the advent of AI they wanted to play in advance, because there
is a reciprocal need: on the part of the platforms, to educate their intelligence and provide
verified content; on the part of the publishers, the need to no longer be passive, to come
out of the game so as no longer to have someone scraping your content without your control.
Hence the agreements with major publishing companies, agreements about which little
is known but which involve integrated services between search and artificial intelligence”
(N8).</p>
        <p>3. https://www.milanofinanza.it/news/editoria-accordo-tra-john-elkann-e-sam-altman-sull-ai-i-contenuti-di-gedi-in-italianovanno-su-202409261434075592</p>
        <p>The agreement with OpenAI is seen as detrimental to the profession and to journalists’ autonomy, and it is
considered the expression of a “self-destructive approach” that leaves newsrooms “subordinate to Big Tech
companies” (N4). Moreover, it intensified existing power asymmetries in the information ecosystem,
highlighting the broader labor-capital conflict in AI adoption. The labor union emphasized two main
demands: AI should support, not replace, journalistic work, and fair compensation should be ensured
for intellectual labor. One union representative cited Article 14 of the national contract, arguing that
publishers should compensate journalists when their work is transferred externally:
Normally, when hired, the employed journalist gives up his journalistic and
photographic content because it is used by a publisher’s various platforms:
cross-mediality, which is already provided for in the contract. Where does this end, and where does
Article 14 - which requires that a fee be paid for the use - begin? It begins when the content
is transferred externally (N4).</p>
        <p>Despite these efforts, journalists faced significant challenges in negotiating with entities that possessed
financial resources exceeding the GDP of some sovereign states. The EU copyright regulations, the AI
Act, and recent decisions from the national copyright authority provided some legal safeguards, but
structural issues persisted. A crucial demand is the involvement of journalists in AI-related
decision-making processes concerning technical improvements in production, as stipulated in the Workers’
Statute:</p>
        <p>The game is for journalists to be part of the strategic development directorates [...]. One
challenge is to understand how to acquire the most useful technologies for journalistic work,
which must be part of the process of deciding what is needed, and not having to be subjected
to these choices (N5).</p>
        <p>How to regulate Generative AI, particularly following the agreements between major publishing groups
such as RCS and Gedi with OpenAI, has become a central issue in the renegotiation of journalists’
national contracts. The contents of these agreements, protected by trade secrecy, have not
yet been disclosed in detail to journalists, who are demanding greater transparency and adherence to
regulations safeguarding journalistic work and intellectual property rights. A labor dispute is currently
underway, involving all national publishers, newsroom representatives, and policymakers. LLMs have
intensified the debate on AI in Italian politics, underscoring the need to address key issues such as value
creation through machine learning, labor protections, control over content and the training processes
of foundation models, and ensuring that technology serves the interests of newsrooms.</p>
        <p>It is worth focusing on a different approach in the Italian context, one which shows how to deal with these
problems in a less conflictual way. Il Manifesto, an editorial social cooperative, pursued a distinctive
path by developing its own AI tool, Memoria Manifesta (MeMa), in collaboration with the start-up
Isagog. Unlike La Repubblica’s agreement with OpenAI, Il Manifesto prioritized data ownership and
transparency. MeMa mainly uses data from its own archive and enhances the newsroom’s ability to
process and retrieve archival content through knowledge graphs, semantic search, and summarization
tools. Besides certainty about the data employed by the model, developing a local in-house AI can bring
other advantages, such as greater environmental sustainability - since far fewer parameters are activated
compared to OpenAI’s models - more transparency and explicability, and a reduced risk
of disintermediation. As the technical development managers make clear:
If you do a search on 7 October, it returns a series of summaries generated from articles
cited as sources. There is a strong element of transparency compared to generic chatbots,
which also begin by providing sources, but over these there is no control: you cannot tell
GPT which to consider and which to exclude, nor is there any way to control how it takes
information from these sources (N17).</p>
        <p>Here from MeMa we extract concepts, ontologies, people, places, entities that allow you to
do completely different consistency-checking operations compared to AI systems based on
neural networks (N16).</p>
        <p>This gain in explainability and transparency has consequences for journalistic work and answers
the need conceptualized by N14 as amendability: the faculty of correcting an output, whether it is
generated by a human or by a machine.</p>
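The contrast the interviewees draw between MeMa and generic chatbots - answers whose sources are explicit and can be included or excluded by the newsroom - can be illustrated with a minimal retrieval sketch. This is a purely hypothetical toy: it uses naive keyword overlap in place of MeMa’s actual knowledge-graph and semantic-search machinery (whose implementation is not public), and the article identifiers are invented. The only point it demonstrates is that every returned snippet carries the identifier of the archive article it came from, so sources remain auditable.

```python
# Toy illustration of archive-grounded retrieval: each answer snippet
# travels with the identifier of the archive article it was drawn from,
# so the newsroom can audit, include, or exclude sources - unlike a
# generic chatbot whose source selection cannot be controlled.
# Hypothetical sketch; not MeMa's real implementation.
from dataclasses import dataclass

@dataclass
class Article:
    article_id: str   # invented archive identifier scheme
    text: str

def retrieve(query: str, archive: list[Article], k: int = 2):
    """Rank archive articles by naive word overlap with the query and
    return (article_id, text) pairs: the citation stays with the text."""
    q = set(query.lower().split())
    scored = sorted(
        archive,
        key=lambda a: len(q & set(a.text.lower().split())),
        reverse=True,
    )
    return [(a.article_id, a.text) for a in scored[:k]]

archive = [
    Article("mema-001", "strike at the newsroom over the AI agreement"),
    Article("mema-002", "archive digitisation project completed"),
    Article("mema-003", "union demands transparency on the AI agreement"),
]

results = retrieve("AI agreement", archive)
# Every result is traceable to a specific archive entry by its identifier.
```

The design choice this sketch mirrors is the one N17 describes: because retrieval is restricted to the newsroom’s own archive, the set of candidate sources is fully known and controllable, which is precisely what cannot be guaranteed with a general-purpose chatbot.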
        <p>Moreover, part of the project is to extend this process of controlling the knowledge produced by
the AI to a wider public, realizing a “community AI” in which members would be able to correct
some entities retrieved by MeMa, or validate specific content generated by it, fostering transparency
once again. As they showed me in the videoconference:</p>
        <p>There is this important role, for example, this is the archivist, who has the power to say this
content is good, this correction is good, and then, as you see, the display of the entities
changes and shows the reader that it is content blessed by a human who has interpreted it
and judged it to be correct, whereas those other entities were only interpreted by MeMa.
It is the journalists who taught me (N16).</p>
        <p>The engagement of the public, or of specific members of the community, is expected to foster
readers’ trust by involving them in fact-checking procedures while enhancing their critical thinking on the
use of AI. Furthermore, this approach of cooperation and co-development has put journalists and other
professions - such as the archivist - in a position not to suffer innovation as an extractive imposition,
but to experience it as an opportunity to broaden and renew skills.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>
        This study expands existing research on how the Italian media landscape is dealing with AI
systems [43, 41], and notably with Generative AI, by focusing on professional practices and governance
strategies through the lens of sociotechnical imaginaries. The analysis, grounded in a set of in-depth
interviews, provides empirical insight into how Italian journalists are responding to and helping shape
the introduction of generative AI within a pluralist-polarised media ecosystem. Unlike other contexts
where AI implementation appears more structured and codified - such as Denmark and the Netherlands
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] or the UK, Germany and the US [31] - the Italian media landscape presents widespread AI use across
various stages of the journalistic workflow, with LLMs used in a largely unstructured and experimental
way, most professionals deploying them for supportive tasks such as translation, summarisation,
SEO optimisation, and content ideation. However, although adoption appears fragmented and
unevenly distributed across newsrooms, journalists maintain a strong commitment to the principle of
“human-in-the-loop”, reinforcing their own centrality and accountability.
      </p>
      <p>
        In response to the second research question, our findings show that journalists conceptualise LLMs in
instrumental terms, often as “assistants” or “colleagues” with limited capability. However, some critical
aspects are particularly worth noting and necessitate human oversight to verify and control content
before publication - aligning with findings from other studies [
        <xref ref-type="bibr" rid="ref4 ref6">6, 4</xref>
        ]. AI-generated hallucinations and biases,
the subpar quality of outputs on sensitive topics (e.g., organized crime), and the lack of transparency
in training processes and output generation, coupled with the exploitation of journalistic intellectual
labor, have sparked resistance, if not outright rejection. It is therefore unsurprising that journalists,
when asked to reflect on the broader implications of Generative AI development within the information
ecosystem, expressed deep concerns regarding job displacement, the erosion of editorial independence
and relevance, the deterioration of public trust in Italian journalism, and the growing interest of major
tech corporations in leveraging journalistic content to train ever-larger models for private gain at the
expense of the public interest.
      </p>
      <p>
        These concerns are compounded by opaque agreements, such as those between OpenAI and major
Italian publishers, which exclude journalists from strategic decision-making processes, fueling internal
tensions and labor disputes and fostering power struggles over information control, as also recognized
in [31]. Furthermore, echoing concerns raised in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and [29], as already noted, the interviews
underscore the risk of dependency of news organizations on platform-based AI infrastructure,
undermining editorial autonomy, and the lack of institutional safeguards for content control, data sovereignty,
and labor protection. If it has been argued that Facebook, Google or Twitter are new actors in the
production and distribution of information in our digital societies [34], LLM providers are perceived as
new actors entering the information market with force, with possible negative effects on the quality
of information produced by LLM-based tools and on the distribution of, and public access to, information
itself, with possible negative consequences for readers’ trust. While some resist AI outright, others
advocate for inclusive governance strategies, fair compensation models, and participatory design. The case
of Il Manifesto exemplifies an alternative pathway where innovation is co-developed, transparent, and
ethically aligned with journalistic values, an approach aligned with calls for algorithmic accountability
and value-sensitive design [
        <xref ref-type="bibr" rid="ref4">32, 4</xref>
        ]. Thus, journalists acknowledge that the responsible use of AI [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] will
ultimately depend on human choices and on social, economic, and political factors. Accordingly, they have
articulated governance priorities and potential solutions to responsibly integrate LLMs and
other AI-based systems into the public arena. Italian journalists claim the need for newsroom control over
AI-generated content and data, highlight the importance of editorial oversight, traceability, and human
intervention, and claim a major role in driving innovation in the sector. Moreover, they stress the
necessity of interdisciplinary experimentation and research to actively understand the limitations of
these tools, develop new skills, and implement training programs. Nevertheless, assembling dedicated
teams may not be enough if corporate AI-related decisions, as in the case of La Repubblica, are perceived
as being against the interests of editorial staff, leading to union fronts and internal tensions within the
company.
      </p>
      <p>
        Overall, journalists call for structured interventions and an integrated approach that provides
multilevel support to news organizations in order to truly exploit the potential of AI. Regarding the third
question, journalists articulate sociotechnical imaginaries of AI that highlight existential threats to
the public arena. This is particularly relevant for democracy, understood not only in deliberative
or procedural terms, but above all as a deep communicative process that can also be conflictual, as
suggested by Sustain [53] and Coeckelbergh [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Particularly in the latter account, the philosopher Mark
Coeckelbergh, drawing on the theories of John Dewey, Jürgen Habermas and Iris Marion Young, argues
that “communication is what democracy is all about”, defining communication as a way of living together
based on understanding, recognition and mutual transformation. This conception envisages democracy
as a form of life, in which communication is not merely instrumental but constitutive of democracy itself.
This is consistent with the idea of the centrality for our democracies of a “public arena”, understood as the
ensemble of “media infrastructures that enable and constrain the publication, distribution, reception, and
contestation of information that allow people to exercise their rights and duties as citizens”, while also
mediating the relations between different actors in societies (élites, citizens, civil society, companies) [34].
LLMs are seen as not only altering the production and distribution of information but also redefining
gatekeeping functions, amplifying asymmetries of power, and undermining democratic deliberation.
Concerns over disinformation, opacity, disintermediation, and the growing influence of platform-based
infrastructures evoke dystopian scenarios in which information flows are increasingly controlled by
non-transparent algorithms and private corporations. Ultimately, this study highlights that generative
AI is not merely disrupting journalism, but it is becoming a site of contestation where competing visions
of information, labor, and democracy are at stake. Whether this transition strengthens or weakens
democratic communication will depend on the ability of journalists, policymakers, and institutions to
forge inclusive, ethical, and context-sensitive strategies of integration.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Limitations and future work</title>
      <p>This work is subject to several limitations. First, it focuses exclusively on Italian journalism and does not
offer a comparative perspective. Second, although the interviewees were selected for their significant
insights—due to their institutional roles or first-hand experience with the technologies in question—the
sample remains limited. Third, the sample is unbalanced towards male respondents, a fact which could
reflect male journalists being more engaged with innovation - owing to complex cultural and social
factors intersecting gender and power in labour - or simply the limited and non-representative nature
of the sample. Fourth, the study could be expanded by widening the range of
interviewees, including freelance journalists, early-career professionals, and staff from local newsrooms.
This would allow for a more comprehensive understanding of the integration of generative systems in
journalism, incorporating perspectives from more precarious segments of the profession and from roles
that, while editorially relevant, are not strictly journalistic. In addition, integrating more structured
methods of data collection—such as questionnaires—could provide a valuable complement to qualitative
interviews, enabling a more comprehensive and systematic account of journalists’ perspectives on how
technology is impacting their work.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Appendix</title>
      <p>Head of digital and AI projects, journalist
Member of the Board, journalist
AI architect and data scientist, MD
Lead Researcher in language, semantics and knowledge representation</p>
      <p>Gender of the 13 respondents: M, M, M, F, M, M, M, M, M, M, M, M, M.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used X-GPT-4 in order to: Grammar and spelling check.
After using these tool(s)/service(s), the author reviewed and edited the content as needed and takes full
responsibility for the publication’s content.</p>
      <p>University Press, 2019.
[22] T. F. Waddell, A robot wrote this?, Digital Journalism 6 (2018) 236–255. doi:10.1080/21670811.</p>
      <p>2017.1384319.
[23] K. N. Dörr, Mapping the field of algorithmic journalism, Digital Journalism 4 (2016) 700–722.</p>
      <p>doi:10.1080/21670811.2015.1096748.
[24] M. Kotenidis, A. Veglis, Algorithmic journalism: Current applications and future perspectives,</p>
      <p>Journalism and Media 2 (2021) 244–257. doi:10.3390/journalmedia2020014.
[25] N. Helberger, On the democratic role of news recommenders, Digital Journalism 7 (2019) 993–1012.</p>
      <p>doi:10.1080/21670811.2019.1623700.
[26] L. Dierickx, C.-G. Linden, A. L. Opdahl, Automated fact-checking to support professional practices:
Systematic literature review and meta-analysis, International Journal of Communication 17 (2023)
5170–5190.
[27] K. Dörr, L. Hollnbuchner, Ethical challenges of algorithmic journalism, Digital Journalism 5 (2017)
404–419. doi:10.1080/21670811.2016.1167612.
[28] A. K. Schapals, C. Porlezza, Assistance or resistance? evaluating the intersection of automated
journalism and journalistic role conceptions, Media and Communication 8 (2020) 16–26. doi:10.
17645/mac.v8i3.3054.
[29] B. Garcìa-Orosa, J. Canavilhas, J. Vázquez-Herrero, Algorithms and communication: a systematized
literature review, Media Education Research Journal (2023).
[30] B. Jones, E. Luger, R. Jones, Generative ai and journalism: A rapid risk-based review, 2023. Preprint
or internal report.
[31] F. M. Simon, Rationalisation of the news: How ai reshapes and retools the gatekeeping processes
of news organisations in the united kingdom, united states and germany, New Media &amp; Society 0
(2025). URL: https://doi.org/10.1177/14614448251336423. doi:10.1177/14614448251336423.
[32] G. Romeo, E. Griglié, Ai ethics and policies: Why european journalism needs more of both, in:</p>
      <p>J. Mokander, M. Ziosi (Eds.), The 2021 Yearbook of the Digital Ethics Lab, Springer, 2022.
[33] C. W. Anderson, A. Valeriani, Re-considering journalism as an ecosystem… introduction, Problemi
dell’informazione 1 (2023) 3–12. doi:10.1445/106767.
[34] A. Jungherr, R. Schroeder, Digital Transformation of the Public Arena, Cambridge University Press,
2022.
[35] A. Jungherr, Artificial intelligence and democracy: a conceptual framework, Social Media +</p>
      <p>Society (2023). doi:10.1177/20563051231186353.
[36] M. Castells, The network society revisited, American Behavioral Scientist 67 (2022). doi:10.1177/
00027642221092803.
[37] S. Papathanassopoulos, I. Giannouli, I. Archontaki, The media in southern europe: Continuities,
changes and challenges, in: S. Papathanassopoulos, A. Miconi (Eds.), The Media Systems in Europe:
Continuities and Discontinuities, Springer International Publishing, Cham, 2023, pp. 133–162.
[38] A. Baroni, G. Rigoni, Afective polarisation: The use of emotional language by italian news outlets
on twitter, Mediascapes Journal 23 (2024) 46–66.
[39] P. Mancini, The italian public sphere: a case of dramatized polarization, Journal of Modern Italian</p>
      <p>Studies 18 (2013) 335–347.
[40] E. De Blasio, R. Rega, M. Valente, Polarization and platformization of news in italian journalism:
The coverage of migrant worker regularization, in: D. Palau-Sampio, G. López García, L. Iannelli
(Eds.), Contemporary Politics, Communication, and the Impact on Democracy, IGI Global, 2022,
pp. 74–92.
[41] M. F. Murru, S. Carlo, AI e newsmaking: Un’indagine esplorativa nelle redazioni nazionali e locali
italiane, Mediascapes Journal 23 (2024) 184–198.
[42] S. Spillare, M. Bonazzi, P. D. Esposti, AI imaginaries and narratives in the Italian public discourse: The impact of ChatGPT, Im@go (2024).
[43] P. D. Esposti, L. Tirabassi, The human-algorithmic entanglement in the news realm, Problemi
dell’informazione 1 (2024) 41–64. doi:10.1445/113228.
[44] A. Schütz, The Phenomenology of the Social World, Northwestern University Press, 1967.
[45] M. Weber, Economy and Society: An Outline of Interpretive Sociology, University of California Press, 1978.
[46] S. Jasanoff, S. H. Kim, Sociotechnical imaginaries: Insights from the history of science and
technology, Science as Culture 22 (2013) 233–256.
[47] A. Mager, C. Katzenbach, Future imaginaries in the making and governing of digital
technology: Multiple, contested, commodified, New Media &amp; Society 23 (2021) 223–236. doi:10.1177/1461444820929321.
[48] R. Corsi, E. d’Albergo, La politica dell’intelligenza artificiale general purpose: immaginari
sociotecnici, democrazia e policy frame nel processo decisionale della regolazione europea (“EU AI Act”
2022–2024), Im@go 23 (2024) 109–130.
[49] C. B. Sanders, J. Chan, Methodological reflections on researching the sociotechnical imaginaries
of AI in policing, in: S. Lindgren (Ed.), Handbook of Critical Studies of Artificial Intelligence, 2023,
pp. 773–782.
[50] S. Kvale, Interviews: An Introduction to Qualitative Research Interviewing, Sage, 1996.
[51] V. Braun, V. Clarke, Using thematic analysis in psychology, Qualitative Research in Psychology 3
(2006) 77–101.
[52] D. Bianchi, Giornalismo e intelligenza artificiale: Aspetti giuridici e normativi, in: Report
Osservatorio Giornalismo Digitale 2024, Ordine dei Giornalisti, 2024. URL: https://www.odg.it/osservatorio-sul-giornalismo-digitale, accessed January 30, 2025.
[53] C. R. Sunstein, #Republic: Divided Democracy in the Age of Social Media, Princeton University
Press, Princeton, NJ, 2017.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Cools</surname>
          </string-name>
          ,
          <string-name>
<given-names>B.</given-names>
<surname>Van Gorp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
<surname>Opgenhaffen</surname>
          </string-name>
          ,
<article-title>Where exactly between utopia and dystopia? A framing analysis of AI and automation in US newspapers</article-title>
          ,
          <source>Journalism</source>
          <volume>5</volume>
          (
          <year>2022</year>
          )
          <fpage>3</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Fletcher</surname>
          </string-name>
          ,
          <string-name>
<given-names>R. K.</given-names>
            <surname>Nielsen</surname>
          </string-name>
          ,
<article-title>What does the public in six countries think of generative AI in news?</article-title>
          ,
          <source>Reuters Institute for the Study of Journalism</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Ioscote</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gonçalves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Quadros</surname>
          </string-name>
          ,
<article-title>Artificial intelligence in journalism: a ten-year retrospective of scientific articles (2014-2023)</article-title>
,
<source>Journalism and Media</source>
<volume>5</volume>
(
<year>2024</year>
). doi:10.3390/journalmedia5030056.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Diakopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Cools</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Helberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rinehart</surname>
          </string-name>
,
<string-name>
<given-names>L.</given-names>
<surname>Gibbs</surname>
</string-name>
,
          <article-title>Generative AI in Journalism: The Evolution of Newswork and Ethics in a Generative Information Ecosystem</article-title>
          , Associated Press, New York, USA,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Cools</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Diakopoulos</surname>
          </string-name>
          ,
<article-title>Uses of generative AI in the newsroom: Mapping journalists' perceptions of perils and possibilities</article-title>
,
<source>Journalism Practice</source>
(
<year>2024</year>
). doi:10.1080/17512786.2024.2394558.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Simon</surname>
          </string-name>
          ,
<article-title>Artificial intelligence in the news: How AI retools, rationalizes, and reshapes journalism and the public arena</article-title>
          ,
          <source>Columbia Journalism Review</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.</given-names>
            <surname>Beckett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yaseen</surname>
          </string-name>
,
<article-title>Generating Change: A Global Survey of What News Organisations Are Doing with Artificial Intelligence</article-title>
          , JournalismAI, London School of Economics, London, UK,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Iannuzzi</surname>
          </string-name>
          ,
          <article-title>Intelligenza artificiale nelle redazioni italiane</article-title>
          ,
<source>in: Report Osservatorio Giornalismo Digitale 2024</source>
, Ordine dei Giornalisti,
<year>2024</year>
. URL: https://www.odg.it/osservatorio-sul-giornalismo-digitale, accessed January 30, 2025.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
[9] Garante per la protezione dei dati personali,
<article-title>ChatGPT, il Garante privacy chiude l'istruttoria. OpenAI dovrà realizzare una campagna informativa di sei mesi e pagare una sanzione di 15 milioni di euro</article-title>
,
<source>Comunicato stampa</source>
,
<year>2024</year>
. URL: https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/10085432, Roma, 20 dicembre 2024. Accessed June 1, 2025.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bianchi</surname>
          </string-name>
          ,
          <article-title>Giornalismo e intelligenza artificiale: Aspetti giuridici e normativi</article-title>
          ,
<source>in: Report Osservatorio Giornalismo Digitale 2024</source>
, Ordine dei Giornalisti,
<year>2024</year>
. URL: https://www.odg.it/osservatorio-sul-giornalismo-digitale, accessed June 1, 2025.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <article-title>Ordine dei Giornalisti, Consiglio Nazionale, Codice Deontologico delle Giornaliste e dei Giornalisti, Approvato dal Consiglio nazionale dell'Ordine dei giornalisti nella seduta dell'11 dicembre</article-title>
          <year>2024</year>
          ,
          <year>2024</year>
          . URL: https://www.odg.it/wp-content/uploads/2024/12/ Codice-deontologico
          <article-title>-approvato-dal-</article-title>
          <source>CN-11.12</source>
          .
          <year>2024</year>
          _def.pdf, accessed:
          <fpage>2025</fpage>
          -06-01.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jungherr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schroeder</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence and the public arena</article-title>
          ,
          <source>Communication Theory</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Coeckelbergh</surname>
          </string-name>
,
<article-title>LLMs, Truth, and Democracy: An Overview of Risks</article-title>
,
<source>Science and Engineering Ethics</source>
<volume>31</volume>
(
<year>2025</year>
). URL: https://doi.org/10.1007/s11948-025-00529-0. doi:10.1007/s11948-025-00529-0.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Coeckelbergh</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence, the common good, and the democratic deficit in ai governance</article-title>
          ,
<source>AI and Ethics</source>
(
<year>2024</year>
). doi:10.1007/s43681-024-00492-9.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Winner</surname>
          </string-name>
,
<article-title>Do artifacts have politics?</article-title>
,
          <source>Daedalus</source>
          <volume>109</volume>
          (
          <year>1980</year>
          )
          <fpage>121</fpage>
          -
          <lpage>136</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>W. E.</given-names>
            <surname>Bijker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. J.</given-names>
            <surname>Pinch</surname>
          </string-name>
          ,
<source>The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology</source>
          , MIT Press,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
<given-names>D.</given-names>
<surname>MacKenzie</surname>
</string-name>
,
<string-name>
<given-names>J.</given-names>
<surname>Wajcman</surname>
</string-name>
,
<source>The Social Shaping of Technology: From the High Tech to the Human Touch</source>
          , Polity Press,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Hallin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mancini</surname>
          </string-name>
          ,
          <source>Comparing Media Systems: Three Models of Media and Politics</source>
          , Cambridge University Press, New York,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>L.</given-names>
            <surname>Sartori</surname>
          </string-name>
          ,
<string-name>
<given-names>G.</given-names>
<surname>Bocca</surname>
</string-name>
,
<article-title>Minding the gap(s): public perceptions of AI and sociotechnical imaginaries</article-title>
          ,
          <source>AI &amp; Society</source>
          <volume>38</volume>
          (
          <year>2023</year>
          )
          <fpage>443</fpage>
          -
          <lpage>458</lpage>
. doi:10.1007/s00146-022-01422-1.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>C.</given-names>
            <surname>Porlezza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Schapals</surname>
          </string-name>
,
<string-name>
<given-names>L.</given-names>
<surname>Pranteddu</surname>
</string-name>
,
<article-title>Beyond boosterism: New questions and approaches regarding AI and automation in journalism</article-title>
          ,
          <source>Problemi dell'Informazione</source>
          <volume>1</volume>
          (
          <year>2024</year>
          )
          <fpage>3</fpage>
          -
          <lpage>16</lpage>
. doi:10.1445/113226.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>N.</given-names>
            <surname>Diakopoulos</surname>
          </string-name>
,
<source>Automating the News: How Algorithms Are Rewriting the Media</source>
, Harvard University Press,
<year>2019</year>
.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>