<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Ital-IA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Emanuele Fulvio Perri</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elio Grande</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Pisa, Largo Bruno Pontecorvo</institution>
          ,
          <addr-line>3, 56127, Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>4</volume>
      <fpage>29</fpage>
      <lpage>30</lpage>
      <abstract>
<p>This work concerns the social side of trustworthiness in the context of Large Language Models (LLMs), viewed from two congruent angles. The first section, drawing on a passage from The Science of Logic by G. W. F. Hegel, proposes a qualitative and semantic interpretation of the origin of the so-called “emergent abilities” of LLMs, which are deemed something more complex than a trivial deceit. The second section, instead, concerns the trustworthiness and responsibility of LLMs from an ethical and phenomenological perspective, proposing a parallel between the issue of the extended mind and generative transformers as a cognitive extension. The focus lies on the repercussions of intensive utilization, which can be summarized in the concepts of cognitive depletion and digital dementia, leading to a debasement of precious human qualities - creativity, attention, interpretational ability. Our suggestion, then, while first of all trusting - because we have to trust - the critical sense of human users, is directed towards some kind of ethics of AI to be introduced at the K-12 level. Our aim remains the hoped-for design of a peaceful coexistence.</p>
      </abstract>
      <kwd-group>
<kwd>Generative AI</kwd>
        <kwd>emergent abilities</kwd>
        <kwd>extended mind</kwd>
        <kwd>hallucinations</kwd>
        <kwd>cognitive depletion</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
A deviation occurred at the last mile of the long
run toward the approval of the Artificial Intelligence Act,
because of an unexpected technological evolution: the
so-called Foundation Models, generative artificial
intelligence devices made of deep neural networks
good enough to produce coherent responses to input
prompts, handling many typologies of data and
in particular processing natural language across
diverse conceptual and linguistic domains. The
definitive text of the AI Act – see in particular Article
51 and Annex XIII – locates some criteria of
“systemic risk” for general-purpose models, among
other things, in the number of parameters of the
models, in the quality and dimension of the datasets and,
above all, in the compute necessary for training, setting
the presumptive risk threshold at 10^25 FLOPs [<xref ref-type="bibr" rid="ref16">16</xref>]. We
will follow some suggestions regarding the origin of
the so-called “emergent abilities” of Large Language
Models (LLMs), developing them through some
considerations about the extensions of the mind. If
there is one character that is a bearer of risk in LLMs, it
is their everyday pervasiveness. From Una domanda
impossibile ad Artemisia Gentileschi [“An impossible
question to Artemisia Gentileschi”], the Turing test on
a sample of more than 1200 participants of various
ages and education levels, jointly conceived in
2023 by the Departments of Computer Science and of
Civilization and Forms of Knowledge of the University
of Pisa, it emerged that 31.5% of participants were
fooled by ChatGPT 3.5 when listening, and as many as
43.5% when reading [<xref ref-type="bibr" rid="ref6">6</xref>], when trying to
recognize which written composition had been
produced by a human. The point, however, is not so
much whether to grant trust, but rather how and why. We
will not propose here a general design model to
adequately mitigate the systemic risk produced by
LLMs: too hard a task. We will rather go ghost
hunting, attempting to get closer to the nature of the
deception, hoping to take a small step towards
trustworthy modes of utilization of the currently
available devices.
      </p>
    </sec>
    <sec id="sec-1b">
      <title>2. Ars artificialiter scribendi</title>
      <p>
        In The Gutenberg Galaxy [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], noting with
Umberto Boccioni how we were (and still are, we add
here) primitives of a new culture – the organic one of
the electronic age which would have dulled the human
consciousness in the period of its first interiorization
– Marshall McLuhan remembered that the first name
of the typographic printing press was “ars artificialiter
scribendi” (p. 187). Were it not for the Latin, it would seem
coined yesterday. A way of writing, then, an art, a
practical acting in the same domain of manual writing,
which nonetheless had the taste of an artifice. An art
of the artificial or, better, an art of elaborating a
certain kind of data – in this case, alphabetic
characters – in an artificial manner.
      </p>
      <p>
If the printing press in fact replaced the inkpot in
the corporeal movements of the hand, though not in
the intentions, developing LLMs is instead an “ars
artificialiter scribendi” whose products appear to take
over the alphabet itself, producing writing that is
dialogical or even, paradoxically, oral. Given that we
can hardly help ascribing personality to it, it would
seem to be a fine seduction strategy. Simone Natale [<xref ref-type="bibr" rid="ref15">15</xref>] reminds us of
Eliza, the chatbot invented in the Sixties by Joseph
Weizenbaum, and underlines the dramaturgical design,
according to some “script”, in the responses of the new
chatbots, speaking of a trivial deceit because it is not
perceived as such and is plunged into everyday life.
      </p>
      <p>
However, it is not just this. Three technological
breakthroughs allowed the birth of LLMs: the
representation of the meaning of words through
embeddings, an attention mechanism to catch
connections among the words themselves, and the
implementation of transformers [<xref ref-type="bibr" rid="ref3">3</xref>]. So, either some
mathematics of language does exist, such that LLMs
take possession of meaning – which therefore stops
being «structured by fore-having, fore-sight, and
fore-conception, […] the upon which of the project in terms
of which something becomes intelligible as
something» [<xref ref-type="bibr" rid="ref10">10</xref>] (p. 142) – or it would concern a
correlate of language itself on a parallel platform.
      </p>
      <p>(Our thanks to our friend Simone Farinella, PhD in the history of philosophy, for his precious advice on the choice of the passage from the Hegelian work quoted in this section.)</p>
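      <p>To make the first of these ingredients concrete, the minimal sketch below (purely illustrative: a hypothetical toy vocabulary, random vectors in place of learned embeddings, and a single scaled dot-product attention step, not the architecture of any specific model) shows how embeddings and attention let one word weigh the others in its context.</p>
      <preformat>
# Minimal sketch of word embeddings plus scaled dot-product attention,
# the mechanism at the core of the transformer. Illustrative only: real
# LLMs use learned embeddings, learned projections, many heads and layers.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["art", "of", "writing", "artificially"]
d = 8                                                  # toy embedding dimension
embeddings = {w: rng.normal(size=d) for w in vocab}    # one vector per word

def attention(query, keys, values):
    """Weigh the value vectors by the similarity between the query and the keys."""
    scores = keys @ query / np.sqrt(len(query))        # scaled dot products
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()                  # softmax
    return weights @ values, weights

sentence = ["art", "of", "writing"]
X = np.stack([embeddings[w] for w in sentence])        # the sequence of embeddings
context, weights = attention(embeddings["writing"], X, X)
print(dict(zip(sentence, weights.round(3))))           # how much "writing" attends to each word
      </preformat>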
      <p>
Nothing, however, would lead us to think that artificial
intelligence presents the fundamental property (which
once was) of the soul, «a being which in conformity with its
kind of being is suited to “come together with any
being whatsoever”» [<xref ref-type="bibr" rid="ref10">10</xref>] (p. 12), as much as the
unpredictable phenomenon of the emergent abilities
of LLMs does. Wei et al. [<xref ref-type="bibr" rid="ref21">21</xref>] define “emergence”, with
the Nobel Prize winner Philip Anderson, as qualitative
mutations in a system arising from quantitative
mutations [<xref ref-type="bibr" rid="ref2">2</xref>]. Usually, they write, scaling laws allow one
to foresee the effects of scale on systems’ performance.
However, at least with respect to some downstream
tasks, putting the LLMs’ scale on the x-axis (measured
by compute, though the number of parameters and the
dataset dimension are also useful indexes thereof) and
performance on the y-axis, the curve does not grow
gradually but undergoes sudden variations once a
certain threshold has been passed. «Note» – key point
– «that the scale at which an ability is first observed to
emerge depends on a number of factors and is not an
immutable property of the ability». Under the
category of few-shot prompting – that is, tasks
apparently learned after a very small number of input
instructions in the guise of teachings – falls, for
example, the ability to reply in a truthful way or to map
conceptual domains. Some performance measures,
according to more than one metric, are reported by
Wei et al. with respect to various typologies of LLMs
(LaMDA, GPT-3, Gopher, etc.), and the phenomenon of
emergence appears multiple times, though not always,
with a threshold comprised between 10^22 and
10^25 FLOPs. These are certainly tasks akin to human
intellectual capabilities. However, the missing
steadiness and univocity, across different
architectures, of the threshold to cross for an ability to
emerge lets us suspect that the emergence of new
qualities in the behavior of such models is, yes,
correlated with quantitative increments of compute,
parameters, etc., but not strictly caused by them. There
is a semantic threshold beyond which the parts of a
collection (the ancient Greeks would have used here
the term pân) are subsumed, harmonizing, into a whole
(in Greek: olòn) where every branch, every connection
finds a proper meaning. A qualitative, or at least not
quantitative, threshold, as in the sorites paradox
of Eubulides of Miletus: a gap between different
dimensions. It might perhaps be useful, in order to
make the point on this logical mechanism, to reflect on a
passage from The Science of Logic by G. W. F. Hegel:
«Whenever all the conditions of a fact are completely
present, the fact is actually there; the completeness of
the conditions is the totality as in the content […]. In
the sphere of the conditioned ground, the conditions
have the form (that is, the ground or the reflection that
stands on its own) outside them, and it is this form
that makes them moments of the fact and elicits
concrete existence in them» [<xref ref-type="bibr" rid="ref9">9</xref>] (p. 483). His aim was
to rationalize accidentality (nowadays we could
speak of data to be correlated) within unique schemes,
the “things”, making “real” some things which are merely
possible. A dimensional gap, indeed, born from the
crossing of a quantitative threshold – the
completeness of the conditions, which by themselves
remain accidental. The problem of the
representativity of data lies just around the corner.
      </p>
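      <p>As a purely numerical illustration of why the scale at which an ability seems to emerge need not be an immutable property of that ability, the toy sketch below assumes, hypothetically, that per-token quality improves smoothly with compute, while a task only counts as solved when an entire k-token answer is correct; the apparent jump then shifts with the length of the answer and with the chosen metric. It is an assumption-laden sketch, not an analysis of the models discussed by Wei et al.</p>
      <preformat>
# Toy illustration: a smooth quantitative improvement can look like a sudden
# qualitative jump, and the apparent threshold is not fixed. Hypothetical
# assumptions: per-token correctness rises smoothly with log-compute, and a
# task is "solved" only if all k tokens of the answer are correct.
import math

def per_token_quality(flops):
    """Smoothly increasing probability of getting a single token right."""
    x = math.log10(flops)
    return 1.0 / (1.0 + math.exp(-(x - 22.0)))         # centered near 1e22 FLOPs

def exact_match(flops, k):
    """Probability that a whole k-token answer is entirely correct."""
    return per_token_quality(flops) ** k

for k in (2, 10, 40):                                  # tasks of growing answer length
    print(f"answer length k={k}")
    for exp in range(20, 26):
        print(f"  1e{exp} FLOPs: exact match = {exact_match(10.0 ** exp, k):.3f}")
      </preformat>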
      <p>Can an extended net of sequences, like for example
the hypertext (obviously, simplifying) called “the
web”, overcome that critical mass and reflect, and
adequate itself to, a systematic whole, a semantic olòn,
a complex of signifiers? We would be tempted to reply
positively: the web is our Zeitgeist. It contains
analogies, column additions, sentiments, errors: the
patterns recognized by the emergent abilities of LLMs.
Supposing we train a model – like a transformer
endowed with 175 billion parameters – on such a net
of sequences as a dataset, won’t such patterns or
subpatterns emerge? Without, among other things, real
learning: the model runs in inference mode.</p>
      <p>However, it was said that conditions – translated:
correlations among data – have their ground outside
themselves. The model just computes. It has only a
surrogate intelligence, and even a large number of
parameters cannot produce such an improvement in
quality. But might it be good enough to mirror the
improvement in quality originally lying in the data’s
semantics? If so, we could perhaps explain why, to
whoever reads on the screen, a string will seem a reply,
two a discourse, and a thousand a writer, although the
LLM actually speaks alone, according to a hierarchy of
the most probable terms.</p>
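      <p>That “hierarchy of the most probable terms” can be pictured with a deliberately naive sketch: a toy bigram model, built on a hypothetical handful of words, which ranks candidate continuations by an estimate of P(w|h), the probability of a word w given the preceding word h. Real LLMs condition on long contexts through a neural network, but the generative principle, emitting the next term according to a probability ranking, is the same.</p>
      <preformat>
# Toy bigram model: given the previous word h, rank candidate words w by an
# estimate of P(w | h). The corpus is a hypothetical handful of words; real
# LLMs condition on a long context, but generate by the same principle.
from collections import Counter

corpus = ("the art of writing the art of printing "
          "the art of the artificial art of writing").split()

# Count how often each word follows each history word.
following = {}
for h, w in zip(corpus, corpus[1:]):
    following.setdefault(h, Counter())[w] += 1

def next_word_distribution(h):
    """Estimated P(w | h), ranked from the most to the least probable term."""
    counts = following.get(h, Counter())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common()}

print(next_word_distribution("of"))   # e.g. {'writing': 0.5, 'printing': 0.25, 'the': 0.25}
      </preformat>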
    </sec>
    <sec id="sec-2">
      <title>3. “Somatization” of LLMs: rethinking ethics of generative AI from a phenomenological perspective</title>
      <sec id="sec-2-1">
        <title>Continuing the use of the ethical-philosophical lens to study the implications of irresponsible use of LLMs (such as GPT-x, LaMDA, LLaMA, Gemini, etc.), it seems interesting and above all useful to fetch Andy</title>
        <p>
          Clark and David Chalmers’ brilliant phenomenological
formulations of the concept of extended mind [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] and
Kim Sterelny’s concept of scaffolded mind [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
Thinking of responsible LLMs according to the
standard framework (transparency, fairness, privacy,
etc.), it is appropriate to ask whether a stable social
trust in such technologies is not promptly impeded
due to a misconception of generative artificial
intelligence itself. Clark and Chalmers, in their
well-known work The Extended Mind, bring up the example
of “Otto's notebook”: Otto is a patient with
Alzheimer's disease who, to cope with daily
mnemonic challenges, relies on a notebook in which
he is used to jotting down, and from which he retrieves,
information that he is no longer aware of due to his disease. The “analog”
relationship between Otto and his notebook turns
into dependence—a blind reliance; Otto's life
memories are scattered across the pages of his
notebook, which is the only accessible resource for
reporting on a past and being aware of the present.
The phenomenology of the notebook lies in its being
much more than an external resource while retaining
its original ontological status: the notebook is a
cognitive extension, a ramification of Otto’s mind and,
even, a supplement to his memory. Kim Sterelny picks
up on Clark and Chalmers by introducing what is a
full-fledged fair corrective: the notebook, being
physically outside the body, cannot extend cognitive
capacities while also guaranteeing the same degree of
reliability as the resource it replaces (that is, memory)
and, therefore, its function is somewhat to support
it—to scaffold it [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. In other words: external
(informational, data-related, executive, …) resources should
not be considered reliable to the same extent as
internal resources since, even though external ones
collaborate in dense mental associations, they are
disembodied and indirectly managed. Certainly, due
to mental plasticity, there are several pros of
incorporating external adjuvant resources within the
cognitive system—the notebook supplants memory,
the cane mitigates claudication, the lens enhances
vision, etc.—, but the cons, on a risk-benefit scale, are
significant: (1) reliance on the external resource is
inherently fallible, since the same degree of
integrity as the internal resource cannot be
guaranteed; (2) exposure to the risk of sabotage of the
external resource is substantial, both in the sense of
environmental conditioning and in the (rarer, but not
negligible) sense of targeted attacks; (3) in cases of
substitution of the internal resource with an external
one, an acceleration of the depletion of the already
damaged internal system can be expected, causing its
ultimate downfall. Within this frame the relationship
between the internal and external environment and the
environmental niche takes shape—under the same
risky conditions under which sentient beings gain a
being-in-the-world [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. The reflections advanced
thus far soon make sense if we reimagine the
(progressively obsolescent) concept of
human-machine interaction (HMI) from a phenomenological
perspective: an environmental niche hinged on the
relationship between digital system (a computer, a
model, etc.) and organic system. LLMs, according to
this interpretation, are the external resource—so
appealing, so addictive, so affordable—with which we
compensate for the most “human” of flaws—executory
promptness, memory capacity, mundane
transiency—at the risk of a self-inflicted depletion.
        </p>
        <p>
Closely related to this point is the risk of an only
apparently reliable AI: the cognitive depletion
triggered by a gradual (and not totally voluntary)
renunciation of creative and cognitive capacities,
which today goes hand in hand with the so-called
deskilling; we fall into what Manfred Spitzer [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] calls
digital dementia: an over-reliance on technologies that
show the potential to replace human capacities can
induce a decrease in the cognitive capacities for
information processing and creative production
(think of imagination), entailing symptoms close to
those of dementia, which regress only very slowly once
the use of the given technology is suspended. Spitzer
writes in Information technology in education: risks
and side effects [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] about neuroplasticity and the use
of technology in learning:
        </p>
        <p>«Given what we know about neuroplasticity, i.e.,
learning and the brain, it is hard to believe that some
education practitioners and policy makers still believe
that reducing cognitive load is beneficial for the
learner. Quite the opposite is the case: The more effort
you have to take, the better the learning outcome» (p.
84).</p>
        <p>
          What Spitzer remarks is the value of direct
experience, of concrete and hard doing, for a stable
imprint of the information; the full experience,
moreover, means taking the needed time—a
permission that our postmodern society “of
impatience” often does not grant. In short: doing,
taking the necessary time, on the one hand; outsourcing
everything all at once, on the other. The difference between
the two approaches is quali-quantitative and lies in
the permanence of the result, as well as in the result
itself. A similar warning comes from Federico Cabitza,
who writes about epistemic sclerosis [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]:
        </p>
        <p>«[...] AI machines, initially conceived to enhance
peculiar capacities of men “for the benefit of men” [...],
[have ended up] paradoxically to produce an opposite
effect [...] of disempowerment, according to a dynamic
already known to popular wisdom when it is said that
“the muscle that is not used, atrophies.” [...] we have
called this danger “epistemic sclerosis,” meaning [...]
the risk of losing the habit of exploring the unknown
and managing, also understood in terms of awareness,
tolerance and even appreciation, the uncertainty that
affects all our evaluations, estimates, predictions»
(pp. 80, 85; English translation by the authors).</p>
        <p>
          Cabitza’s is not an apologia for slow-working, nor
is ours meant to be an oracle-like dystopian invective
against GAI: it is, rather, about recognizing the
implications of LLMs on the future of creativity,
information, cultural production, and learning.
Cognitive depletion [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] arises not from balanced
coexistence with technology, but from replacement by
technology, as Adriano Fabris points out at UCSI, on
the topic of journalism and AI:
        </p>
        <p>
          «[...] at best, a deskilling [...], and at worst,
prospectively, a replacement of what these can do by
what the AI program can do faster and more fully»
(§2; English translation by the authors) [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
        </p>
      <p>
        Just as the notebook, referred to by Clark and
          Chalmers, throws Otto into a relationship of absolute
dependence and, virtually, worsens his memory
(sparing him the stresses of exertion), LLMs, with
their features simulating Gestaltic qualities, drag
users into a relationship of dependence that affects
not only the most time-consuming mechanical
activities, but also the most human and light ones
(drafting an e-mail, replying to a message, ...); what are
the long-term effects of a dependence of this
extent? At the beginning of this section we made a
reference to the fundamental unreliability of the
external resource when it has the function of a cognitive
extension, on the basis of three key points; those same three
points can be repurposed to contribute to a new
framework for responsible and reliable GAI; in the
present case, for example, considering a multimodal
transformer as an external resource (with a function
of cognitive extension, that is, of extended mind), it will,
if heavily used, necessarily have to produce adjuvant
effects—it will be notebook, will be cane, will be lens,
…—and other “castrating” ones: (a) in being an
external resource, it will not guarantee continuous
accessibility, (b) it will be subject to environmental
conditioning or manipulation—especially since
datasets are generally neither personal nor personally
inspectable/customizable (except for sparse
instances of RLHF like temporary slight changes in
model behavior based on user-expressed preferences
via A/B testing) —, (c) it will worsen cognitive
capabilities, which are already compromised [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] and
there will be instances of outright dependency. It is
evident, as the last decades of pocket electronics,
phenomenology and philosophy of mind teach (also
showing us several cases of so-called adaptive
phenotypic plasticity), that whatever technology
shows the prerequisites for cognitive extension is, in
the long run, detrimental to cognitive abilities and, by
extension, to a being-in-the-world that respects the
physiological alternation between sharing and reserve. In order to
build lasting social trust and ensure a healthy
coexistence with generative AI and whatever other
technology may come—this is also the EU's approach: «The
European Union's ethical approach to artificial intelligence is
intended to prompt ethical-humanistic reflection on global
technological progress» (Alpini, 2019, p. 6; translation by the
authors)
[
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]—it’s crucial to talk about ethics: while it is
necessary to ensure an ethics in AI, it seems more
important to work on an ethics of AI: introducing the
teaching of ethics (in general) and AI ethics as early as
K-12 [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] is the only way to lay the groundwork for a
truly accountable and reliable GAI. Admittedly, the
utterly interdisciplinary nature of such an endeavor is
well-known by this time; it remains, however, that
ethics and law are the only two cartridges to foster the
desired healthy coexistence. Given the “position
paper” nature of this contribution, it is worth
repeating that the writers’ intent is to emphasize the
importance of introducing ethics from the earliest
years of schooling: at stake is the replacement of
human creativity with generative sterility resulting
from the statistical prediction of language—P(w|h),
the probability of a word given its history, if we talk
about language modeling in NLP—, disrupting
not only the field of culture, but also the very criteria
of the aesthetic-artistic evaluation of written works. A
separate parenthesis must be opened regarding
bias management in generative AI—a hot topic in
the area of responsible AI practices. “GAI bias” means
the systematic trend of a generative model to return
outputs biased toward certain responses; the reasons
why this happens can be attributed to the dataset used
for training, to implicit assumptions during the
training itself, or even to biases inherent in our society
and thus reflected in the “answers” given by the
system.
        </p>
        <p>
          That of bias in transformers is often considered a
problem that we still need to solve interdisciplinarily,
a problem that undermines the path to “responsible
and reliable” GAI. The feeling is that we cannot see the
wood for the trees: the problem lies elsewhere,
outside the development and usage patterns of AI
systems; the biases are in the training data since they
mirror what our society has produced to date. To put
it another way: writing a prompt to a chatbot asking
it to write a text à la D.A.F. de Sade and ending
up complaining about a bias for the degrading
representation of women versus that of a violently
dominant man is laughable. It would seem right,
somewhat, to accept the biases for what they are:
reflections of what we have been; then, a GAI is all the
more reliably “responsible and trustworthy” when it
transparently represents a state of affairs, not when it
engages in embellishment. The new front in the struggle
for transparent AI is demystifying the fight against
bias; it has to do with the exercise of moral posture,
with confrontation (even unpleasant, so be it), with
history and characterial ideal types [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]—in the
Weberian sense of simplified idealization. While the
difference between character ideal type, persona (as a
unique combination of attributes defining a certain
individual), figural restitution and bias is sub sole, it is
not as clear (to many AI ethicists, but not only) that
the goals of transparency and trustworthiness are not
pursuable by purging bias: only a generalized
sensitivity to the use and consequences of generative
systems will be able to avert the big issues on the
horizon.
        </p>
    </sec>
    <sec id="sec-3">
      <title>4. Conclusion</title>
      <p>This paper has sought to explore the social side of
reliability and accountability with respect to the use
of large language models, providing a qualitative and
semantic reading of the origin of the so-called
“emergent abilities” of such generative models. The
analysis was supported by parallels between
the extended mind and AI-based transformers, gesturing towards
a more phenomenological approach to the problem of
GenAI misuses. Even if only for a few lines, we went
“ghost hunting”, motivated to seek, in the nature
of these systems, neither more nor less than what they
are.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgement</title>
      <p>"FAIR - Future Artificial Intelligence Research"
Spoke 1 "Human-centered AI", funded by the
European Commission under the NextGeneration EU
programme, PNRR.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Alpini</surname>
          </string-name>
          ,
          <article-title>Sull'approccio umano-centrico all'intelligenza artificiale. Riflessioni a margine del “Progetto europeo di orientamenti etici per una IA affidabile”</article-title>
          , «Comparazione e diritto Civile»,
          <year>2019</year>
          ,
          <volume>2</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name><given-names>P. W.</given-names> <surname>Anderson</surname></string-name>,
          <article-title>More is different: Broken symmetry and the nature of the hierarchical structure of science</article-title>,
          «Science», <year>1972</year>, <volume>177</volume>(<issue>4047</issue>): <fpage>393</fpage>-<lpage>396</lpage>.
          http://www.lanais.famaf.unc.edu.ar/cursos/em/Anderson-MoreDifferent-1972.pdf (accessed 21/04/2024).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Attardi</surname>
          </string-name>
          , Il Bello, il Brutto e il Cattivo dei LLM, «Mondo Digitale»,
          <year>2023</year>
          , June, 1-
          <fpage>16</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chalmers</surname>
          </string-name>
          , The extended mind,
          <source>«Analysis»</source>
          ,
          <year>1998</year>
          ,
          <volume>58</volume>
          (
          <issue>1</issue>
          ),
          <fpage>7</fpage>
          -
          <lpage>19</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><given-names>A.</given-names> <surname>Fabris</surname></string-name>,
          <article-title>Giornalismo e intelligenza artificiale: la questione etica di cui parla Adriano Fabris</article-title>,
          <source>Unione Cattolica della Stampa Italiana</source>, 10/02/2024.
          URL: https://www.ucsi.it/news/opinioni/14595-giornalismo-e-intelligenza-artificiale-laquestione-etica-di-cui-parla-adrianofabris.html (accessed 21/04/2024).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ferragina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Horvat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Morelli</surname>
          </string-name>
          , G. Prencipe, Filosofia interroga Arte,
          <article-title>Drammaturgia sfida IA</article-title>
          .
          <article-title>Due testi, due podcast, per rispondere alla domanda: scrittura umana o artificiale?</article-title>
          , «Mondo Digitale»,
          <year>2024</year>
          [forthcoming].
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <article-title>Intelligenza artificiale: L'uso delle nuove machine</article-title>
          , Bompiani,
          <year>Milano 2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name><given-names>E.</given-names> <surname>Grande</surname></string-name>,
          <article-title>LLMs: il surrogato dello Spirito del mondo</article-title>,
          «Fondazione Leonardo - Civiltà delle Macchine», 18/01/2024.
          https://www.civiltadellemacchine.it/it/newsand-stories-detail/-/detail/llms-surrogatospirito (accessed 19/01/2024).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G. W. F.</given-names>
            <surname>Hegel</surname>
          </string-name>
          , The Science of Logic, transl. G. Di Giovanni, Cambridge University Press,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Heidegger</surname>
          </string-name>
          ,
          <article-title>Being and Time. A translation of Sein und Zeit, transl</article-title>
          . J.
          <string-name>
            <surname>Stambaugh</surname>
          </string-name>
          , State University of New York Press,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hibou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tozy</surname>
          </string-name>
          ,
          <article-title>Ragionare per idealtipi. Comprendere con Weber lo Stato contemporaneo in Marocco… e altrove</article-title>,
          <source>«Cambio. Rivista sulle Trasformazioni Sociali»</source>,
          <year>2021</year>
          ,
          <volume>10</volume>
          (
          <issue>20</issue>
          ),
          <fpage>65</fpage>
          -
          <lpage>83</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , D. DiPaola, C. Breazeal,
          <article-title>Developing middle school students' AI literacy</article-title>
          ,
          <source>in Proceedings of the 52nd ACM technical symposium on computer science education</source>
          ,
          <year>2021</year>
          . pp.
          <fpage>191</fpage>
          -
          <lpage>197</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Manwell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tadros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Ciccarelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Eikelboom</surname>
          </string-name>
          ,
          <article-title>Digital dementia in the internet generation: excessive screen time during brain development will increase the risk of Alzheimer's disease and related dementias in adulthood</article-title>
          ,
          <source>«Journal of Integrative Neuroscience»</source>
          ,
          <year>2022</year>
          ,
          <volume>21</volume>
          (
          <issue>1</issue>
          ),
          <fpage>028</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>M. McLuhan</surname>
          </string-name>
          , La Galassia Gutenberg.
          <article-title>Nascita dell'uomo tipografico, transl</article-title>
          . S. Rizzo, Armando Editore,
          <year>Roma 1976</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Natale</surname>
          </string-name>
          ,
          <article-title>Macchine ingannevoli</article-title>
          . Comunicazione, tecnologia, intelligenza artificiale, transl. D. A.
          <string-name>
            <surname>Gewurz</surname>
          </string-name>
          , Giulio Einaudi Editore,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          Parlamento Europeo (<year>2024</year>),
          <article-title>Emendamenti del Parlamento Europeo alla proposta della Commissione. Regolamento (UE) 2024/… del Parlamento Europeo e del Consiglio del … che stabilisce regole armonizzate sull'intelligenza artificiale… (legge sull'intelligenza artificiale)</article-title>,
          (COM(2021)0206 - C9-0146/2021 - 2021/0106(COD)), 06/03/2024.
          https://www.europarl.europa.eu/doceo/document/A-9-2023-0188-AM-808-808_IT.pdf (accessed 21/04/2024).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>E. F.</given-names>
            <surname>Perri</surname>
          </string-name>
          ,
          <article-title>Generative artificial intelligence and creative-cognitive depletion: an ethical issue</article-title>
          .
          <article-title>Use and abuse of GAIs and GPTs in the field of culture and education</article-title>
          . IA, educación y medios de comunicación: modelo TRIC,
          <string-name>
            <surname>Dykinson</surname>
            <given-names>S.L.</given-names>
          </string-name>
          ,
          <year>Madrid 2024</year>
          , (preprint).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Spitzer</surname>
          </string-name>
          ,
          <article-title>Demenza digitale</article-title>
          .
          <article-title>Come la nuova tecnologia ci rende stupidi</article-title>
          ,
          <source>Corbaccio</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Spitzer</surname>
          </string-name>
          ,
          <article-title>Information technology in education: Risks and side effects</article-title>,
          <source>«Trends in Neuroscience and Education»</source>
          ,
          <volume>3</volume>
          (
          <issue>3-4</issue>
          ),
          <year>2014</year>
          ,
          <fpage>81</fpage>
          -
          <lpage>85</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>K.</given-names>
            <surname>Sterelny</surname>
          </string-name>
          , Minds: extended or scaffolded?.
          <source>«Phenomenology and the Cognitive Sciences»</source>
          ,
          <volume>9</volume>
          (
          <issue>4</issue>
          ),
          <year>2010</year>
          ,
          <fpage>465</fpage>
          -
          <lpage>481</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, W. Fedus,
          <article-title>Emergent Abilities of Large Language Models</article-title>,
          <source>«Transactions on Machine Learning Research»</source>, <year>August 2022</year>, arXiv:2206.07682v2 [cs.CL].
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>