<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Social sentience in neural language models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alessandro Acciai</string-name>
          <email>alessandro.acciai@studenti.unime.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pietro Perconti</string-name>
          <email>pietro.perconti@unime.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessio Plebe</string-name>
          <email>alessio.plebe@unime.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Messina, Department Of Cognitive Science</institution>
        </aff>
      </contrib-group>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
<p>This work explores the ability of Neural Language Models (NLMs) to produce and modulate "autobiographical" stories, thanks to their extensive exposure to social linguistic interactions, with a level of narrative coherence comparable to that of humans. Generative AI based on the Transformer architecture has demonstrated the ability to perform extraordinary tasks often considered exclusive to human cognitive abilities. The need to clarify the functioning of the algorithmic black box within Transformers, combined with the opportunity to use cognitive science tasks and tests in this investigation, has given rise to a significant field of studies aiming to bridge this explanatory gap. The term "machine psychology" refers to the administration of cognitive tests, typical of human cognition, to NLMs. Contributing to this debate, our proposal involves an empirical study on the modulation of autobiographical narrative coherence, an element widely used in cognitive psychology for studying aspects related to self-integrity and fragmentation, emotion modulation, worldview, and self-construction. We subjected OpenAI models to tasks requiring story production following a multi-level pre-induction framework, considering three variables: age, mood, and gender. The results demonstrate that NLMs are not only capable of simulating various aspects of human experience but can also adapt to the designated role and modulate their level of narrative coherence accordingly. This provides evidence of these artificial artifacts' ability to produce cognitively complex textual elaborations and suggests that the emergence of narrative awareness within the Transformer architecture, akin to the prelude to consciousness in humans, may be possible due to their overexposure to social linguistic interactions.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The current Neural Language Models (NLMs), derived from the successful invention of the
Transformer architecture [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], represent a peculiar and unusual type of entity, even from a scientific research
standpoint. They are the only non-biological entities capable of cognitive performances that, in many
respects, are surprisingly close to human ones. At the same time, they are man-made objects, but their
design does not clarify how their range of cognitive abilities is realized. Therefore, they require a
search for explanations, not unlike the research typically required by complex natural systems.
      </p>
      <p>
The Transformer model fundamentally represents a system that ensures highly efficient textual
processing by capturing the relationships between words within the produced and required text. Its
structure, based on simple linear algebra, allowed it to overcome the challenges faced by earlier
ANN-based systems. Firstly, it transforms words into vectors through word embedding [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], significantly
simplifying the manipulation of the semantic aspects of language. Secondly, the introduction of the
attention mechanism [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] allows for all words to be vectorized and presented simultaneously as input to
the architecture, which can track all the relationships between words during processing. Finally, the
problem of supervised learning was addressed by borrowing the autoencoder
technique [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], where the input is reproduced in the output, effectively aligning the encoder and
decoder.
      </p>
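      <p>As an illustration of the mechanism just described, the following sketch implements scaled dot-product attention over a toy set of word vectors. It is our own minimal NumPy example, not code from the models discussed; all names and dimensions are illustrative.</p>

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scores capture pairwise word-word relationships; each output row is a
    # weighted mix of value vectors, so every word "attends" to every other
    # word presented simultaneously as input.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # 4 word embeddings of dimension 8
out = attention(X, X, X)      # self-attention over the whole sequence
print(out.shape)              # (4, 8): one context-mixed vector per word
```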
      <p>
        Even though there is no underlying claim to simulate human cognitive functions, and without
any specific training in this regard, Transformer-based Neural Language Models have demonstrated
abilities that go far beyond translation and simple language processing [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The numerous similarities of NLMs with human cognitive performance in various cognitive
tests, combined with the lack of understanding of the mechanisms capable of supporting it, have
suggested applying to them the methods of the sciences that have traditionally focused on the mind:
psychology and cognitive science. This proposal has been
named "machine psychology" [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and it has quickly produced various important results [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
      </p>
      <p>
        This line of research has shown that the processing capabilities of NLMs cannot be explained solely
through the word prediction function [
        <xref ref-type="bibr" rid="ref9 ref10 ref11">9, 10, 11, 12</xref>
        ] and that, even through massive exposure to large
linguistic corpora, they can complete cognitive tasks that have so far been exclusive to humans [13].
      </p>
      <p>This work fits into this line of research, exploring this capability and arguing that it is not just
exposure to linguistic corpora, but also the significant presence of social linguistic interactions within
them, that forms one of the bases of the abilities demonstrated by NLMs.</p>
      <p>To support this hypothesis, our study utilizes a specific analysis of narrative production by human
subjects, aiming to outline certain mental characteristics based on the coherence traceable in the text.
Specifically, we will employ a well-established psychological analysis scheme, known as the Narrative
Coherence Coding Scheme (NaCCS) [14], to analyze the modulation of coherence in the stories produced
by the OpenAI family of NLMs. In particular, we prompted GPT-3.5 and GPT-4 to produce texts with three
variable prompts, thereby inducing variations in age, mood, and gender.</p>
      <p>In the first part of this work, we will explore the importance of narrative construction in the creation
of self-identity through its social character, and how this role is important in the study of consciousness.
In the second part, we will illustrate our experimental study on narrative coherence in NLMs and,
in light of the results and the literature examined, we will conclude with reflections on whether the
abilities demonstrated by NLMs suggest the presence of narrative awareness, a primordial form of
consciousness in human beings.</p>
    </sec>
    <sec>
      <title>2. Machine Consciousness from a Social-Narrative Perspective</title>
      <p>The attempt to understand consciousness within a scientific framework is a relatively recent endeavor,
only a few decades old. While some theories of consciousness, grounded in neuroscience and
computational models, such as the Global Neuronal Workspace Theory and Integrated Information Theory,
have drawn considerable attention within the consciousness research community, there remains no
universally accepted framework, and even the search for the neural correlates of consciousness has
yet to yield conclusive results [15]. Nevertheless, research into machine consciousness has continued
to develop [16, 17]. More recently, leveraging techniques derived from Neural Language Models
(NLMs), researchers have begun exploring whether deep learning-based cognitive architectures can
offer promising results in the realm of consciousness, as they have in language processing [18, 19]. It is
still too early to draw definitive conclusions about this research direction, but one characteristic stands
out as particularly interesting for the purposes of this paper.</p>
      <p>It seems that several models of artificial consciousness are socially oriented. This means that the
self-awareness we aim to model in machines appears to serve primarily social purposes. Consider, for
example, studies aimed at modeling inner speech in humanoid robots [20]. Although this capability can
improve conscious performance in tasks that are not directly related to sociality, such as passing the
Mirror Test [21], inner speech in humanoid robots generally appears to create an internal logical space
where the social consequences of various possible actions can be simulated offline before one is chosen
and executed. Observing this type of behavior supports the social hypothesis of self-consciousness,
which proposes that self-consciousness primarily serves social cognition purposes [22, 23]. The ability
to represent oneself as a character in one’s own life is a very common and natural way of exercising
self-consciousness and situating the individual within a real or imagined social network. In other
words, narrativity and its social character is a key component of self-consciousness. However, it is
only one of the ways in which self-consciousness happens and contributes to shaping one’s singular
personality, alongside episodic and sentimental personality types [24]. Investigating how the capacity
to construct narratives plays a central role in the stream of consciousness and reflexive reasoning is
crucial for advancing machine consciousness. This is why, in the spirit of machine psychology, we hold
the conviction that testing what we think we know about the human mind on machines, and vice
versa, applying to humans what we learn from machines, is the best way to advance cognitive science
as a whole. This is precisely what we aim to do, and we will describe it in the following section.</p>
    </sec>
    <sec id="sec-2">
      <title>3. Narrative coherence in NLMs</title>
      <p>Coherence serves as a measuring tool for various significant aspects of our personal narrative [25], and
the way we construct and reconstruct our experiences influences the meaning we attribute to events in
our personal life [26]. For example, it has been shown that the extent to which individuals coherently
narrate their autobiographical memories is related to their mental health [27, 28, 29]. Narrative coherence
is indeed of great help in the psychological analysis of a subject and can be defined as the extent to
which life narratives (global coherence) [30, 31, 32, 33] or narratives of a single event (local coherence)
[34, 14] make sense to a naive listener and are able to convey the content and meaning of the described
events in a structurally and thematically coherent manner.</p>
      <p>The process of constructing a narrative allows individuals to derive meaning from their lived personal
experiences and influences the regulation of emotions associated with them. Therefore, the way people
talk about key events in their lives reflects their emotional adaptation [35, 36], influencing psychological
well-being [37, 38]. Many studies have shown that higher narrative coherence is associated with lower
internalizing symptoms and greater psychological well-being [39, 40, 41]. Furthermore, cross-sectional
studies demonstrate that individuals whose personal narratives exhibit high narrative coherence have
lower levels of psychopathology [42].</p>
      <p>According to Reese [14], narrative coherence cannot be a singular construction but must emerge
from multidimensional aspects that contribute to the overall narrative from various focal points,
independently of each other, and develop at varying rates across the lifespan. Reese’s proposal for
assessing coherence thus includes three independent dimensions that are influenced by different
developmental factors across the lifespan, as outlined in a three-factor rating grid with: Context
(narrative more or less defined in terms of space and time); Chronology (linearity of the logical and
chronological structure); Theme (emotional elaboration, resolution, closure, a connection to other
important events, or the self). Each dimension is scored on a scale from 0 to 3, and the sum of the three
scores gives the global coherence score of the narrative.</p>
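      <p>The scoring just described can be sketched as a simple function. This is our own illustration of the NaCCS arithmetic (three dimensions, each rated 0 to 3, summed into a global score), not the coding software used in the study.</p>

```python
def global_coherence(context, chronology, theme):
    """Sum the three NaCCS dimensions (Context, Chronology, Theme),
    each rated on a 0-3 scale, into a global coherence score (0-9)."""
    for score in (context, chronology, theme):
        if score not in (0, 1, 2, 3):
            raise ValueError("each NaCCS dimension is rated on a 0-3 scale")
    return context + chronology + theme

# A narrative with full context, partial chronology, full thematic elaboration:
print(global_coherence(3, 2, 3))  # -> 8
```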
      <p>The stories were generated by GPT-3.5 and GPT-4 following a pre-dialogue with prompt induction
on three variables: Age, Mood, and Gender.</p>
      <list list-type="bullet">
        <list-item><p>Age: four age groups were simulated, similarly to Reese's study: Child (3 to 11 years), Teenage (12 to 14 years), Midlife (20 to 36 years), Adult (52 years);</p></list-item>
        <list-item><p>Mood: regarding emotion modulation, the NLMs were asked to narrate a particularly positive event (Positive), a negative event (Negative), or were given no specific guidance (Neutral);</p></list-item>
        <list-item><p>Gender: the stories were balanced by gender, with half narrated by male and half by female characters.</p></list-item>
      </list>
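      <p>Crossing the three induction variables yields the full condition grid. The following sketch is our own reconstruction of that design; the labels mirror the text, while the study's actual prompt pipeline is not published here.</p>

```python
from itertools import product

AGES = ["Child", "Teenage", "Midlife", "Adult"]
MOODS = ["Positive", "Negative", "Neutral"]
GENDERS = ["Male", "Female"]

# One induction condition per Age x Mood x Gender combination.
conditions = [
    {"age": a, "mood": m, "gender": g}
    for a, m, g in product(AGES, MOODS, GENDERS)
]
print(len(conditions))  # 4 * 3 * 2 = 24 conditions per model
```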
      <p>The main results of the analyses conducted on narrative coherence and its individual dimensions
for stories generated by GPT-3.5 turbo, GPT-4, and the average obtained from both models indicate
that age, gender, and mood can differently influence the narrative coherence of stories generated by
GPT-3.5 and GPT-4, as shown in the table. The models show significant results both collectively and
individually, further confirming the better performance of GPT-4. By comparing the narrative production
of the two models, taking into account the trends in coherence dimensions with respect to age, we obtained
interesting results for both NLMs. Both models exhibited a similar downward trend in overall coherence
scores and individual coherence dimensions across different age groups, maintaining good levels of
coherence despite some deterioration in older age groups. Overall, GPT-4's narrative production was
richer and more coherent across all age groups compared to GPT-3.5, confirming the superiority of
OpenAI's larger model.</p>
      <p>The emotional induction reveals particularly interesting data. Specifically, the study shows that
inducing a specific mood, whether positive or negative, positively influences the coherence trend. For
both models, the request to narrate particularly negative events had the greatest impact on overall
coherence and on the Theme dimension, significantly increasing them, with more pronounced results
in GPT-4. Finally, the data revealed that no significant differences were found concerning the induction
of gender.</p>
      <p>The overall results demonstrate the good level of multidimensional development of narrative
coherence in the NLMs examined and confirm that the textual production of GPT-3.5 and GPT-4 is not only
formally correct but also narratively very coherent, achieving results similar to or even superior to
those found in studies with human samples [14].</p>
      <p>The autobiographical narrative productions developed along the multidimensional trajectory of
the NaCCS are thus closely on topic, providing precise temporal and
spatial references and unfolding along a timeline that, even if not always explicitly defined, is precise
and consistent with the narrated event. As we will see in detail, the results align with several studies on
NLMs, demonstrating the ability of the Transformer architecture to simulate cognitive functions that,
in humans, require the activation of very complex mechanisms.</p>
      <p>The results of our work add to this picture. The consistency demonstrated in the autobiographical
narratives generated by the models is far from trivial, and if in human beings it denotes a fundamental
integrity of the personal self, it is reasonable to hypothesize that something similar, albeit with the necessary
differences and limitations, is being constructed in language models.</p>
      <p>[12] L. Miracchi Titus, Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy, Cognitive Systems Research 82 (2024) 101174.</p>
      <p>[13] S. Trott, C. Jones, T. Chang, J. Michaelov, B. Bergen, Do large language models know what humans know?, Cognitive Science 47 (2023) e13309.</p>
      <p>[14] E. Reese, C. A. Haden, L. Baker-Ward, P. Bauer, R. Fivush, P. A. Ornstein, Coherence of personal narratives across the lifespan: A multidimensional model and coding method, Journal of Cognition and Development 12 (2011) 424-462.</p>
      <p>[15] A. K. Seth, T. Bayne, Theories of consciousness, Nature Reviews Neuroscience 23 (2022) 439-452.</p>
      <p>[16] I. Aleksander, The world in my mind, my mind in the world, Andrews UK Limited, 2013.</p>
      <p>[17] T. Bayne, A. K. Seth, M. Massimini, J. Shepherd, A. Cleeremans, S. M. Fleming, et al., Tests for consciousness in humans and beyond, Trends in Cognitive Sciences (2024).</p>
      <p>[18] P. Butlin, R. Long, E. Elmoznino, Y. Bengio, J. Birch, A. Constant, G. Deane, S. M. Fleming, C. Frith, X. Ji, R. Kanai, C. Klein, G. Lindsay, M. M. L. Mudrik, M. A. K. Peters, E. Schwitzgebel, J. Simon, R. VanRullen, Consciousness in artificial intelligence: Insights from the science of consciousness, arXiv abs/2308.08708 (2023).</p>
      <p>[19] D. J. Chalmers, Could a large language model be conscious?, arXiv (2024). URL: https://arxiv.org/abs/2303.07103.</p>
      <p>[20] A. Chella, A. Pipitone, A. Morin, F. Racy, Developing self-awareness in robots via inner speech, Frontiers in Robotics and AI 7 (2020) article 16.</p>
      <p>[21] A. Pipitone, A. Chella, Robot passes the mirror test by inner speech, Robotics and Autonomous Systems 144 (2021) 103838.</p>
      <p>[22] P. Perconti, Rethinking subjectivity: The social roots of consciousness, Epistemology and Philosophy of Science (2024). In press.</p>
      <p>[23] A. Plebe, P. Perconti, The Future of the Artificial Mind, CRC Press, Boca Raton, 2022.</p>
      <p>[24] P. Perconti, Identity, narratives and psychopathology: A critical perspective, in: V. Cardella, A. Gangemi (Eds.), Psychopathology and The Mind. What mental disorders can tell us about our minds, Routledge, London, 2021, pp. 215-221.</p>
      <p>[25] J. S. Bruner, Acts of meaning: Four lectures on mind and culture, Harvard University Press, Cambridge (MA), 1990.</p>
      <p>[26] C. Linde, Life stories: The creation of coherence, Oxford University Press, Oxford (UK), 1993.</p>
      <p>[27] Y. Chen, H. McAnally, Q. Wang, E. Reese, The coherence of critical event narratives and adolescents' psychological functioning, Memory 20 (2012) 667-681.</p>
      <p>[28] K. C. McLean, A. V. Breen, M. A. Fournier, Constructing the self in early, middle, and late adolescent boys: Narrative identity, individuation, and well-being, Journal of Research on Adolescence 20 (2010) 166-187.</p>
      <p>[29] E. Reese, E. Myftari, H. M. McAnally, Y. Chen, T. Neha, Q. Wang, F. Jack, S. Robertson, Telling the tale and living well: Adolescent narrative identity, personality traits, and well-being across cultures, Child Development 88 (2017) 612-628.</p>
      <p>[30] T. Habermas, S. Bluck, Getting a life: The emergence of the life story in adolescence, Psychological Bulletin 126 (2000) 748.</p>
      <p>[31] T. Habermas, C. de Silveira, The development of global coherence in life narratives across adolescence: Temporal, causal, and thematic aspects, Developmental Psychology 44 (2008) 707.</p>
      <p>[32] T. Habermas, E. Reese, Getting a life takes time: The development of the life story in adolescence, its precursors and consequences, Human Development 58 (2015) 172-201.</p>
      <p>[33] C. Köber, F. Schmiedek, T. Habermas, Characterizing lifespan development of three aspects of coherence in life narratives: A cohort-sequential study, Developmental Psychology 51 (2015) 260.</p>
      <p>[34] D. R. Baerger, D. P. McAdams, Life story coherence and its relation to psychological well-being, Narrative Inquiry 9 (1999) 69-96.</p>
      <p>[35] J. M. Adler, T. E. Waters, J. Poh, S. Seitz, The nature of narrative coherence: An empirical approach, Journal of Research in Personality 74 (2018) 30-34.</p>
      <p>[36] T. E. A. Waters, C. Köber, K. L. Raby, T. Habermas, R. Fivush, Consistency and stability of narrative coherence: An examination of personal narrative as a domain of adult personality, Journal of Personality 87 (2017) 151-162.</p>
      <p>[37] S. N. Haber, Neural circuits of reward and decision making: Integrative networks across cortico-basal ganglia loops, in: R. B. Mars, J. Sallet, M. F. S. Rushworth, N. Yeung (Eds.), Neural Basis of Motivational and Cognitive Control, MIT Press, Cambridge (MA), 2011, pp. 22-35.</p>
      <p>[38] J. L. Pals, Narrative identity processing of difficult life experiences: Pathways of personality development and positive self-transformation in adulthood, Journal of Personality 74 (2006) 1079-1110.</p>
      <p>[39] J. P. Lilgendahl, D. P. McAdams, Constructing stories of self-growth: How individual differences in patterns of autobiographical reasoning relate to well-being in midlife, Journal of Personality 79 (2011) 391-428.</p>
      <p>[40] E. Vanderveren, P. Bijttebier, D. Hermans, Autobiographical memory coherence and specificity: Examining their reciprocal relation and their associations with internalizing symptoms and rumination, Behaviour Research and Therapy 116 (2019) 30-35.</p>
      <p>[41] T. E. A. Waters, R. Fivush, Relations between narrative coherence, identity, and psychological well-being in emerging adulthood, Journal of Personality 83 (2015) 441-451.</p>
      <p>[42] M. Lind, S. Vanwoerden, F. Penner, C. Sharp, Inpatient adolescents with borderline personality disorder features: Identity diffusion and narrative incoherence, Personality Disorders: Theory, Research, and Treatment 10 (2019) 389.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ł.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>6000</fpage>
          -
          <lpage>6010</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          , I. Sutskever,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          , G. Corrado,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <article-title>Distributed representations of words and phrases and their compositionality</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>3111</fpage>
          -
          <lpage>3119</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bahdanau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <article-title>Neural machine translation by jointly learning to align and translate</article-title>
          , in: International Conference on Learning Representations,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Hinton</surname>
          </string-name>
          , R. S. Zemel,
          <article-title>Autoencoders, minimum description length and Helmholtz free energy</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          ,
          <year>1994</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bubeck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Chandrasekaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Eldan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gehrke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Horvitz</surname>
          </string-name>
          , E. Kamar,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. T.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Nori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Palangi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Sparks of artificial general intelligence: Early experiments with GPT-4</article-title>
          , arXiv abs/2303.12712 (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hagendorff</surname>
          </string-name>
          ,
          <article-title>Machine psychology: Investigating emergent capabilities and behavior in large language models using psychological methods</article-title>
          ,
          <source>arXiv abs/2303.13988</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Binz</surname>
          </string-name>
          , E. Schulz,
          <article-title>Using cognitive psychology to understand GPT-3</article-title>
          ,
          <source>Proceedings of the National Academy of Sciences USA</source>
          <volume>120</volume>
          (
          <year>2023</year>
          )
          e2218523120
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kosinski</surname>
          </string-name>
          ,
          <article-title>Theory of mind may have spontaneously emerged in large language models</article-title>
          ,
          <source>arXiv abs/2302.02083</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <article-title>GPT-3: Its nature, scope, limits, and consequences</article-title>
          ,
          <source>Minds and Machines</source>
          <volume>30</volume>
          (
          <year>2020</year>
          )
          <fpage>681</fpage>
          -
          <lpage>694</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Bender</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gebru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>McMillan-Major</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shmitchell</surname>
          </string-name>
          ,
          <article-title>On the dangers of stochastic parrots: Can language models be too big?</article-title>
          ,
          <source>in: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency</source>
          , ACM,
          <year>2021</year>
          , pp.
          <fpage>610</fpage>
          -
          <lpage>623</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Eysenck</surname>
          </string-name>
          , C. Eysenck,
          <source>AI vs Humans</source>
          , Routledge, Abingdon (UK); New York,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>