<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Sophia Project: Enhancing Player Immersion with Intelligent Autonomous NPCs in 2D RPGs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vinícius Custódio Chelli</string-name>
          <email>theongatechelli@gmail.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>André Luiz França Batista</string-name>
          <email>andreluiz@iftm.edu.br</email>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>25</fpage>
      <lpage>30</lpage>
      <abstract>
        <p>Introduction: This paper introduces the Sophia Project, an advanced intelligent autonomous NPC system specifically designed for 2D RPG games using Unity. Objective: Sophia aims to employ reinforcement learning techniques [8] to dynamically evolve its relationship with players and features voice recognition capabilities to interpret emotional and contextual commands. Methodology or Steps: Integrated with the ChatGPT API [9], Sophia can conduct contextually meaningful dialogues, complemented by recurrent neural networks for environment perception and memory management [10], and movement through NavMesh. Additional modules include dynamic relationship tracking, contextual memory retention, and object perception for deeper interactions. Results: Experimental evaluations with 30 participants demonstrated significantly enhanced player immersion, adaptability, and believability compared to traditional scripted NPC systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Unity</kwd>
        <kwd>C#</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>immersion</kwd>
        <kwd>NPCs</kwd>
        <kwd>reinforcement learning</kwd>
        <kwd>RNN</kwd>
        <kwd>voice recognition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>https://github.com/ViniciusChelli (V. C. Chelli); https://github.com/andreluizfrancabatista (A. L. F. Batista)</p>
      <p>CEUR Workshop Proceedings, ISSN 1613-0073</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>The Sophia Project adopts a modular, multi-agent architecture that fuses advanced AI technologies to
foster highly autonomous, emotionally intelligent, and narratively coherent NPC behavior. All modules
interact in real-time to drive immersive experiences inside a 2D farming simulation inspired by Harvest
Moon and Stardew Valley. Below, we detail each module with in-game examples.</p>
      <sec id="sec-2-1">
        <title>2.1. Natural Language Understanding via ChatGPT API [9]</title>
        <p>Sophia uses the ChatGPT API for Natural Language Understanding (NLU) and Generation (NLG). Player
voice inputs—such as “Good morning, Sophia!” or “Why are you upset?”—are transcribed via speech
recognition, then sent to the LLM and returned as structured JSON with semantic tags and suggested
tone (e.g., friendly, apologetic).</p>
        <p>This response is synthesized into speech using a TTS engine and played with appropriate emotion in
Unity. For instance, if the player says something supportive after an argument, Sophia might respond
with a softened voice and phrases like, “I’m still hurt… but thank you for caring.”</p>
        <p>This system ensures that conversations are coherent, emotionally responsive, and influenced by both
memory and prior context.</p>
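        <p>The structured round trip described above can be sketched as follows. The JSON field names (text, tone, tags) are illustrative assumptions for this sketch, not the project’s actual schema.</p>
```python
import json

def parse_npc_reply(raw_reply):
    """Parse a structured LLM reply into text, tone, and semantic tags.

    The schema (text/tone/tags) is a hypothetical example of the kind of
    JSON payload described here, with a neutral fallback when a field is
    missing.
    """
    data = json.loads(raw_reply)
    return {
        "text": data.get("text", ""),
        "tone": data.get("tone", "neutral"),   # e.g. friendly, apologetic
        "tags": data.get("tags", []),          # semantic tags fed to memory
    }

# Example payload such as the API might return for a supportive player line.
reply = parse_npc_reply(
    '{"text": "I\'m still hurt... but thank you for caring.",'
    ' "tone": "apologetic", "tags": ["reconciliation"]}'
)
```
        <p>The parsed tone tag is what the TTS stage would use to select an emotional delivery.</p>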
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Recurrent Neural Networks for Memory Management [10]</title>
        <p>To simulate episodic memory, Sophia uses a Long Short-Term Memory (LSTM) network trained on 1,000
gameplay sequences. The network receives time-stamped data on spatial location, previous dialogue
tags, emotional valence, and trust level.</p>
        <p>In-game efect: If the player previously lied to Sophia or gave her an unwanted gift, these events are
embedded in the memory vector. Later conversations reflect this history. For example, Sophia might
say, “You’ve been acting strange lately… I’m not sure I can trust you again,” if emotional consistency
has been negative.</p>
        <p>The LSTM model was trained for 60 epochs using the Adam optimizer with early stopping on
validation loss, ensuring that only meaningful interaction patterns are retained.</p>
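        <p>How the time-stamped interaction data might be windowed into fixed-length sequences for the LSTM can be sketched as below; the feature layout and window size are illustrative assumptions, not the trained network’s actual input format.</p>
```python
# Assemble time-stamped interaction features into fixed-length windows for
# an LSTM. Feature layout and window size are illustrative assumptions.
SEQ_LEN = 8

def to_feature_vector(event):
    # [x, y, dialogue tag id, emotional valence, trust level]
    return [event["x"], event["y"], event["tag_id"],
            event["valence"], event["trust"]]

def make_sequences(events):
    """Slide a SEQ_LEN window over the interaction history."""
    vectors = [to_feature_vector(e) for e in events]
    return [vectors[i:i + SEQ_LEN]
            for i in range(len(vectors) - SEQ_LEN + 1)]

# Ten interactions yield three overlapping training windows.
history = [{"x": i, "y": 0, "tag_id": 3, "valence": -0.5, "trust": 0.4}
           for i in range(10)]
windows = make_sequences(history)
```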
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Deep Q-Learning for Adaptive Behavior [8]</title>
        <p>
          Sophia’s behavioral decision-making uses a Deep Q-Network (DQN), which selects responses that
maximize long-term reward. The learning algorithm uses the Bellman update:
Q(s, a) ← Q(s, a) + α [ r + γ max_{a′} Q(s′, a′) − Q(s, a) ]  (1)
Gameplay Context: Each state s represents Sophia’s perception of the game world: player distance,
tone of recent interactions, current location, and trust level. - s₁: Player enters Sophia’s garden after
ignoring her for three days. - s₂: Player apologizes for past mistakes. - s₃: Player compliments Sophia
at the festival.
        </p>
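        <p>In tabular form, the Bellman update above reduces to a one-line rule. The states, actions, and hyperparameter values in this sketch are illustrative stand-ins; in the actual system a Deep Q-Network approximates Q.</p>
```python
# Tabular sketch of the Bellman update; the DQN approximates Q with a
# neural network, but the update rule is the same. Alpha and gamma are
# assumed values, not taken from the paper.
ALPHA, GAMMA = 0.1, 0.9

def q_update(Q, s, a, reward, s_next, actions):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

actions = ["approach", "distance", "comment", "decline"]
Q = {(s, a): 0.0 for s in ["garden", "festival"] for a in actions}

# Player reacts positively (+1.0) after Sophia approaches at the festival.
q_update(Q, "festival", "approach", 1.0, "garden", actions)
```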
        <p>Each action a corresponds to Sophia’s next move: - Approach the player and initiate warm dialogue.
- Express caution or distance herself. - Comment on recent events. - Decline interaction.</p>
        <p>Rewards are defined by how the player reacts: - +1.0 for positive engagement (e.g., continued
conversation), - +0.5 for neutral proximity, - −0.75 for off-topic or silent responses, - −1.0 for hostility
or departure.</p>
        <p>Training Specifications
The network receives 42 input features (emotions, location, time, etc.), with 12 possible actions. It
uses prioritized replay, mini-batch size of 64, ε-greedy exploration (1.0 → 0.1), and three hidden layers
(256–128–64, ReLU). Target networks update every 500 steps.</p>
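        <p>The ε-greedy schedule can be sketched as a linear decay; the decay horizon of 10,000 steps is an assumed value, as the paper gives only the endpoints (1.0 → 0.1).</p>
```python
import random

# Linear epsilon-greedy schedule from 1.0 down to 0.1; the decay horizon
# is an assumed value, not taken from the paper.
EPS_START, EPS_END, DECAY_STEPS = 1.0, 0.1, 10_000

def epsilon(step):
    frac = min(step / DECAY_STEPS, 1.0)
    return EPS_START + frac * (EPS_END - EPS_START)

def select_action(q_values, step, rng=random):
    """Explore with probability epsilon(step), otherwise exploit."""
    if rng.random() > epsilon(step):       # exploit: best known action
        return max(q_values, key=q_values.get)
    return rng.choice(list(q_values))      # explore: uniform random action
```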
        <p>This architecture ensures that Sophia not only reacts appropriately in the moment, but learns to
anticipate and adapt to complex player behavior patterns over time.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. NavMesh Navigation</title>
        <p>Sophia’s movement is managed by Unity’s NavMeshAgent system. Navigation is not random: it is
emotionally informed and context-aware.</p>
        <p>Example: If trust is high, Sophia actively approaches the player in open fields. If afraid (due to hostile
interactions), she may avoid crowded areas or retreat to her house. The DQN selects a navigation action,
and the NavMesh generates the path, integrating both affect and world geometry.</p>
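        <p>The affect-to-navigation mapping might look like the following sketch; the thresholds and location names are illustrative assumptions, and pathfinding itself is delegated to Unity’s NavMesh in the real system.</p>
```python
# Map Sophia's emotional state to a navigation goal; the DQN picks the
# high-level action and NavMesh computes the actual path. Thresholds and
# location names here are illustrative assumptions.
def choose_destination(trust, fear, player_pos, home_pos):
    if fear > 0.7:
        return home_pos    # retreat after hostile interactions
    if trust > 0.6:
        return player_pos  # approach the player in open fields
    return None            # stay put or idle-wander

dest = choose_destination(trust=0.8, fear=0.1,
                          player_pos=(12, 5), home_pos=(0, 0))
```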
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Dynamic Relationship System</title>
        <p>This subsystem tracks trust, empathy, and affection as floating-point variables. These evolve based on
dialogue history, gifts, and situational behavior.</p>
        <p>In-game example: Repeatedly helping Sophia with farming or checking on her after a storm increases
trust. Ignoring her or mocking her responses lowers empathy. The values modulate how she greets the
player, her facial expressions, and even her willingness to share personal stories.</p>
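        <p>A minimal sketch of this update loop, with event deltas as illustrative assumptions and values clamped to [0, 1]:</p>
```python
# Relationship state as floating-point variables, nudged by events and
# clamped to [0, 1]. The per-event deltas are illustrative assumptions.
EVENT_DELTAS = {
    "helped_farming":      {"trust": 0.10, "empathy": 0.05, "affection": 0.05},
    "checked_after_storm": {"trust": 0.15, "empathy": 0.10, "affection": 0.05},
    "mocked_response":     {"trust": -0.10, "empathy": -0.15, "affection": -0.05},
}

def apply_event(state, event):
    for key, delta in EVENT_DELTAS[event].items():
        state[key] = min(1.0, max(0.0, state[key] + delta))
    return state

state = {"trust": 0.5, "empathy": 0.5, "affection": 0.5}
apply_event(state, "helped_farming")
apply_event(state, "checked_after_storm")
```
        <p>Downstream systems (greetings, expressions, story unlocks) would read these clamped values rather than raw event counts.</p>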
      </sec>
      <sec id="sec-2-6">
        <title>2.6. Object Perception Module</title>
        <p>Sophia uses grid-based environmental scanning to detect objects and update dialogue contextually.
Example: If the player picks up a rare herb near Sophia, she might immediately react with: “That’s
the one I was looking for!” Conversely, if the player picks up a harmful item or a tool she dislikes, she
might step back or warn them.</p>
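        <p>The grid scan can be sketched as a radius query over occupied cells; the radius, distance metric, and object names are illustrative assumptions.</p>
```python
# Grid-based scan: report objects within a Chebyshev radius of Sophia so
# dialogue can react to them. Radius and object names are illustrative.
def scan_objects(npc_pos, objects, radius=2):
    """objects maps (x, y) grid cells to object names."""
    nx, ny = npc_pos
    return sorted(
        name for (x, y), name in objects.items()
        if radius >= max(abs(x - nx), abs(y - ny))
    )

world = {(3, 3): "rare_herb", (4, 2): "hoe", (9, 9): "rock"}
nearby = scan_objects((3, 2), world)
```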
      </sec>
      <sec id="sec-2-7">
        <title>2.7. Contextual Memory System</title>
        <p>In addition to RNN-based short-term memory, Sophia maintains a long-term episodic log that includes:
- Previous conversations, - Emotional shifts, - Events like festivals, storms, arguments.</p>
        <p>Use in-game: This memory allows Sophia to say, “You were really kind to me at the Spring Festival,”
or “I still remember that fight we had last season…”—making her feel like a persistent, evolving character.</p>
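        <p>A minimal sketch of such an episodic log, with entry fields and the query helper as illustrative assumptions:</p>
```python
import time

# Long-term episodic log kept alongside the RNN's short-term memory.
# The entry fields and query helper are illustrative assumptions.
episodic_log = []

def remember(kind, description, valence):
    episodic_log.append({
        "t": time.time(),
        "kind": kind,                # conversation, emotional_shift, event
        "description": description,
        "valence": valence,          # signed emotional weight of the memory
    })

def recall(kind):
    """Return past entries of one kind, most recent first."""
    entries = [e for e in episodic_log if e["kind"] == kind]
    return list(reversed(entries))

remember("event", "Spring Festival: player was kind", 0.8)
remember("event", "Argument last season", -0.6)
memories = recall("event")
```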
      </sec>
      <sec id="sec-2-8">
        <title>2.8. Voice Recognition Integration</title>
        <p>Player voice commands are processed via Windows 11’s native speech recognition. Emotional tone
(pitch, intensity) is estimated and tagged as cheerful, sad, angry, etc.</p>
        <p>Gameplay effect: If a player says, “Sophia, I’m sorry!” in a trembling voice, Sophia may lower her
guard and respond softly. Conversely, if the same phrase is said in a cold or sarcastic tone, she might
respond with doubt or silence.</p>
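        <p>The tone-tagging step might be approximated by simple thresholds on the estimated pitch and intensity; the thresholds below are illustrative assumptions, not calibrated values from the system.</p>
```python
# Heuristic tone tagging from estimated pitch and intensity; thresholds
# are illustrative assumptions, not calibrated values.
def tag_tone(pitch_hz, intensity_db, tremor=False):
    if tremor:
        return "sad"        # trembling voice reads as distress
    if intensity_db > 70 and pitch_hz > 220:
        return "angry"
    if pitch_hz > 200:
        return "cheerful"
    return "neutral"

tone = tag_tone(pitch_hz=180, intensity_db=55, tremor=True)
```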
      </sec>
      <sec id="sec-2-9">
        <title>System Architecture Overview</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experimental Results and Discussion</title>
      <p>To evaluate the effectiveness of Sophia’s architecture in enhancing player immersion and emotional
engagement, we conducted a controlled user study employing an A/B testing framework.</p>
      <sec id="sec-3-1">
        <title>Participants and Demographics</title>
        <p>The study recruited 30 participants (17 male, 11 female, 2 non-binary), aged between 18 and 34 years
(M = 24.6, SD = 4.2). Participants were drawn from two primary sources: undergraduate and graduate
students from Computer Science and Game Design courses (n = 18), and members of online gaming
communities with experience in 2D RPGs and simulation titles (n = 12). All participants reported
playing games at least once a week, with 60% describing themselves as “frequent players” and 20% as
having previous experience with modded or AI-enhanced games.</p>
        <p>Prior to participation, individuals were briefed on the general goals of the study, signed informed
consent forms, and were told that two distinct NPC systems would be tested, without disclosing technical
details or hypotheses.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Experimental Design</title>
        <p>Each participant engaged in two separate but structurally identical gameplay sessions within a
Unity-based 2D RPG prototype:
• Condition A (Control): Featured a traditional NPC powered by static dialogue trees and fixed
event scripts.
• Condition B (Experimental): Featured the Sophia system, with dynamic emotional responses,
memory-based interactions, and AI-driven behavior modules.</p>
        <p>Both versions contained the same narrative arc, including:
• A simple combat tutorial,
• An emotional side quest (e.g., helping the NPC recover from an in-game argument),
• Context-sensitive item exchanges,
• Branching dialogues based on previous interactions.</p>
        <p>Gameplay order was counterbalanced to reduce order effects: half of the participants experienced
the control NPC first, and half began with the Sophia system. Each session lasted between 25 and 40
minutes depending on interaction depth and player behavior.</p>
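        <p>The counterbalancing scheme can be sketched as a simple alternating assignment; the condition labels are shorthand for the two sessions described above.</p>
```python
# Counterbalanced ordering: alternate which condition each participant
# sees first, so half start with the control NPC and half with Sophia.
def assign_orders(n_participants):
    orders = []
    for i in range(n_participants):
        if i % 2 == 0:
            orders.append(("control", "sophia"))
        else:
            orders.append(("sophia", "control"))
    return orders

orders = assign_orders(30)
control_first = sum(1 for o in orders if o[0] == "control")
```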
      </sec>
      <sec id="sec-3-3">
        <title>Evaluation Metrics</title>
        <p>Three primary metrics were used for evaluation:
1. Immersion: Measured using a modified version of the IEQ (Immersive Experience Questionnaire)
[? ], adapted for short-term interaction and translated into Portuguese. Items included “I felt
connected to the character” and “I was unaware of time passing.”
2. Emotional Engagement: Assessed using a custom 5-point Likert scale, ranging from “not
emotionally affected at all” to “highly emotionally involved,” focusing on perceived realism and
emotional congruence of the NPC.
3. Narrative Coherence: Participants rated dialogue flow, memory continuity, and responsiveness
using a 4-item scale (e.g., “Did the NPC remember past events?” and “Did the dialogue make
logical sense over time?”).</p>
        <p>Each scale was completed after both gameplay sessions. Additionally, qualitative feedback was
collected via short interviews and open-ended written comments.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Quantitative Results</title>
        <p>
          A repeated-measures ANOVA showed statistically significant differences favoring the Sophia system:
• Immersion: F(1, 29) = 22.8, p &lt; 0.001, η² = 0.44
• Emotional Engagement: F(1, 29) = 25.3, p &lt; 0.001, η² = 0.47
• Narrative Coherence: F(1, 29) = 19.7, p &lt; 0.001, η² = 0.40
        </p>
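        <p>As a consistency check, for a design with these degrees of freedom, partial eta squared can be recovered from the F statistic as F·df1 / (F·df1 + df2); the reported effect sizes agree with this relation.</p>
```python
# Recover partial eta squared from a reported F statistic:
# eta_p^2 = (F * df1) / (F * df1 + df2). With df1 = 1 and df2 = 29 this
# reproduces the effect sizes reported in the text.
def partial_eta_squared(F, df1, df2):
    return (F * df1) / (F * df1 + df2)

eta_immersion = round(partial_eta_squared(22.8, 1, 29), 2)
eta_emotion   = round(partial_eta_squared(25.3, 1, 29), 2)
eta_narrative = round(partial_eta_squared(19.7, 1, 29), 2)
```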
        <p>These results indicate that participants found the Sophia-powered NPC significantly more immersive,
emotionally compelling, and narratively coherent than the traditional scripted counterpart.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Qualitative Observations</title>
        <p>Open-ended feedback further reinforced the quantitative findings. Common participant comments
included:
• “It felt like the character actually remembered what I said.”
• “I wasn’t expecting her to react differently depending on how I talked to her.”
• “The way she moved away after I ignored her was oddly real.”</p>
        <p>Conversely, the control condition was frequently described as “predictable,” “flat,” or “like most games
I’ve played before.”</p>
      </sec>
      <sec id="sec-3-6">
        <title>Discussion</title>
        <p>The integration of adaptive AI systems, contextual memory, and emotional modeling had a measurable
and substantial impact on user experience. By grounding NPC behavior in reinforcement learning and
affective feedback, Sophia creates an illusion of agency and realism that surpasses traditional scripting.
These findings support the notion that modular AI-driven characters can elevate narrative engagement
and player satisfaction in interactive environments.</p>
      </sec>
      <sec id="sec-3-7">
        <title>Behavioral Metrics and User Feedback</title>
        <p>Sophia demonstrated notable performance gains across multiple metrics. Player–NPC interaction
frequency rose by 65% over the scripted NPC baseline. Average interaction length increased by 40%,
and user re-engagement rate was nearly double when interacting with Sophia. These patterns were
particularly evident in emotionally complex scenarios and open-ended tasks.</p>
        <p>Statistical analysis using a paired-sample t-test confirmed significant improvements (p &lt; 0.05) in
three key metrics:
• User satisfaction: 3.2 → 4.6 (Likert 5-point scale)
• Perceived emotional responsiveness: 2.9 → 4.7
• Realism of NPC behavior: consistently higher scores with Sophia</p>
        <p>These results support the hypothesis that emotionally coherent, adaptive behavior significantly
enhances perceived immersion and believability in NPC interactions.
Post-session interviews and open-ended questionnaires (Appendix A) revealed consistent user
perceptions of Sophia as “emotionally intelligent,” “attentive,” and “lifelike.” Many participants highlighted
Sophia’s capacity to remember prior events, personalize interactions, and adapt tone or behavior based
on emotional feedback.</p>
        <p>Sample testimonials:
• “It felt like Sophia genuinely remembered who I was and what I had gone through.”
• “Her voice felt natural, and her tone changed depending on how I spoke to her.”
• “Unlike normal NPCs, she didn’t repeat herself; she reacted to what I did.”</p>
      </sec>
      <sec id="sec-3-8">
        <title>Scalability and Design Implications</title>
        <p>Sophia’s modular architecture is designed for extensibility. Multiple NPCs can be instantiated within
the same environment, each maintaining distinct memory states, emotional profiles, and player-specific
interaction histories. This enables the construction of dynamic game worlds populated with autonomous
agents capable of expressing individuality, reacting to their social environment, and evolving narrative
roles.</p>
        <p>These findings suggest that systems like Sophia may serve as foundational frameworks for
next-generation narrative engines, combining emotional depth, persistent context, and real-time adaptation
in support of emergent storytelling.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Limitations and Potential Solutions</title>
      <p>One of the main limitations of the current Sophia implementation lies in its reliance on an active internet
connection to access the cloud-based ChatGPT API for natural language processing [9]. While this
architecture ensures sophisticated dialogue generation, it also introduces practical constraints such as
increased latency, potential service downtime, dependency on external servers, and concerns regarding
data privacy and long-term maintainability—particularly problematic for offline gameplay scenarios or
independent developers with constrained budgets.</p>
      <p>To mitigate these limitations, future iterations of Sophia will prioritize the integration of open-source
Large Language Models (LLMs) capable of running locally. Candidate models include LLaMA, GPT-NeoX,
Mistral, Falcon, and BLOOMZ. These models can be fine-tuned with in-game dialogue corpora,
emotion-tagged interactions, and character backstories to yield a customized conversational agent.
When quantized and optimized for inference efficiency, such models are deployable on consumer-grade
GPUs, enabling low-latency interactions and enhanced user privacy.</p>
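      <p>The memory arithmetic behind that claim can be sketched as follows; this counts weight storage only, ignoring activations and KV cache, and the 7B parameter count is taken as a representative example.</p>
```python
# Back-of-the-envelope weight memory for a locally deployed LLM:
# params * bits_per_weight / 8 bytes. Ignores activations and KV cache.
def weight_memory_gb(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1e9

fp16_gb = weight_memory_gb(7e9, 16)  # 14 GB: beyond most consumer GPUs
int4_gb = weight_memory_gb(7e9, 4)   # 3.5 GB: fits an 8 GB consumer GPU
```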
      <p>A more ambitious line of development involves engineering a lightweight, domain-specific LLM
tailored to the needs of emotionally responsive NPCs. This specialized model would be trained on
curated datasets consisting of branching dialogues, affect-labeled utterances, and narrative arcs. Training
could be performed using frameworks like PyTorch or TensorFlow, with deployment via ONNX and
Unity’s Barracuda inference engine or through native plugin bindings in C++/C. This would allow fully
offline, real-time dialogue processing tightly integrated with game logic.</p>
      <p>By shifting from reliance on cloud-based APIs to embedded AI modules, Sophia aims to support
scalable, cost-effective, and privacy-preserving deployment across diverse platforms—including mobile,
console, and VR—without compromising its narrative and emotional sophistication.</p>
      <p>This approach would enable Sophia and future NPC systems to retain the emotional richness and
contextual continuity of large-scale language models while operating entirely offline, thus meeting the
constraints of privacy-sensitive and resource-limited environments.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Future Work</title>
      <p>Future development of the Sophia Project will focus on enhancing realism, scalability, and interactivity
to support increasingly rich and emergent gameplay experiences.</p>
      <sec id="sec-5-1">
        <title>Personality Modeling and Narrative Individualization</title>
        <p>We plan to implement a personality modeling subsystem where each NPC is characterized by unique
and evolving traits such as empathy, assertiveness, or curiosity. These traits will influence how the NPC
reacts to player decisions, shaping branching narrative paths and fostering the illusion of individual
growth and agency over time.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Offline NLP and Decentralized Deployment</title>
        <p>Building on the limitations identified, future versions of Sophia will adopt optimized, quantized LLMs to
support full offline functionality. A current prototype leverages the Mistral 7B model for local inference
and XTTS for real-time voice synthesis. This offline version allows uninterrupted gameplay while
preserving privacy and eliminating reliance on cloud infrastructure.</p>
      </sec>
      <sec id="sec-5-3">
        <title>Emergent Social Simulation</title>
        <p>Inspired by systems like Bethesda’s Radiant AI [12], we aim to simulate interconnected NPC societies.
Agents will have their own goals, schedules, and dynamic inter-NPC relationships that evolve based on
world events and player actions. These interactions will form the foundation for emergent, unscripted
narratives.</p>
      </sec>
      <sec id="sec-5-4">
        <title>Standalone Game and Experimental Platform</title>
        <p>We are developing a full-length 2D RPG game centered around the Sophia architecture. This game
will feature original pixel art, a persistent world populated by autonomous NPCs, and an intelligent
city simulation. It will serve as a live testbed for evaluating long-term memory persistence, emotional
realism, and cooperative multi-agent behavior.</p>
      </sec>
      <sec id="sec-5-5">
        <title>Expanded Emotional Interaction and Audio Design</title>
        <p>Sophia’s expressive capabilities will be further enriched by integrating advanced audio features,
including contextual voice layering, dynamic emotional intonation, and sound-triggered behavioral responses.
These enhancements aim to deepen the affective resonance of interactions.</p>
      </sec>
      <sec id="sec-5-6">
        <title>Ethical Considerations and User Impact</title>
        <p>Finally, we will initiate a dedicated research thread on the ethical and psychological implications of
prolonged exposure to emotionally responsive NPCs. Key areas of inquiry include the development of
parasocial bonds, user consent in adaptive AI interactions, and the long-term impact on player cognition
and emotional states.</p>
        <p>This roadmap outlines a long-term vision where Sophia evolves into a foundational AI system
for narrative-rich games, capable of creating emotionally intelligent, socially complex, and ethically
grounded digital characters.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Related Work</title>
      <p>Sophia is positioned at the intersection of multiple research domains, including autonomous agents,
conversational AI, affective computing, procedural storytelling, and reinforcement learning.
Foundational work on affect-driven agents [13], deep reinforcement learning for NPCs [2], and emotional
modeling frameworks [7] informs the core of Sophia’s architecture.</p>
      <p>Systems such as Generative Agents [11] have demonstrated the viability of emergent behavior in
sandbox simulations through memory embedding and goal-driven planning. Similarly, narrative-driven
agents have leveraged natural language processing to enable coherent multi-turn interactions. Sophia
expands on these ideas by integrating persistent memory, voice-based emotional input, and adaptive
reinforcement learning in a playable 2D RPG environment.</p>
      <p>Unlike earlier models that rely on text input/output or scripted behavior trees, Sophia incorporates
speech-based input, affective modulation, and contextual memory retention. Techniques inspired
by GloVe and BERT embedding models are used for conversation grounding and long-term
narrative continuity, although Sophia’s current implementation uses structured representations rather
than transformer-based attention.</p>
      <p>In contrast to scripted NPCs or rule-based planners, Sophia provides real-time behavioral adjustment
via deep Q-learning, and emotional memory tracking through RNNs. These elements make it possible
to simulate NPCs with personality evolution, affective agency, and individualized histories.</p>
      <p>This integrative approach reflects a growing shift in game AI from static content toward dynamic
systems capable of sustained, believable interactions. Sophia contributes to this trend by offering a
scalable, modular framework for embedding social intelligence into interactive characters.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>The Sophia Project introduces a modular NPC architecture that unifies real-time voice interaction,
contextual memory, reinforcement learning, and affective modeling to create lifelike, emotionally
responsive agents. Unlike conventional scripted systems, Sophia mimics the nuance and plasticity of
human communication, enabling dynamic and personalized interactions in 2D RPG environments.</p>
      <p>Quantitative and qualitative evaluations support Sophia’s effectiveness in enhancing immersion,
increasing interaction frequency, and promoting narrative continuity. These outcomes, grounded in a
controlled A/B user study, highlight the value of combining reinforcement learning with emotional
memory and speech input.</p>
      <p>Sophia was validated within a custom-built farming simulation inspired by Harvest Moon and Stardew
Valley [14], demonstrating persistent character memory, branching narrative paths, and adaptive
behavior in response to emotional cues.</p>
      <p>Architecturally, Sophia supports horizontal scalability, enabling the coexistence of multiple NPCs with
distinct personalities, memories, and emotional trajectories. This allows for the creation of dynamic,
socially rich game ecosystems composed of fully autonomous agents.</p>
      <p>Looking forward, Sophia aims to eliminate cloud dependency through the integration of locally
hosted LLMs such as LLaMA or Mistral. This transition will unlock offline functionality, reduce latency,
and improve privacy—critical for deployment across indie, academic, and commercial settings.</p>
      <p>Overall, Sophia represents a meaningful step toward emotionally intelligent, adaptive game characters.
It offers a powerful platform for future research in procedural narrative, ethical AI interaction, and
emergent multi-agent storytelling.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work was supported by an undergraduate research grant funded by the Federal Institute of
Education, Science and Technology of the Triângulo Mineiro (IFTM).</p>
      <p>We extend our thanks to the IFTM for providing the structure, resources, and academic environment
that made this project possible.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration of Generative AI</title>
      <p>The authors declare that no generative AI tools were used in the preparation of this work.</p>
    </sec>
    <sec id="sec-10">
      <title>Appendix A - Participant Questionnaire</title>
      <p>The following questionnaire was administered to participants after their interaction with the Sophia
NPC system:
1. How natural did the interaction with Sophia feel?
2. Did Sophia’s responses reflect the context of your previous actions?
3. On a scale from 1 to 5, how emotionally engaging was the NPC?
4. Did Sophia remember past events or choices you made?
5. How satisfied were you with the dialogues?
6. How believable did Sophia’s behavior seem?
7. Would you prefer Sophia over traditional scripted NPCs in future games?
8. Were the NPC’s voice and audio responses pleasant and immersive?
9. Did the NPC adapt its behavior over time?
10. Open feedback: What aspects did you enjoy or dislike during your experience with Sophia?
[4] Jurafsky, D., &amp; Martin, J. H. (2023). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition (3rd ed.). Pearson.</p>
      <p>[5] Zhou, L., Prabhumoye, S., &amp; Black, A. W. (2020). Design of a Conversational Agent for Interactive Storytelling and Gaming. arXiv preprint arXiv:2007.00107.</p>
      <p>[6] Yannakakis, G. N., &amp; Togelius, J. (2021). Artificial Intelligence and Games (updated ed.). Springer.</p>
      <p>[7] Zhao, J., Chen, X., &amp; Liu, Y. (2022). Intelligent Virtual Agents: Deep Learning Approaches for Game NPCs. Entertainment Computing, 42, 100468.</p>
      <p>[8] Sutton, R. S., &amp; Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.). MIT Press.</p>
      <p>[9] Brown, T., Mann, B., Ryder, N., et al. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901.</p>
      <p>[10] Lee, S., Kim, D., &amp; Oh, Y. (2020). Deep Memory Networks for Adaptive Game Characters. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE).</p>
      <p>[11] Park, J., et al. (2023). Generative Agents: Interactive Simulacra of Human Behavior. ACM Transactions on Computer-Human Interaction, 30(2), 1–33.</p>
      <p>[12] Bethesda Softworks. (2017). Radiant AI Technology for NPC Behavior. Bethesda Game Studios Technical Documentation.</p>
      <p>[13] Li, M., Wang, H., Zhang, Y., &amp; Liu, X. (2023). A Survey on Emotion-Aware Intelligent Agents in Games. IEEE Transactions on Affective Computing, Early Access.</p>
      <p>[14] Barone, E. (2016). Stardew Valley [Video Game]. ConcernedApe. Available at: https://www.stardewvalley.net/</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Yannakakis</surname>
            ,
            <given-names>G. N.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Togelius</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <source>Artificial Intelligence and Games</source>
          . Springer. DOI:
          <volume>10</volume>
          .1007/978- 3-
          <fpage>319</fpage>
          -63519-4
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Maidl</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leitner</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ziafati</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kerschbaum</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Non-Player Character Behavior Modeling Using Deep Reinforcement Learning</article-title>
          .
          <source>In Proceedings of the 2020 IEEE Conference on Games (CoG)</source>
          (pp.
          <fpage>360</fpage>
          -
          <lpage>367</lpage>
          ).
          <source>IEEE. DOI: 10.1109/CoG47356</source>
          .
          <year>2020</year>
          .9231712
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Mnih</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kavukcuoglu</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Silver</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rusu</surname>
            ,
            <given-names>A. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Veness</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellemare</surname>
            ,
            <given-names>M. G.</given-names>
          </string-name>
          , et al. (
          <year>2015</year>
          ).
          <article-title>Human-level control through deep reinforcement learning</article-title>
          .
          <source>Nature</source>
          ,
          <volume>518</volume>
          (
          <issue>7540</issue>
          ),
          <fpage>529</fpage>
          -
          <lpage>533</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>