<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn>1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Narratives: A Data-Driven Platform for Interactive Storytelling Based on Artificial Intelligence and Knowledge Representation Techniques</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Luigi Colucci Cante</string-name>
          <email>luigi.coluccicante@unicampania.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mariangela Graziano</string-name>
          <email>mariangela.graziano@unicampania.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Beniamino di Martino</string-name>
          <email>beniamino.dimartino@unicampania.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Engineering, University of Campania “L. Vanvitelli”</institution>
          ,
          <addr-line>Aversa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science and Information Engineering, Asia University</institution>
          ,
          <addr-line>Taichung</addr-line>
          ,
          <country country="TW">Taiwan</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Department of Computer Science, University of Vienna</institution>
          ,
          <addr-line>Vienna</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>9</fpage>
      <lpage>11</lpage>
      <abstract>
        <p>This paper introduces “StoryHub”, an innovative platform for interactive storytelling based on the integration of semantic technologies, Large Language Models (LLMs) and immersive environments in augmented and virtual reality. The platform enables the transformation of narrative, historical or literary content into digital experiences that can be explored through semantic graphs, conversational avatars and three-dimensional scenes. The process is based on the automatic extraction of knowledge from texts, its validation by experts, and its modelling in structured ontologies, in accordance with the principles of interoperability, traceability and reuse of data. The user can interact with virtual historical characters in a natural and personalised way, exploring the narrative from different perspectives. The paper describes the architecture of the platform, the functional modules and the technologies adopted, and finally discusses the application potential of the system in the fields of cultural heritage, education and immersive communication.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Large Language Models</kwd>
        <kwd>Semantic</kwd>
        <kwd>Virtual Reality</kwd>
        <kwd>Augmented Reality</kwd>
        <kwd>Knowledge Graph</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction and Motivations</title>
      <p>
        In recent years, the joint evolution of technologies related to generative artificial intelligence, augmented
and virtual reality and semantic representation of knowledge has opened up new opportunities for the
creation and enjoyment of interactive digital content [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In particular, the structured extraction of
knowledge from textual sources and its subsequent integration into virtual environments are redefining
the way information is made accessible, navigable and understandable by end users [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This is the
backdrop for the “StoryHub” project, a prototype platform developed within the RASTA (Augmented
Reality and Automated StoryTelling) project, which aims to transform narrative texts (literary, historical
or educational) into enriched digital experiences based on knowledge graphs, augmented and virtual
reality, and Large Language Models (LLMs). Starting from textual content, the system automatically
builds a semantic network of characters, events and relationships, which can be explored through a
visual interface or enjoyed in conversational form via a 3D avatar in augmented or virtual reality. Thus,
by means of special graphic interfaces, the user has the possibility of graphically exploring the narrative
structure, favouring the analysis of the connections between the protagonists, the understanding of
the context and the consultation of automatically generated narrative profiles. The project aims to
combine the expressive power of narrative with the potential of data science technologies, proposing a
new way of accessing knowledge through transparent, interoperable and reusable data structures. The
dialogic component, fed by LLM and guided by a knowledge base automatically generated from the text,
then validated by human experts, opens the way to new forms of interaction with the content, while
raising relevant questions about ethics, traceability of sources and reliability of automatic answers. The
dialogue component based on semantic retrieval (Semantic-RAG) allows for personalised and verifiable
interaction, which is a step forward compared to generalist generative models not based on controlled
knowledge bases. The entire process is driven by a focus on data governance: the generated graphs are
persistent, versionable, interoperable and reusable in other application contexts (museums, publishing,
education), consistent with the principles of data spaces and data reusability. Furthermore, the system
is designed to be explainable, allowing the provenance of information (data lineage) to be traced and
the output generated by LLM to be supervised, in line with the requirements of reliability, transparency
and ethics in the use of Big Data [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>This platform represents a concrete example of Big Data and Data Science Application, in which
textual knowledge is transformed into structured data and then reused in advanced computational
environments, serving human-computer interaction and cultural fruition. Furthermore, the use of
LLM for the generation of personalised and dialogic content raises important considerations about
the control and verification of automatic information, which the paper discusses with reference to the
principles of the Ethics of Big Data and Trustworthy AI. Through the analysis of this use case, the paper
intends to offer an interdisciplinary reflection on the methods and practices for the representation,
governance and ethical use of narrative data in augmented environments, proposing “StoryHub” as an
adaptable framework for applications in education, culture and industry, with a particular focus on
cultural heritage enhancement and immersive communication.</p>
      <p>The paper is structured as follows: Section 2 analyses the main related works, with a focus on
automated storytelling technologies, semantic representation and immersive applications. Section 3
describes in detail the methodology adopted to build the “StoryHub” platform, highlighting the roles
involved and the process of transforming narrative content into structured knowledge. Section 4 delves
into the implementation aspects, illustrating the technologies used and the ontology developed for
the historical domain. Section 5 provides an overview of the usability modes made available by the
platform, including graph visualisation, interactive storytelling, and augmented and virtual Web reality
scenarios. Finally, Section 6 presents the conclusions of the work and proposes directions for future
developments.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        Recent studies have focused on the integration of big data, knowledge graphs and immersive technologies
to develop new forms of automatic, interactive and personalized storytelling. In [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], the authors propose
an innovative framework for Cultural Heritage based on a Multimedia Knowledge Graph (MKG).
The system exploits deep learning techniques and Linked Open Data to segment images and identify
relevant elements in multimedia contents automatically collected from the web. The goal is to generate
coherent and semantically rich stories that guide the user through a dynamic and intuitive interface.
In a complementary manner, the Open Story Model (OSM) introduced in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] addresses the challenge
of understanding heterogeneous big data through a pipeline that transforms them into structured
knowledge graphs, from which personalized narratives are then derived. The generated stories are
not only descriptive, but take the form of interactive data products, which allow stakeholders to
explore and analyze in real time information relevant to their interests. Another relevant contribution
comes from the work of Gatt and Reiter [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which explores the transition from structured data and
knowledge graphs to natural language texts. An in-depth overview of Natural Language Generation
(NLG) techniques is provided, highlighting the challenges in generating cohesive narratives from
nonlinguistic data. The tutorial discusses both traditional approaches and modern solutions based on neural
architectures, with an emphasis on adaptability and scalability in domains without parallelized corpora.
An interesting perspective on the use of knowledge graphs in narrative generation is presented in [7],
where sophisticated semantic relations, such as intentional or enabling causalities, are proposed beyond
the classic “who, what, where, when” relations. The adoption of such semantic structures enriches the
generated narratives, improving their grammatical coherence and semantic precision. The work [8]
explores the use of virtual reality in immersive storytelling as a form of affective ethnography. Through
the case of Traveling While Black, the authors show how VR can facilitate an empathetic and non-intrusive
experience, creating a sensory and reflective relationship between viewer and narrated subject. The work
[9] analyzes immersive storytelling in mixed reality (VR/AR) environments, investigating how
emerging technologies can shape future identities, especially in spatial contexts. Through installations,
interactive books and performances with humans, robots and avatars, the project combines art and
science to create sensory experiences that stimulate reflections on technology, the body and society.
The approach is interdisciplinary and oriented towards experimentation with young audiences. On
the immersive visualization of big data, the work [10] highlights the limits of traditional techniques in
representing high-dimensional data, proposing instead the use of AR/VR technologies to improve the
perception of information structures. Interaction in augmented reality environments allows for greater
immediacy in understanding and cognitive retention of the relationships between data. Similarly, [11]
proposes virtual reality platforms for collaborative data visualization. Such immersive environments
offer tangible benefits in terms of geometric perception and intuitive understanding of datascapes,
also favoring shared exploration between users in a common visual space. Finally, the article [12]
analyzes the impact of virtual reality on digital media in the era of big data and artificial intelligence.
The integration of VR in areas such as tourism, audiovisual production and urban planning shows how
technology can improve the communicative effectiveness of content and facilitate a more direct and
engaging transmission of information. In summary, a convergence between big data, knowledge graphs
and immersive technologies emerges, which makes the development of interactive, multimodal and
personalized narratives possible. The use of conversational avatars and augmented reality opens up
promising scenarios for a participatory fruition of knowledge, capable of connecting complex data with
rich, intuitive and engaging user experiences.
      </p>
      <p>Although the literature reviewed outlines an increasing convergence between generative AI, immersive
reality and semantic web technologies, “StoryHub” stands out for its systemic approach that integrates
these fields into a single operational workflow, replicable and validated on a specific cultural-historical
domain. While some works deal with individual components, the platform proposed in this article
coherently unifies semantic modelling, dialogic generation based on a structured knowledge base, and
immersive content fruition with end users.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>The methodology adopted for “StoryHub” involves the following roles and modules:</p>
      <p>• Domain Experts: have the initial task of analysing historical sources (archives, books, letters,
etc.) and reporting the knowledge in a “historical narratives” database, containing the narratives in
natural language form. In addition, all the main storytelling elements inherent to the narrative
domain (locations, scene objects, events and actors) are to be reported in a domain ontology.
• Script Writers: have the task of modelling the storytelling using a special story composition
tool, or using Natural Language Processing and/or generative techniques to automatically extract
storytelling elements from the natural language text. To ensure storytelling consistency, in the
second case it is necessary for human domain experts to validate the automatically extracted
content, using the same story composition tool to support validation. In both cases, the produced
storytelling is automatically converted into a semantic representation, for details of which see
article [13]. Thus, the script writers’ task ends with the populating of the semantic database
“Populated Storytelling Ontology”.
• Human Annotators: have the task of semantically annotating historical archive sources (texts
or digital scans) with the storytelling elements defined in the domain ontologies. As the same
domain ontologies are used for both storytelling modelling and semantic annotation, an indirect
semantic link is created between the storytelling and the archive sources. Thus, greater historical
coherence is provided to the narratives. The tool used is a semantic annotator of texts and images,
for details of which see articles [14] and [15]. The annotations produced with the tool are stored in
OWL (Web Ontology Language) format in the ontology “Semantic Annotated Historical Archives”.
The merge of the three semantic databases “historical narratives”, “Populated Storytelling Ontology”
and “Semantic Annotated Historical Archives” builds the knowledge base of “StoryHub”.
• Inference Engine: is the processing heart of the system. It uses the structured knowledge
contained in the knowledge base to create the assets to be used for fruition, including 3D avatars
and scene representations. In particular, the task of this module is to automatically construct
prompts consistent with the structured knowledge validated by experts, for the generation of
assets through generative models. Another task of this module is the generation of
answers to user queries posed to characters of stories, or of dialogues between characters: thanks
to the semantic structuring of knowledge, in order to answer a user query the LLM
performs a semantic retrieval of content within the knowledge base, so as to guarantee answers
consistent with the historical content and to avoid fake news.
• Final User: can take advantage of the functionalities provided by “StoryHub” through special
user scenarios. The assets generated by the inference engine are used to provide the user with an
immersive user experience consistent with the narrated facts. In particular, “StoryHub” provides
three different fruition scenarios: augmented reality, virtual reality and virtual web reality.
In addition, the tool offers the possibility to enjoy storytelling through three points of view:
Navigable Characters Knowledge Graph View, Storytelling View and Interactive Conversational
Avatars View.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Implementation</title>
      <p>In this Section, some details concerning the implementation of the methodology presented in Section 3
are provided.</p>
      <p>Semantic technologies were used for storytelling modelling. In particular, both the storytelling and
domain ontologies used for the composition of stories are realised in OWL (Web Ontology Language).
For the RASTA project, HistoricO [16] was created: an ad-hoc ontology covering the cultural-historical
domain of eighteenth-century Italy, containing knowledge about the social, political and cultural
classes of the time, characters from the noble hierarchy (kings, queens, princes, etc.), politics (ministers,
councillors, etc.) and civil society (merchants, professionals), historical events, historical artefacts (documents,
books and letters inherent to the historical period), locations, etc. An example of storytelling modelled
in OWL is illustrated in Figure 2.</p>
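      <p>To give a flavour of this kind of modelling, a fragment in Turtle notation might look like the sketch below. The namespace, class and property names here are invented for illustration and are not the actual HistoricO vocabulary.</p>
```turtle
@prefix hist: <http://example.org/historico#> .   # illustrative namespace
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

hist:CarloIIIdiBorbone
    rdf:type        hist:King ;          # class from the noble hierarchy
    hist:memberOf   hist:BourbonDynasty ;
    hist:ruled      hist:KingdomOfNaples ;
    hist:appearsIn  hist:Scene_PresentationCasertaDesigns .

hist:Scene_PresentationCasertaDesigns
    rdf:type          hist:Scene ;
    hist:takesPlaceAt hist:PalaceOfPortici ;
    hist:involves     hist:LuigiVanvitelli .
```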
    </sec>
    <sec id="sec-5">
      <title>5. “StoryHub” Overview</title>
      <p>The storytelling ontology populated with the stories modelled by script writers was used as the
knowledge base for the “StoryHub” utilisation tool, described in Deliverable. The tool provides three
modes of fruition:
• Navigable Characters Knowledge Graph View: it is possible to navigate a knowledge graph
whose nodes represent all the characters present in the semantic knowledge base. Various filters
can be applied to the graph and all the information associated with each character can be displayed.
In addition, a conversational agent can be activated to dialogue with each of the characters in the
graph.
• Storytelling View: the structure of one or more stories modelled in the knowledge base is
displayed. A story is displayed as a sequence of scenes, each of which is graphically represented
with a cube. It is possible to view all the information in each scene, displaying all the props used,
the actions taken in the scene and the characters participating in each action. In addition, it
is also possible to ‘bring the scene to life’ by activating a conversational agent that generates
dialogues between two or more characters involved in the scene.
• Interactive Conversational Avatars View: through this view, each scene can be visualised
in 3D through avatars that can converse in real time with each other and with an external user
about the context of the scene in question.</p>
      <sec id="sec-5-1">
        <p>More details on the use of the tool are provided in the following subsections.</p>
        <sec id="sec-5-1-1">
          <title>5.1. Navigable Characters Knowledge Graph View</title>
          <p>As can be seen from Figure 3, each node in the graph represents a character modelled in the
knowledge base, while the edges represent one of the possible relationships between the characters. Four
relationships are used:
• Marriage Relationship: corresponds to one of the possible marriage relationships in the domain
ontology.
• Sibling Relationship: corresponds to one of the possible family relationships indicating that a
character is the brother or sister of another character.
• Parent Relationship: corresponds to a father-son relationship.
• Storytelling Relationship: corresponds to a collaboration between two characters in at least
one action of a story. This relationship testifies that the two characters know each other and
have interacted at least once in their lives.</p>
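          <p>A minimal sketch of how the graph view could derive its edges from knowledge-base triples is shown below. The predicate names are hypothetical stand-ins, not the actual ontology vocabulary; the point is only the mapping from supported predicates to the four relationship types listed above.</p>
```python
# Illustrative sketch: map each supported knowledge-base predicate to one
# of the four relationship types, dropping triples that are not
# character-to-character relationships. Predicate names are hypothetical.

RELATION_TYPES = {
    "marriedTo": "Marriage Relationship",
    "siblingOf": "Sibling Relationship",
    "parentOf": "Parent Relationship",
    "collaboratesIn": "Storytelling Relationship",
}

def graph_edges(triples):
    """Keep only triples whose predicate is one of the four relationships,
    labelling each edge with its display name."""
    return [
        (s, RELATION_TYPES[p], o)
        for (s, p, o) in triples
        if p in RELATION_TYPES
    ]

triples = [
    ("CarloIII", "marriedTo", "MariaAmalia"),
    ("CarloIII", "ruled", "KingdomOfNaples"),   # not an edge in this view
    ("Vanvitelli", "collaboratesIn", "CarloIII"),
]
edges = graph_edges(triples)
```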
          <p>Various filters can be applied to the graph; for example, the filter “Characters that satisfy constraints”
is available, which allows one to select all and only those domain instances that satisfy one or more
restrictions on the object properties of the semantic knowledge base by means of an automatically
generated SPARQL query based on the parameters selected graphically by the user. An example is
shown in Figure 4, where the graph has been restricted to all and only those individuals who were
sovereigns of the “Kingdom of Naples” and whose dynasty is “Bourbon”.</p>
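          <p>The translation from graphically selected constraints to a SPARQL query could be sketched as follows. The namespace and the property and individual names (sovereignOf, hasDynasty, KingdomOfNaples) are illustrative assumptions, not the actual HistoricO identifiers.</p>
```python
# Hedged sketch of the "Characters that satisfy constraints" filter: the
# (property, value) pairs chosen in the UI are turned into SPARQL triple
# patterns over the semantic knowledge base. All names are illustrative.

def build_constraint_query(constraints, prefix="hist"):
    """Build a SELECT query returning every individual that satisfies all
    of the selected (property, value) restrictions."""
    patterns = "\n  ".join(
        f"?character {prefix}:{prop} {prefix}:{value} ."
        for prop, value in constraints
    )
    return (
        f"PREFIX {prefix}: <http://example.org/historico#>\n"
        "SELECT DISTINCT ?character WHERE {\n"
        f"  {patterns}\n"
        "}"
    )

# Example mirroring Figure 4: Bourbon sovereigns of the Kingdom of Naples.
query = build_constraint_query([
    ("sovereignOf", "KingdomOfNaples"),
    ("hasDynasty", "Bourbon"),
])
```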
          <p>Clicking on a ‘character’ node displays an information sheet containing all the information related to
that character, which is extracted from the semantic knowledge base by means of a series of SPARQL
queries. Information such as the date and place of birth and death, the class or classes to which the
domain instance belongs, the description, any relationships with other characters and the list of stories
in which the character appears are shown. It is also possible to activate a conversational agent to
converse with the character via a special button on the information sheet. It is possible to converse with
a character either by means of a special chat or verbally through a text to speech and speech to text
module. Figure 5 shows an example of a conversation with the character “Giuseppe II Asburgo Lorena”.</p>
          <p>As can be seen from Figure 5, Emperor Giuseppe II of Habsburg was asked via chat if he knew the
character “Luigi Serio”. The conversational agent, in order to provide the answers, uses an LLM on
which a semantic RAG was performed. Thus, the knowledge used to provide the answers is acquired
from the stories modelled in the semantic base, in such a way as to avoid the generation of fake news
and historical inconsistencies. Indeed, through the chat, for each answer generated by the LLM, it
is possible to visualise which sources were used; in the example shown in Figure 5, knowledge was
acquired from two scenes from the story “Giuseppe II at Persano”.</p>
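          <p>The retrieval-with-sources behaviour described above can be sketched in a few lines. This is a deliberately simplified stand-in: the word-overlap scoring replaces the platform's semantic retrieval, and all scene identifiers are illustrative. The point is that the retrieved scene identifiers travel with the context, so they can be shown next to each LLM answer.</p>
```python
# Simplified sketch of the semantic-RAG step behind the chat: rank scenes
# by relevance to the question, keep the top hits as the only allowed LLM
# context, and retain their identifiers so sources can be displayed.
import re

def _words(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve_scenes(question, scenes, top_k=2):
    """Rank scenes by word overlap with the question (a stand-in for
    semantic retrieval); return the top_k (scene_id, text) pairs."""
    q = _words(question)
    scored = sorted(
        scenes.items(),
        key=lambda kv: len(q & _words(kv[1])),
        reverse=True,
    )
    return scored[:top_k]

scenes = {
    "Giuseppe II at Persano / scene 2": "Giuseppe II meets Luigi Serio at Persano.",
    "Quisisana / scene 1": "Carlo II di Angiò founds the palace of Quisisana.",
}
hits = retrieve_scenes("Do you know Luigi Serio?", scenes, top_k=1)
sources = [scene_id for scene_id, _ in hits]  # shown to the user in the chat
```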
        </sec>
        <sec id="sec-5-1-2">
          <title>5.2. Storytelling View</title>
          <p>The Storytelling View allows the visualisation of the storytelling modelled in the semantic knowledge base.
Figure 6 shows an example from story 6, which tells of the evolution of the Royal Palace of
Quisisana, in Castellammare (Italy), from its foundation under “Carlo II di Angiò” to its modern restoration.</p>
          <p>As can be seen from Figure 6, scenes are modelled with cubes placed in sequence between them.
Once a scene has been chosen, it is also possible to display the actions performed in it, the characters
involved and the scene objects used. With a click on a given node, an information panel can be opened
with all the information available in the knowledge base; Figure 7 shows an example of a document-type
prop representation, namely a letter from Vanvitelli to his brother Urbano in 1751.</p>
          <p>Thanks to the semantic annotation work carried out earlier, a semantic layer was added to relate the
props modelled in the stories to the historical sources. This step ensures a higher level of consistency in
the generation of dialogues with the characters in the stories. By clicking on a node of type ‘Scene’, it is
possible to display all the information related to it, and it is also possible to activate a conversational
agent to ‘execute’ the scene, using an LLM to generate dialogues between two or more characters
involved in the scene, again using semantic RAG techniques to avoid fake news. Figure 8 shows an
example of dialogues generated between the characters “Luigi Vanvitelli”, “Carlo III di Borbone” and
“Maria Amalia di Sassonia”, during the second presentation of the designs of the royal palace of Caserta,
held at the Palace of Portici, at the time the residence of the sovereigns.</p>
          <p>In addition, the tool provides the user with the possibility to intrude at any point in the conversation
in real time, in such a way as to actively participate in the narrative, asking questions or even making
suggestions to the characters.</p>
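          <p>The scene-execution loop with user interjections could be sketched as below. Everything here is a hypothetical stand-in: the function names are invented, and a stub generator replaces the grounded LLM call; the sketch only shows the turn-taking structure with a user message inserted mid-conversation.</p>
```python
# Hedged sketch of the 'bring the scene to life' loop: characters speak in
# round-robin turns, each utterance produced by a (stubbed) grounded LLM
# call, and the user may interject at any turn.

def run_scene_dialogue(characters, generate_line, turns=4, interjections=None):
    """Round-robin dialogue; `generate_line(speaker, history)` stands in
    for the grounded LLM call, `interjections` maps turn index -> user
    message inserted before that turn."""
    interjections = interjections or {}
    history = []
    for turn in range(turns):
        if turn in interjections:            # the user joins the scene
            history.append(("User", interjections[turn]))
        speaker = characters[turn % len(characters)]
        history.append((speaker, generate_line(speaker, history)))
    return history

def stub_llm(speaker, history):
    # Stub instead of a real retrieval-grounded LLM call.
    return f"{speaker} replies to utterance {len(history)}."

log = run_scene_dialogue(
    ["Luigi Vanvitelli", "Carlo III di Borbone"],
    stub_llm,
    turns=2,
    interjections={1: "What do you think of the design?"},
)
```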
        </sec>
        <sec id="sec-5-1-3">
          <title>5.3. Interactive Conversational Avatars View</title>
          <p>From each specific scene of the Storytelling View it is possible to activate this visualisation, which consists
of a Virtual 3D Mode reproduction of the scene: the scene comes to life, showing all the characters
involved in it within the location. The characters interact with each other through dialogues generated
by the LLM, but here too the user can at any time insert himself into the conversation, making
general remarks or asking a specific question to one or more characters in the scene. The conversation that is
generated is not a static asset but a dynamic conversation generated in real time, combining the
generative linguistic capacity of LLMs with coherent and reliable content derived from the
knowledge base, which contains context, scene and storytelling elements. An example of a reproduction of
the “Presentation of the designs of the royal palace of Caserta” scene is shown in Figure 9.</p>
        </sec>
        <sec id="sec-5-1-4">
          <title>5.4. Augmented Reality Scenario</title>
          <p>The functionality offered by “StoryHub” can also be used through an Augmented Reality scenario. By
framing the surrounding area with the camera of their personal device (smartphone, tablet, etc.), the user
will see the 3D avatar of the historical character appear. The user has the possibility of manipulating
the avatar by means of gestures: for example, he can resize it, rotate it, turn it upside down and even
attach it to a flat surface such as a table or the floor. Once the character has been arranged, the user
has the possibility of initiating a conversation with it: by clicking three times consecutively on the
avatar, audio acquisition is activated, and another three clicks stop the acquisition. The acquired audio
is encoded into text by a “Speech to Text” module, and the text is used as the input prompt of an LLM
(Large Language Model), whose instructions specify that it must respond by simulating the character’s
historical knowledge and dialectical ability. Gemini 1.5 was used as the LLM. The response returned
by the LLM is encoded into audio via a special “Text to Speech” module. Figure 10 shows the historical
character Pliny the Elder, with whom one can interact.</p>
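          <p>The triple-click gesture can be sketched as a small state machine, as below. This is an assumed reconstruction of the behaviour described above (the class name and API are invented); the recorded audio would then flow through the Speech-to-Text, LLM and Text-to-Speech modules.</p>
```python
# Sketch of the triple-click gesture: three consecutive taps on the avatar
# toggle audio capture on, and three more toggle it off. Names are
# hypothetical, not the actual implementation.

class TripleClickRecorder:
    def __init__(self, clicks_to_toggle=3):
        self.clicks_to_toggle = clicks_to_toggle
        self.pending_clicks = 0
        self.recording = False

    def click(self):
        """Register one tap; toggle recording after every third tap."""
        self.pending_clicks += 1
        if self.pending_clicks == self.clicks_to_toggle:
            self.pending_clicks = 0
            self.recording = not self.recording
        return self.recording

rec = TripleClickRecorder()
states = [rec.click() for _ in range(6)]
# after 3 clicks recording starts; after 6 it stops again
```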
        </sec>
        <sec id="sec-5-1-5">
          <title>5.5. Preliminary evaluation</title>
          <p>For a preliminary evaluation, a pilot experimental protocol was set up to collect measurable metrics
and structured feedback. Table 1 shows the indicators that were considered for an initial quantitative
evaluation. During the exploratory pilot, 15 non-expert users (high school and university students) and
10 experts (historians) were involved.</p>
          <p>In addition, qualitative insights were gathered through semi-structured interviews, which revealed:
• Perceived educational value: users and historians identified consistency of sources as a
distinguishing feature.
• Graph navigation: intuitive for both experts and general users.
• Immersion: AR/VR mode was perceived as immersive (average presence score 5.8 out of 7),
with positive feedback on realism and clarity of interactions.</p>
          <p>These preliminary results indicate a good balance between usability, consistency and immersion. In the
next phases we plan to conduct a study with a larger sample (≥ 50 users) with a group variance analysis,
and we will perform quantitative analysis with metrics such as task completion rate, retention/return
rate (AR/VR) and clickstream analysis.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions and Future Works</title>
      <p>StoryHub demonstrated how advanced generative artificial intelligence technologies, semantic
knowledge representation and augmented/virtual reality can be integrated to transform narrative content
into interactive and immersive digital experiences. The proposed approach creates a bridge between
historical fiction and new forms of digital enjoyment, allowing users not only to explore stories and
characters through semantic graphs and conversational avatars, but also to interact with them in real
time, in a coherent and historically grounded context. The system is based on principles of transparency,
interoperability and reusability of data, proposing a model that can also be replicated in other application
areas, such as museums, publishing, education and cultural tourism. The synergy between automatic
extraction, human validation and conversational generation makes it possible to maintain a balance
between automation and content accuracy, while respecting the criteria of reliability and traceability
inherent to the ethics of Big Data and Trustworthy Artificial Intelligence. Future developments of the project
will focus on several directions. First, it is planned to extend the HistoricO ontology to other cultural
and temporal contexts, in order to enrich the knowledge base and increase the variety of accessible
narratives. Secondly, customisation techniques of the narrative experience based on the user profile
will be explored to make the interaction even more engaging and adaptive. Finally, the explainability
and auditing capabilities of LLM responses will be enhanced by integrating feedback and verification
modules that actively involve the user in the information validation process. It is important to emphasise
that the modular and data-driven structure of the platform was conceived in order to be able to evolve
towards contexts in which the narrative and interaction do not only concern historical content, but
also relevant phenomena in the technical-scientific field. It would be interesting to experiment in the
future with the methodology underlying the proposed platform by applying it to contexts other than the
enhancement of the historical-cultural heritage, for example purely technical fields such as the monitoring
and detection of cloud patterns.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The work described in this paper has been supported by the research projects RASTA: Realtà Aumentata
e Story-Telling Automatizzato per la valorizzazione di Beni Culturali ed Itinerari; Italian MUR PON Proj.
ARS01 00540 and AI-PATTERNS, Grantee of FAIR - Future Artificial Intelligence Research (PE00000013)
under the Italian NRRP MUR program funded by the EU - NGEU.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <sec id="sec-8-1">
        <p>The author(s) have not employed any Generative AI tools.</p>
        <p>[7] M. de Kok, Y. Rebboud, P. Lisena, R. Troncy, I. Tiddi, From nodes to narratives: A knowledge
graph-based storytelling approach, in: TEXT2STORY 2024, 7th International Workshop on Narrative
Extraction from Texts (Text2Story), co-located with ECIR 2024, 2024.
[8] M. Ceuterick, C. I. and, Immersive storytelling and affective ethnography in virtual reality, Review
of Communication 21 (2021) 9–22. URL: https://doi.org/10.1080/15358593.2021.1881610.
doi:10.1080/15358593.2021.1881610.
[9] D. Doyle, Immersive storytelling in mixed reality environments, in: 2017 23rd International
Conference on Virtual System &amp; Multimedia (VSMM), IEEE, 2017, pp. 1–4.
[10] E. Olshannikova, A. Ometov, Y. Koucheryavy, T. Olsson, Visualizing big data with augmented and
virtual reality: challenges and research agenda, Journal of Big Data 2 (2015) 1–27.
[11] C. Donalek, S. G. Djorgovski, A. Cioc, A. Wang, J. Zhang, E. Lawler, S. Yeh, A. Mahabal, M. Graham,
A. Drake, et al., Immersive and collaborative data visualization using virtual reality platforms, in:
2014 IEEE International conference on big data (big data), IEEE, 2014, pp. 609–614.
[12] W. Zhao, S. Zhang, X. Li, Impact of virtual reality technology on digital media in the context of
big data and artificial intelligence, Journal of Computational Methods in Science and Engineering
23 (2023) 605–615.
[13] L. Colucci Cante, B. Di Martino, M. Graziano, A comparative analysis of formal storytelling
representation models, in: L. Barolli (Ed.), Complex, Intelligent and Software Intensive Systems,
Springer Nature Switzerland, Cham, 2023, pp. 327–336.
[14] B. Di Martino, A. Amato, D. Branco, L. Colucci Cante, M. Graziano, S. Venticinque, Towards
a semantic annotation software design for images and texts, in: L. Barolli (Ed.), Complex, Intelligent
and Software Intensive Systems, Springer Nature Switzerland, Cham, 2024, pp. 413–422.
[15] L. Colucci Cante, S. D’Angelo, B. Di Martino, M. Graziano, Text annotation tools: A comprehensive
review and comparative analysis, in: L. Barolli (Ed.), Complex, Intelligent and Software Intensive
Systems, Springer Nature Switzerland, Cham, 2024, pp. 353–362.
[16] A. Amato, M. Graziano, L. C. Cante, B. Di Martino, A. Di Falco, Historico: An ontology for modeling
social classes, power positions, institutions, events and rituals in historical narratives, in: L. Barolli
(Ed.), Advanced Information Networking and Applications, Springer Nature Switzerland, Cham,
2025, pp. 177–186.
[17] M. Graziano, L. Colucci Cante, B. Di Martino, Deploying large language model on cloud-edge
architectures: A case study for conversational historical characters, in: L. Barolli (Ed.), Advanced
Information Networking and Applications, Springer Nature Switzerland, Cham, 2025, pp. 196–205.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Colucci Cante</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Di Martino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Graziano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Branco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. J.</given-names>
            <surname>Pezzullo</surname>
          </string-name>
          ,
          <article-title>Automated storytelling technologies for cultural heritage</article-title>
          , in: L.
          <string-name>
            <surname>Barolli</surname>
          </string-name>
          (Ed.),
          <source>Advances in Internet, Data &amp; Web Technologies</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>597</fpage>
          -
          <lpage>606</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Colucci Cante</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Graziano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Di Martino</surname>
          </string-name>
          ,
          <article-title>Smart cities: Integrating IoT and cloud computing for smart urban applications</article-title>
          , in: L.
          <string-name>
            <surname>Barolli</surname>
          </string-name>
          (Ed.),
          <source>Advanced Information Networking and Applications</source>
          , Springer Nature Switzerland, Cham,
          <year>2025</year>
          , pp.
          <fpage>186</fpage>
          -
          <lpage>195</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Graziano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Di Martino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Colucci Cante</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Esposito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lupi</surname>
          </string-name>
          ,
          <article-title>Towards a methodology for comparing legal texts based on semantic, storytelling and natural language processing</article-title>
          , in: L.
          <string-name>
            <surname>Barolli</surname>
          </string-name>
          (Ed.),
          <source>Complex, Intelligent and Software Intensive Systems</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>343</fpage>
          -
          <lpage>352</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Renzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Rinaldi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tommasino</surname>
          </string-name>
          ,
          <article-title>A storytelling framework based on multimedia knowledge graph using linked open data and deep neural networks</article-title>
          ,
          <source>Multimedia Tools and Applications</source>
          <volume>82</volume>
          (
          <year>2023</year>
          )
          <fpage>31625</fpage>
          -
          <lpage>31639</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Lotfi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Beheshti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jamzad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Beigy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Y.</given-names>
            <surname>Philip</surname>
          </string-name>
          ,
          <article-title>The open story model (OSM): Transforming big data into interactive narratives</article-title>
          , in:
          <source>2024 IEEE International Conference on Web Services (ICWS)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>1177</fpage>
          -
          <lpage>1187</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Laha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sankaranarayanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Krishnan</surname>
          </string-name>
          ,
          <article-title>Storytelling from structured data and knowledge graphs: An NLG perspective</article-title>
          , in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics:
          <source>Tutorial Abstracts</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>43</fpage>
          -
          <lpage>48</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>