MArTA: A Virtual Guide for the National
Archaeological Museum of Taranto
Berardina De Carolis1 , Nicola Macchiarulo1 and Claudio Valenziano1
1 Department of Computer Science, University of Bari, Bari, Italy


Abstract
The last two years have been difficult for museums and cultural sites, since the Covid-19 pandemic
changed and limited people’s cultural experiences. This paper explores the impact of the virtual
assistant metaphor in guiding people visiting a virtual museum and in improving access to information
about the artworks of the National Archaeological Museum of Taranto. We present the cultural context,
the application and the visitors’ feedback. To this aim, we created two virtual agents: one welcomes
people at the reception desk of the museum and explains the visiting experience, while the other acts
as a guide in a room with artworks and interacts with users in natural language. A preliminary
evaluation of visitor acceptance of the system was conducted; its results show, in general, good
acceptance, especially as far as the user experience is concerned, while some problems emerged from
the usability point of view, control in particular. We then evaluated the interaction with the two
virtual assistants. Results show that all users were satisfied with the interaction with both assistants,
even though some speech-understanding errors occurred. Participants particularly appreciated the
possibility of asking the virtual guide questions about the artworks. These results encourage us to
pursue the idea of integrating VR environments with AI-based virtual assistants.

Keywords
Virtual museum guide, virtual reality, user experience




1. Introduction
The last two years, due to the Covid-19 pandemic, have been difficult for museums and cultural
sites. To support remote visits and cultural experiences, many museums turned to digital technologies,
such as virtual reality, to let people visit places remotely and immersively. Recently, with the spread
of the concept of metaverse [1], defined as an expansive virtual space where users can interact with
3D digital objects and 3D virtual avatars of other people in a way similar to the real world, its core
technologies, such as virtual and augmented reality, are becoming accepted by the general public and
are used daily in personal and professional settings. In parallel, in recent years there has been
significant advancement in the field of Artificial Intelligence (AI) for driving the behavior of personal
virtual assistants. A domain that could take advantage of linking AI with VR is the cultural one: a
virtual museum in which, as in the real one, people can have an interactive guided visiting experience.
Moreover, the finite number of questions and actions possible in this domain makes feasible the
development of a successful virtual assistant that engages the user through spoken natural language
interaction.

AVI-CH 2022 Workshop on Advanced Visual Interfaces and Interactions in Cultural Heritage. June 06, 2022. Rome, Italy
berardina.decarolis@uniba.it (B. De Carolis); nicola.macchiarulo@uniba.it (N. Macchiarulo);
claudiovalenziano99@gmail.com (C. Valenziano)
                                       © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
   This research explores the impact of the virtual assistant metaphor in guiding people visiting the
virtual representation of a museum and in improving access to information about the artworks of the
National Archaeological Museum of Taranto (MArTA) [2]. We created two virtual agents: Sam, who
welcomes people at the reception desk and explains the visiting experience, and Marta, named after
the acronym of the museum, who acts as a guide in a room with artworks and interacts with users in
spoken natural language. In this preliminary version of the system, we focused our effort on
understanding the impact of the approach from the HCI point of view, since we believe it is essential
to provide a positive user experience to visitors. To this aim, we focused on one of the rooms of the
museum, the ”Sala Ricciardi”, which contains a collection of paintings depicting sacred subjects. At
this stage of the project, Marta has been designed to guide the user in a predefined tour of the
paintings present in the room while answering, on request, questions about a painting. We conducted
a preliminary evaluation study aiming at: a) assessing the usability and user experience of the VR
application; b) evaluating in particular the interaction with the two virtual assistants. Results show
that, while the aspects related to the experience are positive, the scores related to efficiency and
control of the interaction with the assistants, even if acceptable, indicate some problems.


2. Background and Motivations
The application of VR to museums adds a new dimension to the visitors’ experience, and many
museums offer VR guided tours. During the lockdowns caused by the Covid-19 pandemic, the use of
technologies enabling remote visits became a requirement to allow visitors to continue enjoying art [3].
In this period, museums developed virtual experiences for different platforms (e.g. smartphones, Web
3D, VR) to make museum content more accessible and attractive to visitors. These technologies not
only enable intuitive interactions and provide entertaining and learning experiences, but also give
visitors a sense of immersion.
   There are several examples of museums that enriched their exhibitions with VR experiences. For
instance, in 2019 the Louvre in Paris launched ‘Mona Lisa: Beyond the Glass’, a VR experience that
allows users to interactively discover details about the painting. The experience can be enjoyed not
only at the Louvre but also through VR app stores and on iOS and Android. The Smithsonian National
Museum of Natural History also offers several virtual museum tours on the web: it is possible to
browse its permanent, current or past exhibitions as virtual journeys through time and history.
New York’s Metropolitan Museum of Art, Florence’s Uffizi Gallery and the Van Gogh Museum, thanks
to a successful partnership with Google Arts & Culture, offer different 3D virtual tours of their
collections.
   Recently, with the introduction of the concept of metaverse, VR appears to be headed for much
wider use. Extending the museum experience into the metaverse therefore seems appropriate. In this
context, to provide an experience resembling a real museum visit, it seems promising to integrate the
VR representation of the museum with AI technologies. Avatars of people visiting the museum and
virtual assistants, i.e. 3D versions of chatbots [4] similar to AI-enabled non-playing characters in a
video game, could inhabit the metaverse and provide an engaging experience. Using AI solutions could
also overcome language-related accessibility problems, since Natural Language Understanding (NLU)
and Natural Language Generation (NLG) results can be converted into any language, depending on
the AI’s training, so that users from around the world can access the museum. Moreover, since the
interaction can be based on natural language, the use of an AI-powered virtual assistant can improve
human-computer interaction (HCI) and the overall experience.
   The idea of implementing such a system is not new. Several studies have already discussed the
challenges posed by the development of an intelligent museum guiding system [5]. With the evolution
of technology, a good level of conversational intelligence has been achieved in agents acting as
museum guides [6]. In recent years, many researchers have concluded that virtual agents could improve
interaction by making communication as natural and human-like as possible [7]. An example of an
immersive virtual experience in which AI has been used to provide a conversational interactive
experience to visitors is RaphaelloVR, an immersive 3D project in which, using a VR headset, it is
possible to listen to stories about the artworks told directly by the protagonists acting as narrators
(https://www.skylabstudios.it/museovr/).
   Many chatbots have been developed to guide people in a museum and provide information about
artworks [8]. One example is the chatbot of MAXXI – ”Museo delle Arti del XXI secolo” in Rome, a
digital interactive guide on Facebook Messenger that allows visitors to follow thematic paths and ask
questions (https://www.eng.it/en/case-studies/chatbot-museo-maxxi). Another example is the Heinz
Nixdorf MuseumsForum in Paderborn, Germany (http://www.hnf.de/en/), which provided an early
experience of an embodied agent, MAX, a conversational agent that directly engages with visitors
through a screen as a virtual museum guide [5]. The Carnegie Museums of Pittsburgh also created a
gamified museum experience with a digital chatbot component, AndyBot!
(https://www.aam-us.org/2018/04/10/chat-with-andybot-at-aam-2018/), a Facebook Messenger bot
assisting participants through on- and off-site experiences in the Carnegie museums. The use of
chatbots is becoming part of a growing suite of AI-enabled components that understand the user’s
intent and provide more personalized content and information [9].
   These works and examples indicate that integrating VR visits with AI-based interaction can be a
good solution to enhance the visitors’ experience. However, to be successful, when designing these
types of applications, we should take into account visitors’ feedback and their acceptance of the
technology, following a human-centered perspective [10].


3. MArTA: guiding visitors in the Ricciardi Room
To test our approach, we re-created one of the rooms of the MArTA museum of Taranto: the Ricciardi
Room, which takes its name from Mons. Giuseppe Ricciardi, who donated his precious collection of
paintings to the museum.
   The 3D virtual environment consists of a reception room, which welcomes visitors and provides
useful information about the experience, and the Ricciardi room. For the 3D reconstruction of the
rooms, the Amazon Sumerian platform was used [11]. First, two empty rooms were created; then we
placed in the first one the furniture typical of a reception (Figure 1a), and in the second one the
paintings of the Ricciardi collection (Figure 1b).
  Amazon Sumerian allows creating 3D scenes made up of entities and components, organized into
projects. A scene is a 3D space that contains objects (e.g. paintings, lights, etc.) and behaviors
(animations, timelines and scripts) that together define the VR environment. Once ready, the scene
can be exported as a static website that can be opened in a browser. The two virtual assistants were
developed as Sumerian hosts. A host is an asset provided by Sumerian with built-in animation, speech,
and behaviors for interacting with users. Hosts use Amazon Polly for text-to-speech. The virtual
assistants in the VR museum were created with two different aims:

    • Sam welcomes people at the reception and provides information about the visit. It also explains
      that it is possible to select a language different from Italian by clicking on the flag on the
      reception desk. In addition, the user may watch a short tutorial video displayed behind Sam
      (Figure 1a).
    • Marta guides the visitor through the paintings, presenting and describing each painting in the
      room; on request, Marta can answer visitors’ questions about a painting of interest (e.g. ”Who
      is the author of this painting?” or ”When was it painted?”) (Figure 1b).

The evolution of the VR environment and of the virtual assistants’ behavior in response to events is
managed through a finite state machine. The information about the paintings is structured and stored
in AWS DynamoDB, so that Marta can describe the artworks and answer questions. The experience
has been designed for an immersive VR visit; however, it is also possible to visit the Ricciardi room
on a PC without a VR headset. In the immersive VR setting, the headset allows looking around by
moving the head to see what is in the room. To move around, it is sufficient to point the motion
controller toward the destination point on the floor. To speak with a virtual assistant, it is necessary
to point and click with the motion controller on its body to open its listening channel; then, using the
microphone of the VR headset, it is possible to talk with the assistant. The assistant, using the
Amazon Lex NLU service, processes the input and provides the answer.
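   As a purely illustrative sketch of this data flow (not the actual MArTA implementation, which runs
as Sumerian scripts), the following Python fragment shows how a Lex fulfillment function could look
up painting information in DynamoDB and return a spoken answer; the table, intent, slot and attribute
names are assumptions, not the names used in the deployed system.

import boto3

# Hypothetical table and attribute names; the real MArTA schema is not published.
paintings = boto3.resource("dynamodb").Table("RicciardiPaintings")

def fulfillment_handler(event, context):
    """Answer a visitor's question about a painting (Lex V2-style fulfillment sketch)."""
    intent = event["sessionState"]["intent"]
    painting_id = intent["slots"]["painting"]["value"]["interpretedValue"]
    item = paintings.get_item(Key={"paintingId": painting_id}).get("Item", {})

    if intent["name"] == "GetAuthor":     # e.g. "Who is the author of this painting?"
        answer = f"The painting is by {item.get('author', 'an unknown artist')}."
    elif intent["name"] == "GetDate":     # e.g. "When was it painted?"
        answer = f"It was painted around {item.get('year', 'an unknown date')}."
    else:                                 # fall back to the description used during the tour
        answer = item.get("description", "I have no information about this painting.")

    # Close the intent and return the plain-text answer, which the host then
    # renders as speech through Amazon Polly.
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent["name"], "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": answer}],
    }

The finite state machine mentioned above would decide when such a question-answering exchange is
possible (i.e. when the assistant’s listening channel is open); the sketch covers only the answering path.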


4. Preliminary UX Evaluation
As mentioned previously, we evaluated the usability and the user experience (UX) of the VR
application. Following a human-centered approach, we performed several formative evaluation phases,
using the think-aloud method, until we reached a version of the system that seemed acceptable to the
testers. Then, we performed a user test on the resulting application and collected both subjective and
objective measures regarding usability and UX.

4.1. Participants
Fifteen users evaluated the system (7 men and 8 women, aged from 23 to 40). They participated
voluntarily, and none of them had previously had an immersive VR experience.
Figure 1: The 3D environment: a) the reception with Sam, b) the Ricciardi room with Marta.


4.2. Materials and Equipment
We selected a set of tasks to be accomplished by the participants. Some of these tasks concerned
the interaction with Sam, the receptionist, and others with Marta, the virtual guide. The tasks are
listed in Table 1.

Table 1
List of tasks for the evaluation.
                   Task   Agent    Description
                   T1     Sam      Ask for information about the visit
                   T2     Sam      Select the Italian language
                   T3     Sam      Ask for information about the Ricciardi Room
                   T4     Sam      Play the video tutorial
                   T5     -        Go to the Ricciardi Room
                   T6     Marta    Start the tour
                   T7     Marta    Get information about the ”Addolorata” painting

   To obtain subjective measures of the perceived usability and UX, the User Experience Questionnaire
(UEQ) was employed. The UEQ allows a quick assessment of the user experience in a very simple and
immediate way [12]. It consists of 26 bipolar items grouped into 6 scales: Attractiveness, Perspicuity,
Efficiency, Dependability, Stimulation and Novelty; it therefore measures both usability- and UX-related
aspects. In addition, to gain deeper insight into the usability and UX of the two virtual assistants, the
CUQ tool was used [13]. This questionnaire measures perceived personality, onboarding, user
experience and error handling, and provides a score from 0 (poor usability and UX) to 100 (very high
usability and UX).
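   As a brief note on how UEQ scale scores such as those reported in Section 4.4 are typically derived
(a sketch of the standard UEQ scoring procedure, not the authors’ analysis script): each item is
answered on a 7-point scale, recoded to the range -3..+3, and each scale score is the mean of its
items, averaged over participants. The item-to-scale mapping below is only indicative.

import statistics

# Indicative item numbers; the official UEQ data-analysis tool defines the exact
# mapping (Attractiveness has 6 items, the other five scales 4 items each) and
# also handles items whose polarity is reversed, which is omitted here.
SCALES = {
    "Attractiveness": [1, 12, 14, 16, 24, 25],
    "Perspicuity": [2, 4, 13, 21],
    # ... remaining scales omitted in this sketch
}

def ueq_scale_scores(responses):
    """responses: one dict per participant, mapping item number to an answer in 1..7."""
    scores = {}
    for scale, items in SCALES.items():
        per_participant = [statistics.mean(r[i] - 4 for i in items)  # recode 1..7 to -3..+3
                           for r in responses]
        scores[scale] = statistics.mean(per_participant)
    return scores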
   The questionnaires were made available through the Google Forms platform. As a quantitative
measure, we collected the task success rate. We considered a task a success when it was executed
correctly without the help of the facilitator, a partial success when it was executed after the help of
the facilitator, and a failure when the participant did not complete the task even with the help of the
facilitator. Among the many devices supported by Amazon Sumerian, we used the Oculus Quest 2 for
the evaluation.
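   As a minimal illustration of this metric (the weighting of partial successes below is an assumption;
the paper does not state how they were counted), the success rate over all task executions can be
tallied as follows:

from collections import Counter

def success_rate(outcomes, partial_weight=0.5):
    """outcomes: one label per (participant, task) execution; partial successes
    get a reduced, assumed weight."""
    counts = Counter(outcomes)
    score = counts["success"] + partial_weight * counts["partial"]
    return 100 * score / len(outcomes)

# Toy data, not the study results:
print(round(success_rate(["success", "success", "success", "partial", "failure"]), 2))  # 70.0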

4.3. Procedure
Before starting the experiment, participants were given an overview of the VR technology, the
experiment, and the type of data collected. After signing an informed consent form, participants were
trained by the test facilitator on how to wear and use the headset and how to use the application. The
headset was sanitized according to the Covid-19 protocol. The experiment was conducted in a room of
a research lab in our Department. Each user was invited to the lab and, after wearing the VR headset,
started the experiment by executing the tasks indicated by the facilitator (Table 1). During the test,
the observer took note of every problem occurring during the interaction. After completing the test,
each participant was invited to fill out an online form containing general demographic questions and
the UEQ and CUQ questionnaires.

4.4. Results
From the analysis of the questionnaire results, participants generally did not have great difficulty
interacting with the VR environment and with the virtual assistants. The chart in Figure 2 shows that
the scores related to UX (attractiveness, stimulation and novelty) are very high, denoting that
participants had a pleasant experience with the system. Surprisingly, the perspicuity score is also high,
meaning that it was easy for participants to understand how to interact with the application and to
execute the assigned tasks. Even if above average, efficiency and dependability are lower than the
other scores, showing that there were some problems related to usability and control of the interaction.
Globally, these results are quite encouraging considering that none of the participants had previous
experience with VR technology.
Figure 2: UEQ results.

   As far as quantitative measures are concerned, the users completed the tasks with an overall
success rate of 90.06%. Only three users, the older ones, asked for the help of the facilitator, 10 times
in total; in all cases the task involved speech input toward the virtual assistants. As far as the CUQ
is concerned, we asked participants to evaluate the interaction with Sam and Marta: Sam received an
average score of 77/100 and Marta a slightly higher one (79/100). These scores can be considered a
good result. Looking at the observer’s notes, the main problems concerned error handling: three times
the speech input was not recognized correctly by the assistants and, twice, Marta did not answer even
though the speech was recognized correctly. However, in the final interview all users stated that they
were satisfied with the experience and appreciated the possibility of interacting vocally with the
assistants. In particular, the majority of the participants appreciated the possibility of asking Marta
directly for information about a painting. They suggested adapting the virtual assistant’s answers to
the time available to the user, by providing short or long descriptions.


5. Conclusions and Future Work
This study is valuable from the applied-research point of view, as we present a preliminary
evaluation of a VR museum experience integrated with AI-based virtual assistants. The proposed
system uses Amazon Sumerian and is flexible enough to be used in multiple scenarios. Even if the
study was performed with only 15 users, its results indicate that the proposed system provides a good
interactive experience. However, some problems related to control and speech-based input were
detected, indicating that a better error-handling strategy needs to be implemented.
   In the near future, we aim to conduct a new user study on several categories of visitors, based on
a quantitative and qualitative methodology that also takes into account aspects related to engagement
and the emotional experience. A more believable behavior of the two virtual assistants, in terms of
eye gaze and turn taking, could be implemented to track the person who is actively involved in the
conversation. As far as the “intelligence” of the virtual assistants is concerned, the museum’s staff
could expand the database with thousands of questions. One particularly useful comment received as
feedback from visitors is to allow them to choose between two forms of answers, short (brief
summaries) and long (more descriptive), or to adapt the answer to the age of the user. All these
developments can be addressed and implemented quickly, as proof that the technologies used are
mature enough to support the museum experience in the metaverse.


Acknowledgments
We thank the staff of the MArTA museum for the permission to reproduce the paintings and for
their descriptions.


References
 [1] H. Ning, H. Wang, Y. Lin, et al., A survey on metaverse: the state-of-the-art, technologies,
     applications, and challenges, arXiv preprint arXiv:2111.09673 (2021).
 [2] Museo Archeologico Nazionale di Taranto, https://museotaranto.beniculturali.it/en, 2022.
 [3] T. Giannini, J. P. Bowen, Museums and digital culture: From reality to digitality in the age
     of covid-19, Heritage 5 (2022) 192–214.
 [4] S. Sylaiou, V. Kasapakis, D. Gavalas, E. Dzardanova, Avatars as storytellers: affective
     narratives in virtual museums, Personal and Ubiquitous Computing 24 (2020) 829–841.
 [5] S. Kopp, L. Gesellensetter, N. C. Krämer, I. Wachsmuth, A conversational agent as museum
     guide – design and evaluation of a real-world application, in: International workshop on
     intelligent virtual agents, Springer, 2005, pp. 329–343.
 [6] S. Robinson, D. Traum, M. Ittycheriah, J. Henderer, What would you ask a conversational
     agent? observations of human-agent dialogues in a museum setting, in: Proceedings of
     the sixth international conference on language resources and evaluation (LREC’08), 2008.
 [7] R. Rosales, M. Castañón-Puga, F. Lara-Rosano, J. M. Flores-Parra, R. Evans, N. Osuna-
     Millan, C. Gaxiola-Pacheco, Modelling the interaction levels in hci using an intelligent
     hybrid system with interactive agents: A case study of an interactive museum exhibition
     module in mexico, Applied Sciences 8 (2018) 446.
 [8] G. Gaia, S. Boiano, A. Borda, Engaging museum visitors with ai: The case of chatbots, in:
     Museums and Digital Culture, Springer, 2019, pp. 309–329.
 [9] E. Fast, B. Chen, J. Mendelsohn, J. Bassen, M. S. Bernstein, Iris: A conversational agent for
     complex tasks, in: Proceedings of the 2018 CHI conference on human factors in computing
     systems, 2018, pp. 1–12.
[10] L. J. Bannon, A human-centred perspective on interaction design, in: Future interaction
     design, Springer, 2005, pp. 31–51.
[11] Amazon Sumerian, https://aws.amazon.com/it/sumerian, 2022.
[12] M. Schrepp, J. Thomaschewski, A. Hinderks, Construction of a benchmark for the user
     experience questionnaire (ueq) (2017).
[13] S. Holmes, A. Moorhead, R. Bond, H. Zheng, V. Coates, M. McTear, Usability testing of
     a healthcare chatbot: Can we use conventional methods to assess conversational user
     interfaces?, in: Proceedings of the 31st European Conference on Cognitive Ergonomics,
     2019, pp. 207–214.