=Paper=
{{Paper
|id=Vol-3622/paper8
|storemode=property
|title=Designing accessible cultural heritage experiences for individuals with hearing impairments
|pdfUrl=https://ceur-ws.org/Vol-3622/paper8.pdf
|volume=Vol-3622
|authors=Evangelia Gkagka,Stella Sylaiou,Dimitrios Koukopoulos,Christos Fidas
|dblpUrl=https://dblp.org/rec/conf/amid/GkagkaSKF23
}}
==Designing accessible cultural heritage experiences for individuals with hearing impairments==
Evangelia Gkagka1, Stella Sylaiou2, Dimitrios Koukopoulos1 and Christos Fidas1
1 University of Patras, 26504 Rio, Greece
2 International Hellenic University, Magnisias, Serres 62124, Greece
Abstract
This paper examines the design considerations and challenges in creating accessible cultural heritage
experiences specifically tailored for individuals with hearing impairments. Cultural heritage sites hold
immense value in terms of historical significance, art, and cultural identity, and ensuring inclusivity for
all visitors, including those with hearing impairments, is crucial. Drawing upon user-centered design
principles, this study explores various aspects that must be addressed to provide meaningful and
inclusive experiences. Key considerations encompass the provision of Mixed-Reality (MR) solutions that
deploy real-time speech-to-text translation combined with mobile applications that provide visual cues
to the communicating peers. Challenges such as communication barriers, technological limitations, and
the need for effective collaboration between cultural heritage institutions, designers, and the hearing-
impaired community are discussed. By addressing these considerations and challenges, this paper aims
to foster awareness and provide insights into developing inclusive cultural heritage experiences that
cater to the needs of individuals with hearing impairments, facilitating their engagement and
appreciation of our shared cultural heritage.
Keywords
Accessibility, cultural heritage, museum guides, hearing impairment, mixed reality
1. Introduction
Hearing impairments are a prevalent condition worldwide. According
to the World Health Organization (WHO), approximately 466 million people globally experience
disabling hearing loss, which accounts for about 6% of the world's population. Moreover, it is
estimated that by 2050, the number of people with hearing impairments could rise to over 900
million due to population growth, aging, and exposure to excessive noise levels. Furthermore,
around one-third of people aged 65 years or older live with disabling hearing loss [1]. This
equates to millions of individuals globally facing hearing and communication challenges. The
impact of hearing impairments on older adults can be profound, affecting their social interactions,
quality of life, and engagement with various aspects of society, including cultural heritage
experiences. It is essential to recognize the specific needs of individuals with hearing
impairments across different age groups and design inclusive solutions that cater to their unique
requirements, ensuring that everyone can fully enjoy and participate in cultural heritage
activities regardless of their hearing abilities [2]. Furthermore, in recent years, mixed and virtual
reality technologies have been used in museums to make the whole experience more fascinating
than traditional guided tours [3]. As a result, a field of research worth pursuing is the use of new
technologies to help people with hearing impairments participate in museum tours, as is already
happening for visually impaired visitors through tactile exploration, audio descriptions, and
mobile gestures [4].
AMID 2023 - Workshop on Accessibility and Multimodal Interaction Design Approaches in Museums for People with
Impairments, September 27, 2023, Athens, Greece
up1066528@upnet.gr (E. Gkagka); sylaiou@ihu.gr (S. Sylaiou); dkoukopoulos@upatras.gr (D. Koukopoulos);
fidas@upatras.gr (C. Fidas)
ORCID: 0009-0004-6805-1805 (E. Gkagka); 0000-0001-5879-5908 (S. Sylaiou); 0000-0001-7019-4224 (D. Koukopoulos);
0000-0001-6111-0244 (C. Fidas)
© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073
Previous research has examined museum visits supported by interpretation of what guides say
into sign language, as well as augmented reality mobile apps that facilitate the experience of
visitors with hearing impairments [5, 6]. In this paper,
we present an MR app with real-time voice-to-text translation technology developed to enhance
the experience of people with hearing impairments in places of cultural interest, such as
museums. In contrast to the previously published article "Use of XR technologies for enhancing
visitors' experience at industrial museums" [7], part of which addresses supporting people with
hearing impairments at industrial museums, this paper places greater emphasis on the design
considerations and challenges that must be addressed to support these visitors in museums and
at cultural heritage sites.
2. Design guidelines to support people with hearing impairments in
museums and cultural heritage sites
2.1. Providing visual cues and alternatives to auditory information
Real-time voice-to-text translation technology holds great potential for improving
communication and accessibility for individuals with hearing impairments. This technology
allows spoken words to be instantly converted into written text, which can be displayed on an
MR device in real-time. One practical application of this technology is in facilitating conversations
between individuals who cannot hear or have difficulties in hearing and those who are hearing.
By using voice-to-text translation, the spoken words of a hearing person can be transcribed into
text and displayed on a screen, enabling the deaf or hard of hearing individual to read and
understand the conversation in real time. This promotes effective communication and inclusivity,
bridging the gap between individuals with different hearing abilities.
A significant challenge arises in scenarios where a group of individuals talks simultaneously
and the system must accurately distinguish the individual speakers. The difficulty stems from
overlapping speech, varying speech patterns, and the different acoustic characteristics of each
speaker. It can be addressed with voice recognition methods: the algorithms must not only
recognize the spoken words but also identify the speaker, so that the correct text is attributed
to each person. This requires advanced speaker identification techniques, such as voiceprint
analysis or speaker diarisation, to accurately differentiate and assign speech to the respective
speakers. Overcoming this challenge improves the accuracy and reliability of voice-to-text
translation in group settings and can produce output such as that shown in Figure 1 below.
Figure 1: Conversation using speaker diarisation
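As a simplified illustration of how diarised output like that in Figure 1 can be produced, the sketch below assigns each transcribed utterance to the enrolled speaker whose stored voiceprint is most similar. This is a minimal Python sketch under strong assumptions: the speaker names, embeddings, and the 0.8 similarity threshold are all hypothetical, and real diarisation systems use high-dimensional neural embeddings rather than three-number vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def attribute_speakers(utterances, enrolled, threshold=0.8):
    """Assign each transcribed utterance to the enrolled speaker whose
    voiceprint is most similar to the utterance's embedding, or to
    'Unknown' when no enrolled voiceprint is similar enough."""
    labelled = []
    for text, embedding in utterances:
        best_name, best_sim = "Unknown", threshold
        for name, voiceprint in enrolled.items():
            sim = cosine(embedding, voiceprint)
            if sim > best_sim:
                best_name, best_sim = name, sim
        labelled.append((best_name, text))
    return labelled

# Hypothetical 3-dimensional "voiceprints" for two enrolled speakers.
enrolled = {"Guide": [0.9, 0.1, 0.2], "Visitor": [0.1, 0.8, 0.5]}
utterances = [
    ("This vase dates to the 5th century BC.", [0.88, 0.15, 0.22]),
    ("How was it discovered?", [0.12, 0.79, 0.48]),
]
for speaker, text in attribute_speakers(utterances, enrolled):
    print(f"{speaker}: {text}")
```

Each final transcript line can then be prefixed with the attributed speaker's name, as in the conversation view of Figure 1.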
Furthermore, noise removal is a crucial challenge in real-time voice-to-text translation,
particularly in noisy surroundings and contexts. Background noise, such as conversations, traffic,
or environmental sounds, can significantly degrade the quality and intelligibility of the captured
speech. Noise reduction techniques, such as spectral subtraction, adaptive filtering, or deep
learning-based algorithms, are employed to suppress or eliminate unwanted noise and enhance
the clarity of the speech signal. By effectively mitigating noise interference, the voice-to-text
translation system can provide more accurate transcriptions and improve the overall user
experience, particularly in challenging acoustic environments.
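As one concrete example of the noise reduction techniques mentioned above, the following Python sketch implements naive spectral subtraction on a single frame: the estimated noise magnitude spectrum is subtracted from the noisy frame's magnitude spectrum (clamped at zero), while the noisy signal's phase is kept. This is a textbook-style illustration, not production DSP: real systems use windowed FFTs over overlapping frames and smoothed noise estimates, and the demo below is idealized in that the noise estimate matches the interference exactly.

```python
import cmath
import math

def dft(frame):
    """Naive discrete Fourier transform (adequate for a 64-sample demo)."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    """Inverse DFT, returning the real part of each time-domain sample."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def spectral_subtraction(noisy_frame, noise_frame):
    """Subtract the noise magnitude spectrum from the noisy magnitude
    spectrum (half-wave rectified), keeping the noisy signal's phase."""
    noisy_spec = dft(noisy_frame)
    noise_mag = [abs(x) for x in dft(noise_frame)]
    cleaned_spec = [cmath.rect(max(abs(x) - m, 0.0), cmath.phase(x))
                    for x, m in zip(noisy_spec, noise_mag)]
    return idft(cleaned_spec)

# Idealized demo: a speech-like tone plus a known interfering tone.
speech = [math.sin(2 * math.pi * 3 * t / 64) for t in range(64)]
noise = [0.1 * math.sin(2 * math.pi * 7 * t / 64) for t in range(64)]
cleaned = spectral_subtraction([s + n for s, n in zip(speech, noise)], noise)
```

Because the tones occupy disjoint frequency bins here, the cleaned frame recovers the speech component almost exactly; with real noise the subtraction only attenuates, which is why deep learning-based enhancers are often layered on top.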
Supporting multiple spoken languages presents another significant challenge in real-time
voice-to-text translation. Language diversity adds complexity as different languages have unique
phonetic characteristics, vocabularies, and grammatical structures. Developing language models
and training data for multiple languages requires extensive resources and expertise. Additionally,
accurately recognizing and translating diverse accents and dialects within a given language
further complicates the challenge. Language-specific speech recognition models and language
resources must be developed and integrated into the voice-to-text translation system to ensure
accurate and reliable translations across various languages. Overcoming this challenge involves
continuous research, data collection, and development efforts to expand language support and
improve the accuracy of language-specific models.
Addressing these challenges in voice recognition, noise removal, and language support is
crucial for successfully deploying and adopting real-time voice-to-text translation systems.
Advancements in machine learning, signal processing, and natural language processing
techniques are continually improving the performance and capabilities of these systems, making
them more robust and effective in diverse real-world scenarios.
2.2. Fostering inclusive communication: design requirements for supporting
common understanding and discussion on content comprehension
In the context of inclusive communication, it is crucial to consider the needs of individuals with
hearing impairments and those without. Creating an environment that supports common
understanding and discussion on the comprehension of spoken dialogue can significantly
enhance communication and foster inclusivity. Real-time awareness of what impaired users read
is a crucial aspect of assistive technologies and accessibility solutions. Providing real-time
feedback and insights into the content being read by impaired users enables better support and
personalized assistance to enhance their reading experience. This awareness can be achieved
through various means, such as eye-tracking technology, screen readers, or text-to-speech
systems.
One approach to real-time awareness is the use of eye-tracking technology. By tracking the
movement and focus of the user's eyes, it becomes possible to determine which parts of the text
they are actively reading. This information can provide real-time feedback to the user or adapt
the reading experience accordingly. For example, if an impaired user struggles to read a particular
section, the system can provide additional assistance or offer alternative presentation formats to
improve comprehension.
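The eye-tracking idea above can be sketched in a few lines: given a stream of timestamped gaze samples mapped to regions of exhibit text, accumulate dwell time per region and flag those where the reader lingers unusually long. The sample format, region ids, and the 5-second threshold below are illustrative assumptions, not taken from any actual eye tracker's API.

```python
def detect_struggle(gaze_samples, dwell_threshold=5.0):
    """Return the ids of text regions the reader dwelt on for longer than
    dwell_threshold seconds, so a reading aid can offer extra assistance
    or an alternative presentation for those passages.

    gaze_samples: (timestamp_in_seconds, region_id) pairs ordered by time,
    as produced by mapping gaze fixations to on-screen text regions.
    """
    dwell = {}
    for (t0, r0), (t1, r1) in zip(gaze_samples, gaze_samples[1:]):
        if r0 == r1:  # gaze stayed within the same text region
            dwell[r0] = dwell.get(r0, 0.0) + (t1 - t0)
    return [r for r, d in dwell.items() if d > dwell_threshold]

gaze = [(0.0, "para-1"), (1.0, "para-1"), (2.0, "para-2"), (5.0, "para-2"),
        (9.0, "para-2"), (9.5, "para-3"), (10.0, "para-3")]
print(detect_struggle(gaze))  # → ['para-2']
```

A flagged region could then trigger the kind of additional assistance or alternative presentation format described above.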
Screen readers and text-to-speech systems can also provide real-time awareness of the
content being read by impaired users. These technologies convert written text into audible
speech, allowing users to listen to the content instead of reading it visually. Following the text as
it is being read aloud gives impaired users a real-time understanding of the information and its
context.
Real-time awareness of what impaired users read has significant benefits. It allows immediate
intervention or assistance when difficulties arise, ensuring a smoother reading experience. It also
enables personalized adjustments and adaptations based on the user's needs, preferences, and
comprehension levels. These technologies provide real-time feedback and support, enabling
impaired users to access and engage with textual information more effectively.
Another critical aspect of reproducing natural speech conditions when converting speech to
text is the pauses a speaker makes between sentences. Preserving pauses is essential to
facilitate communication, because a continuous, unbroken stream of plain text is much harder
for the receiver to follow. Thus, a pause can be rendered by creating a new text placeholder
after 2-3 seconds of quiet.
Figure 2: Example with pausing function
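This pausing rule can be made concrete with a small sketch: given recognized words with start and end timestamps (timings that most speech recognition APIs can provide), a new text block is opened whenever the silence between consecutive words reaches the threshold. The word-timing format and the 2.5-second default below are assumptions for illustration.

```python
def split_into_blocks(timed_words, pause_seconds=2.5):
    """Group recognized words into separate text blocks, opening a new block
    whenever the silence between consecutive words reaches pause_seconds --
    i.e. a new text placeholder after 2-3 seconds of quiet.

    timed_words: (word, start_seconds, end_seconds) tuples ordered by time.
    """
    blocks, current, last_end = [], [], None
    for word, start, end in timed_words:
        if last_end is not None and start - last_end >= pause_seconds:
            blocks.append(" ".join(current))  # silence long enough: close block
            current = []
        current.append(word)
        last_end = end
    if current:
        blocks.append(" ".join(current))
    return blocks

words = [("This", 0.0, 0.3), ("vase", 0.4, 0.7), ("is", 0.8, 0.9),
         ("ancient", 1.0, 1.5),
         ("It", 4.5, 4.6), ("was", 4.7, 4.9), ("found", 5.0, 5.4),
         ("nearby", 5.5, 6.0)]
print(split_into_blocks(words))  # → ['This vase is ancient', 'It was found nearby']
```

Each returned block maps to one text placeholder on the headset display, as in the pausing example of Figure 2.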
3. Prototype implementation and first evaluation results
3.1. Speech-to-text mixed reality application to support the needs of hearing-
impaired individuals
After establishing the aforementioned design considerations, the app’s first version was
developed. The purpose of this version was to demonstrate that full development of the
application is achievable. The core flow is a complete implementation of the speech-to-text
functionality in an application running on the interface of an MR headset. This way, a person
with hearing impairment can participate in group conversations like anyone else, without
requiring special accommodations or feeling excluded. The design of the speech-to-text
application is minimalistic because its goal is not to gamify the experience but to act as a
background process for people with hearing impairments. Thus, it should not interfere with the
visitor’s museum experience but make the whole experience smoother.
Open-source platforms and programming languages were used to develop the app. More
specifically, the app was developed in Unity, a free game engine leading in the creation of real-
time 3D games, apps, and experiences for entertainment, film, automotive, architecture, and
more. Visual Studio was used for scripting in C# and deploying the application to the Microsoft
HoloLens. In addition, MRTK (Microsoft’s Mixed Reality Toolkit), a toolkit that accelerates
cross-platform MR development, was used to implement the mixed reality features of the app.
Real-time speech-to-text conversion requires an API (Application Programming Interface)
whose input is sound (in this case, voice) and whose output is text. The one selected is Azure
Speech Services, provided by Microsoft as part of Azure Cognitive Services. Once developed,
the app can run on any platform (Android, iOS, Windows, etc.) with minor adjustments thanks
to MRTK, with a preference for a mixed reality headset, such as the Microsoft HoloLens, that
runs on Windows Holographic OS.
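The application itself is written in C# against the Azure Speech Services SDK, but its control flow can be illustrated with a language-neutral mock in Python: a continuous recognizer fires events for interim hypotheses and for each final utterance, and the MR panel subscribes to the final-result event. The class, method, and event names below are illustrative stand-ins, not the actual Azure SDK API.

```python
class MockSpeechRecognizer:
    """Stand-in for a continuous speech recognizer: clients register
    callbacks, and the recognizer fires 'recognizing' events for interim
    hypotheses and a 'recognized' event for each final utterance.
    All names here are illustrative, not a real SDK's API."""

    def __init__(self):
        self.on_recognizing = None   # interim (partial) hypotheses
        self.on_recognized = None    # final utterance text

    def feed(self, partials, final):
        """Simulate one utterance: a stream of partial hypotheses, then a final."""
        for hypothesis in partials:
            if self.on_recognizing:
                self.on_recognizing(hypothesis)
        if self.on_recognized:
            self.on_recognized(final)

panel_lines = []
recognizer = MockSpeechRecognizer()
recognizer.on_recognized = panel_lines.append  # the main panel keeps final text only

recognizer.feed(["This", "This vase", "This vase dates"],
                "This vase dates to the 5th century BC.")
print(panel_lines)  # → ['This vase dates to the 5th century BC.']
```

Subscribing the main panel only to final results keeps the display stable; interim hypotheses could instead drive a lighter "live" line that is overwritten as recognition refines.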
As seen in the image below (Figure 3), the application consists of a main panel and a control
panel. The speech-to-text process takes place in the main panel. In the control panel, the user
performs actions such as starting and stopping the microphone (which starts or stops the
speech-to-text conversion) and changing the initial blue background. Regarding the setup, an
MR headset is ideal for people with hearing loss: they can still perform lip-reading while seeing
a transcript of whatever they do not manage to understand. This is the proposed way to
counteract the exclusion of people with hearing loss from museums and cultural heritage sites,
without them having to attend expensive dedicated tours for people with hearing loss.
3.2. Early-stage evaluation
Figure 3: The application in use
After developing the application, a pilot evaluation study was conducted in the lab with eight
(8) participants from different educational backgrounds. All participants used earbuds to
simulate hearing loss, and they used the Microsoft Hololens 2 Mixed Reality Headset to ‘translate’
museum exhibition information. After using the application, they were given a questionnaire
concerning the usefulness of such an application and a SUS (System Usability Scale) questionnaire
to evaluate the usability of the app:
1. Participants declared that AR could significantly help people with hearing loss.
2. 87.5% of them stated that at least once they had had difficulty communicating with at least
one person due to hearing impairments, and that they would use AR if they faced hearing
problems in the future. Answers to the SUS questionnaire yielded a score of 85, an
outstanding result considering that the usual benchmark for acceptable usability is 68. Thus,
the survey showed that all participants would use this application frequently whenever
available, that it meets its original design considerations, and that it is easy to use.
3. 87.5% of the participants considered that there was no considerable delay in converting
speech to text, liked the minimalistic design of the app, and would recommend that someone
with hearing loss visit places where such applications are available. In addition, 62.5% of the
participants stated that they got used to the app within 1 minute, 25% within 5 minutes, and
12.5% within 10 minutes.
This application is therefore envisaged as a valuable provision for visitors with hearing loss,
since it will enable them to follow the narrative of the main audio tour while moving from one
exhibit to another (exhibits also described as audio tour stations).
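For reference, the reported SUS score of 85 against the 68 benchmark follows the standard scoring rule: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to reach a 0-100 scale. The sketch below implements this rule; the sample responses are hypothetical, not the study's raw data.

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert
    responses. Odd-numbered items (1-indexed) score (response - 1);
    even-numbered items score (5 - response); the sum is scaled by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 item responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical response sheet from one participant.
print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # → 87.5
```

Averaging `sus_score` over all eight participants' sheets would yield the aggregate score reported above.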
4. Conclusions
There are compelling arguments for recognizing the significant presence and importance of
individuals over the age of 60 as a substantial visitor category for museums and cultural heritage
sites. Firstly, the aging population is steadily increasing worldwide, with a significant portion of
the population falling within this age group, primarily due to continuous progress in medical care.
This demographic represents a diverse group of individuals with a wealth of knowledge, life
experiences, and a desire to engage with cultural heritage. Furthermore, it is not only the
elderly who can benefit from assistance in visiting museums and cultural heritage sites: people
with hearing loss due to other factors, such as genetics, infections, or ear trauma, can now
actively participate in social events instead of being excluded.
By tailoring experiences to meet the needs and interests of this demographic, museums can
create inclusive environments that engage and inspire visitors of all ages. To address the
inclusiveness and accessibility of museum and cultural heritage-site tours for individuals with
hearing impairments, this paper proposes a set of design guidelines. These guidelines aim to
enhance the overall experience and ensure that people with hearing impairments can fully engage
with and appreciate the cultural heritage being presented.
Acknowledgements
This research has been co-financed by the European Regional Development Fund of the European
Union and Greek national funds through the operational program Competitiveness,
Entrepreneurship and Innovation, under the call RESEARCH–CREATE–INNOVATE (project code:
T2EDK01392).
References
[1] WHO (World Health Organisation), Deafness and hearing loss, (2023).
https://www.who.int/health-topics/hearing-loss#tab=tab_1
[2] P. Kosmas, G. Galanakis, V. Constantinou, G. Drossis, M. Christofi, I. Klironomos, P. Zaphiris,
M. Antona, C. Stephanidis. (2020). Enhancing accessibility in cultural heritage environments:
considerations for social computing. In Universal Access in the Information Society (Vol. 19,
Issue 2, pp. 471–482). Springer Science and Business Media LLC. DOI: 10.1007/s10209-019-
00651-4.
[3] H. Lee, T. H. Jung, M. C. tom Dieck, and N. Chung. (2020). Experiencing immersive virtual
reality in museums. In Information and Management (Vol. 57, Issue 5, p. 103229). Elsevier
BV. DOI: 10.1016/j.im.2019.103229.
[4] G. Anagnostakis, M. Antoniou, E. Kardamitsi, T. Sachinidis, P. Koutsabasis, M. Stavrakis, S.
Vosinakis, and D. Zissis. (2016). Accessible Museum collections for the visually impaired, in:
Proceedings of the 18th International Conference on Human-Computer Interaction with
Mobile Devices and Services Adjunct. MobileHCI ’16: 18th International Conference on
Human-Computer Interaction with Mobile Devices and Services. ACM. DOI:
10.1145/2957265.2963118.
[5] E. J. Baker, J. A. Abu Bakar, A. Nasir Zulkifli. (2022), Evaluation of Mobile Augmented Reality
Hearing-Impaired Museum Visitors Engagement Instrument. In International Journal of
Interactive Mobile Technologies (iJIM) (Vol. 16, Issue 12, pp. 114–126). International
Association of Online Engineering (IAOE). DOI: 10.3991/ijim.v16i12.30513.
[6] D.I. Kosmopoulos, C. Constantinopoulos, M. Trigka, D. Papazachariou, K. Antzakas, V.
Lampropoulou, A. Argyros, I. Oikonomidis, A. Roussos, N. Partarakis, G. Papagiannakis, K.
Grigoriadis, A. Koukouvou, A. Moneda. 2022. Museum Guidance in Sign Language: The
SignGuide project, in: Proceedings of the 15th International Conference on Pervasive
Technologies Related to Assistive Environments (pp. 646-652). DOI:
10.1145/3529190.3534718
[7] Sylaiou, S., Gkagka, E., Fidas, C., Vlachou, E., Lampropoulos, G., Plytas, A., Nomikou, V. (2023).
Use of XR technologies for enhancing visitors' experience at industrial museums, in:
Proceedings of the 1st Workshop on Accessibility and Multimodal Interaction Design
Approaches in Museums for People with Impairments, CEUR-WS.org, 2nd International
Conference of the ACM Greek SIGCHI Chapter, Athens, Greece. DOI:
10.1145/3609987.3610008