<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Designing accessible cultural heritage experiences for individuals with hearing impairments</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Evangelia</forename><surname>Gkagka</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Patras</orgName>
								<address>
									<postCode>26504</postCode>
									<settlement>Rio</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stella</forename><surname>Sylaiou</surname></persName>
							<email>sylaiou@ihu.gr</email>
							<affiliation key="aff1">
								<orgName type="institution">International Hellenic University</orgName>
								<address>
									<addrLine>Magnisias</addrLine>
									<postCode>62124</postCode>
									<settlement>Serres</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dimitrios</forename><surname>Koukopoulos</surname></persName>
							<email>dkoukopoulos@upatras.gr</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Patras</orgName>
								<address>
									<postCode>26504</postCode>
									<settlement>Rio</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Christos</forename><surname>Fidas</surname></persName>
							<email>fidas@upatras.gr</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Patras</orgName>
								<address>
									<postCode>26504</postCode>
									<settlement>Rio</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Designing accessible cultural heritage experiences for individuals with hearing impairments</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">4A4350993002774E0398A080741122EE</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:18+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Accessibility</term>
					<term>cultural heritage</term>
					<term>museum guides</term>
					<term>hearing impairment</term>
					<term>mixed reality</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper examines the design considerations and challenges in creating accessible cultural heritage experiences specifically tailored for individuals with hearing impairments. Cultural heritage sites hold immense value in terms of historical significance, art, and cultural identity, and ensuring inclusivity for all visitors, including those with hearing impairments, is crucial. Drawing upon user-centered design principles, this study explores various aspects that must be addressed to provide meaningful and inclusive experiences. Key considerations encompass the provision of Mixed-Reality (MR) solutions that deploy real-time speech-to-text translation combined with mobile applications that provide visual cues to the communicating peers. Challenges such as communication barriers, technological limitations, and the need for effective collaboration between cultural heritage institutions, designers, and the hearing-impaired community are discussed. By addressing these considerations and challenges, this paper aims to foster awareness and provide insights into developing inclusive cultural heritage experiences that cater to the needs of individuals with hearing impairments, facilitating their engagement and appreciation of our shared cultural heritage.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Hearing impairments are a prevalent condition worldwide. According to the World Health Organization (WHO), approximately 466 million people globally experience disabling hearing loss, which accounts for about 6% of the world's population. Moreover, it is estimated that by 2050, the number of people with hearing impairments could rise to over 900 million due to population growth, aging, and exposure to excessive noise levels. Furthermore, around one-third of people aged 65 years or older live with disabling hearing loss <ref type="bibr" target="#b0">[1]</ref>. This equates to millions of individuals globally facing hearing and communication challenges. The impact of hearing impairments on older adults can be profound, affecting their social interactions, quality of life, and engagement with various aspects of society, including cultural heritage experiences. It is essential to recognize the specific needs of individuals with hearing impairments across different age groups and design inclusive solutions that cater to their unique requirements, ensuring that everyone can fully enjoy and participate in cultural heritage activities regardless of their hearing abilities <ref type="bibr" target="#b1">[2]</ref>. Furthermore, in recent years, mixed and virtual reality technologies have been used in museums to make the whole experience more fascinating than traditional guided tours <ref type="bibr" target="#b2">[3]</ref>. As a result, a field of research worth considering is using new technologies to help people with hearing impairments participate in museum tours, as is already happening for the visually impaired with tactile exploration, audio descriptions, and mobile gestures <ref type="bibr" target="#b3">[4]</ref>. 
Previous research has explored museum visits supported by sign-language interpretation of what guides say, as well as augmented reality mobile apps that facilitate the experience of visitors with hearing impairments <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. In this paper, we present an MR app with real-time voice-to-text translation technology, developed to enhance the experience of people with hearing impairments in places of cultural interest, such as museums. In contrast to the previously published article "Use of XR technologies for enhancing visitors' experience at industrial museums," part of which concerns supporting people with hearing impairments at industrial museums, this paper emphasizes the design considerations and challenges that must be addressed to support these visitors at museums and cultural heritage sites <ref type="bibr" target="#b6">[7]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Design guidelines to support people with hearing impairments in museums and cultural heritage sites 2.1. Providing visual cues and alternatives to auditory information</head><p>Real-time voice-to-text translation technology holds great potential for improving communication and accessibility for individuals with hearing impairments. This technology allows spoken words to be instantly converted into written text, which can be displayed on an MR device in real time. One practical application of this technology is facilitating conversations between individuals who cannot hear, or who have difficulty hearing, and those who can. By using voice-to-text translation, the spoken words of a hearing person can be transcribed into text and displayed on a screen, enabling the deaf or hard-of-hearing individual to read and understand the conversation in real time. This promotes effective communication and inclusivity, bridging the gap between individuals with different hearing abilities.</p><p>A significant challenge arises in scenarios where several individuals talk simultaneously and the system must accurately distinguish the individual speakers. This difficulty stems from overlapping speech, varying speech patterns, and the different acoustic characteristics of each speaker. It can be addressed with voice recognition methods: the algorithms must not only recognize the spoken words but also identify the speaker, so that each utterance is attributed to the correct person. This requires advanced speaker identification techniques, such as voiceprint analysis or speaker diarisation, to accurately differentiate and assign speech to the respective speakers. Overcoming this challenge enhances the accuracy and reliability of voice-to-text translation in group settings and can produce output such as that shown in Figure <ref type="figure" target="#fig_0">1</ref>. 
Supporting multiple spoken languages presents another significant challenge in real-time voice-to-text translation. Language diversity adds complexity as different languages have unique phonetic characteristics, vocabularies, and grammatical structures. Developing language models and training data for multiple languages requires extensive resources and expertise. Additionally, accurately recognizing and translating diverse accents and dialects within a given language further complicates the challenge. Language-specific speech recognition models and language resources must be developed and integrated into the voice-to-text translation system to ensure accurate and reliable translations across various languages. Overcoming this challenge involves continuous research, data collection, and development efforts to expand language support and improve the accuracy of language-specific models.</p><p>Addressing these challenges in voice recognition, noise removal, and language support is crucial for successfully deploying and adopting real-time voice-to-text translation systems. Advancements in machine learning, signal processing, and natural language processing techniques are continually improving the performance and capabilities of these systems, making them more robust and effective in diverse real-world scenarios.</p></div>
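The attribution step described above can be illustrated with a small sketch: given diarisation segments (speaker label and time span) and time-stamped recognized words, each word is assigned to the speaker whose segment overlaps it most. This is an illustrative Python toy, not the app's actual implementation; the function names and sample data are hypothetical, and a real system would obtain both inputs from a speech service's diarisation output.

```python
# Illustrative sketch (hypothetical names and data): attribute
# time-stamped recognized words to speakers via diarisation segments.

def overlap(a_start, a_end, b_start, b_end):
    """Length in seconds of the overlap between two time intervals."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def attribute_words(words, segments):
    """Assign each (text, start, end) word to the speaker whose
    (label, start, end) segment overlaps it most; 'unknown' if none."""
    labeled = []
    for text, w_start, w_end in words:
        best_speaker, best_overlap = "unknown", 0.0
        for speaker, s_start, s_end in segments:
            o = overlap(w_start, w_end, s_start, s_end)
            if o > best_overlap:
                best_speaker, best_overlap = speaker, o
        labeled.append((best_speaker, text))
    return labeled

# Two speakers taking turns: a guide, then a visitor interjecting.
segments = [("Guide", 0.0, 2.0), ("Visitor", 2.0, 3.5)]
words = [("This", 0.1, 0.4), ("statue", 0.5, 1.0), ("is", 1.1, 1.3),
         ("ancient.", 1.4, 1.9), ("How", 2.1, 2.4), ("old?", 2.5, 3.0)]
print(attribute_words(words, segments))
```

Grouping consecutive words that share a label then yields per-speaker captions of the kind shown in Figure 1.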
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Fostering inclusive communication: design requirements for supporting common understanding and discussion on content comprehension</head><p>In the context of inclusive communication, it is crucial to consider the needs of individuals with hearing impairments and those without. Creating an environment that supports common understanding and discussion on the comprehension of spoken dialogue can significantly enhance communication and foster inclusivity. Real-time awareness of what impaired users read is a crucial aspect of assistive technologies and accessibility solutions. Providing real-time feedback and insights into the content being read by impaired users enables better support and personalized assistance to enhance their reading experience. This awareness can be achieved through various means, such as eye-tracking technology, screen readers, or text-to-speech systems.</p><p>One approach to real-time awareness is the use of eye-tracking technology. By tracking the movement and focus of the user's eyes, it becomes possible to determine which parts of the text they are actively reading. This information can provide real-time feedback to the user or adapt the reading experience accordingly. For example, if an impaired user struggles to read a particular section, the system can provide additional assistance or offer alternative presentation formats to improve comprehension.</p><p>Screen readers and text-to-speech systems can also provide real-time awareness of the content being read by impaired users. These technologies convert written text into audible speech, allowing users to listen to the content instead of reading it visually. Following the text as it is being read aloud gives impaired users a real-time understanding of the information and its context.</p><p>Real-time awareness of what impaired users read has significant benefits. 
It allows immediate intervention or assistance when difficulties arise, ensuring a smoother reading experience. It also enables personalized adjustments and adaptations based on the user's needs, preferences, and comprehension levels. These technologies provide real-time feedback and support, enabling impaired users to access and engage with textual information more effectively.</p><p>Another critical aspect of reflecting natural speech conditions when converting speech to text is the pauses a speaker makes between sentences. Preserving these pauses is essential for comprehension, because a continuous block of plain text is much harder for the reader to follow than speech delivered with natural breaks. In the transcript, a pause can therefore be rendered by starting a new text placeholder after 2-3 seconds of silence.</p></div>
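The pause rule can be sketched as a small segmentation routine: time-stamped phrases are grouped into one placeholder until the silence gap exceeds a threshold, at which point a new placeholder begins. This is an illustrative Python sketch, not the app's code; the names, the 2.5-second threshold, and the sample data are all hypothetical.

```python
# Illustrative sketch (hypothetical names and data): group time-stamped
# phrases into separate text placeholders whenever the silence between
# consecutive phrases exceeds a threshold, mirroring the 2-3 second
# pause rule described above.

PAUSE_THRESHOLD = 2.5  # assumed value within the 2-3 second range

def split_on_pauses(phrases, threshold=PAUSE_THRESHOLD):
    """phrases: ordered list of (text, start_time, end_time) tuples.
    Returns one joined string per pause-delimited block."""
    blocks, current, last_end = [], [], None
    for text, start, end in phrases:
        # Start a new placeholder when the quiet gap is long enough.
        if last_end is not None and (start - last_end) > threshold:
            blocks.append(" ".join(current))
            current = []
        current.append(text)
        last_end = end
    if current:
        blocks.append(" ".join(current))
    return blocks

stream = [("This amphora dates", 0.0, 1.8),
          ("to the 5th century BC.", 2.0, 3.6),
          ("Notice the painted scenes.", 7.0, 9.1)]  # 3.4 s of quiet before
print(split_on_pauses(stream))
# -> ['This amphora dates to the 5th century BC.', 'Notice the painted scenes.']
```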
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Prototype implementation and first evaluation results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Speech-to-text mixed reality application to support the needs of hearing-impaired individuals</head><p>After setting the aforementioned design considerations, the app's first version was developed. The purpose of this version was to prove that the full development of the application is achievable. The principal core flow is to fully implement the speech-to-text functionality in an application that runs in the interface of an MR headset. This way, a person with hearing impairment can participate in group conversations like anyone else, without requiring special care or feeling excluded. The design of the speech-to-text application is minimalistic because its goal is not to gamify the experience but to perform as a background process for people with hearing impairments. Thus, it should not interfere with the visitor's museum experience but make the whole experience smoother.</p><p>Open-source platforms and programming languages were used to develop the app. More specifically, the app was developed in Unity, a leading free game engine for creating real-time 3D games, apps, and experiences for entertainment, film, automotive, architecture, and more. Visual Studio was used for scripting in C# and deploying the application to the Microsoft HoloLens. In addition, MRTK (Microsoft's Mixed Reality Toolkit), a toolkit that accelerates cross-platform MR development, was used to implement mixed reality features in the app. Real-time speech-to-text conversion requires an API (Application Programming Interface) whose input is sound (in this case, voice) and whose output is text. The selected one is Azure Speech Services, provided by Microsoft as part of Azure Cognitive Services. Once developed, the app can run on any platform (Android, iOS, Windows, etc.) 
with minor adjustments thanks to MRTK, preferably on a mixed reality headset such as the Microsoft HoloLens, which runs on Windows Holographic OS.</p><p>As seen in the image below (fig. <ref type="figure">4</ref>), the application consists of a main panel and a control panel. The speech-to-text process takes place in the main panel. In the control panel, the user performs actions such as starting and stopping the microphone (which drives the speech-to-text conversion) and changing the initial blue background. Regarding the setup, an MR headset is ideal for people with hearing loss: they can still lip-read while seeing a transcript of whatever they fail to catch. This approach counters the exclusion of people with hearing loss from museums and cultural heritage sites without requiring them to attend overpriced, dedicated tours. After developing the application, a pilot evaluation study was conducted in the lab with eight (8) participants from different educational backgrounds. All participants wore earbuds to simulate hearing loss and used the Microsoft HoloLens 2 mixed reality headset to 'translate' museum exhibition information. Afterwards, they were given a questionnaire concerning the usefulness of such an application and a SUS (System Usability Scale) questionnaire to evaluate the usability of the app.</p></div>
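The core flow just described (a control panel toggling the microphone, a main panel accumulating recognized text) can be modelled as a minimal state machine. The sketch below is plain Python standing in for the Unity/MRTK and Azure Speech implementation; the class and method names are hypothetical, and on_recognized stands in for the callback a speech service would invoke for each finalized phrase.

```python
# Minimal state-model sketch of the app's core flow, in plain Python
# rather than the actual Unity/MRTK + Azure Speech implementation.
# All names here are hypothetical.

class SpeechToTextPanel:
    def __init__(self):
        self.listening = False
        self.lines = []           # transcript shown on the main panel
        self.background = "blue"  # initial background colour

    # Control-panel actions
    def start_microphone(self):
        self.listening = True

    def stop_microphone(self):
        self.listening = False

    def set_background(self, colour):
        self.background = colour

    # Speech-service callback: append text only while listening
    def on_recognized(self, text):
        if self.listening:
            self.lines.append(text)

panel = SpeechToTextPanel()
panel.start_microphone()
panel.on_recognized("Welcome to the exhibition.")
panel.stop_microphone()
panel.on_recognized("Ignored while the microphone is off.")
print(panel.lines)
# -> ['Welcome to the exhibition.']
```

Keeping the transcript logic separate from the rendering layer like this is what lets the same flow target Android, iOS, or a headset with only minor adjustments.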
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Early-stage evaluation</head><p>1. People who participated in the study declared that AR could significantly help people with hearing loss. 2. 87.5% of them stated that at least once they had difficulty communicating with at least one person due to hearing impairments, and that they would use AR if they faced hearing problems in the future. Answers to the SUS (System Usability Scale) questionnaire yielded a score of 85, an outstanding result considering that the benchmark is 68. Thus, the survey showed that all participants would use this application frequently whenever available, that it meets its original design considerations, and that it is easy to use. 3. 87.5% of the participants considered that there was no considerable delay in converting speech to text, that they liked the minimalistic design of the app, and that they would recommend that someone with hearing loss visit places where such applications are available. In addition, 62.5% of the participants stated that they got used to the app in 1 minute, 25% in 5 minutes, and 12.5% in 10 minutes. Therefore, this application is envisaged as a valuable provision for visitors with hearing loss, since it will enable them to follow the narrative of the main audio tour while moving from one exhibit to another (also described as audio tour stations).</p></div>
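The SUS score reported above is computed with the standard System Usability Scale formula: ten items rated 1-5, where odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch, with hypothetical ratings rather than the study's actual responses:

```python
# Standard SUS scoring: ten items rated 1-5; odd items contribute
# (rating - 1), even items contribute (5 - rating); the sum is
# multiplied by 2.5 to give a 0-100 score. The sample ratings below
# are hypothetical, not the study's data.

def sus_score(ratings):
    """ratings: list of ten responses (1-5), item 1 first."""
    assert len(ratings) == 10
    total = 0
    for i, rating in enumerate(ratings, start=1):
        if i % 2 == 1:           # odd items: agreement is positive
            total += rating - 1
        else:                    # even items: agreement is negative
            total += 5 - rating
    return total * 2.5

# A hypothetical strongly positive participant:
print(sus_score([5, 1, 5, 2, 4, 1, 5, 2, 4, 1]))
# -> 90.0
```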
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions</head><p>There are compelling arguments for recognizing the significant presence and importance of individuals over the age of 60 as a substantial visitor category for museums and cultural heritage sites. Firstly, the aging population is steadily increasing worldwide, with a significant portion of the population falling within this age group, primarily due to continuous progress in medical care. This demographic represents a diverse group of individuals with a wealth of knowledge, life experiences, and a desire to engage with cultural heritage. Furthermore, the elderly are not the only ones who can benefit from assistance in visiting museums and cultural heritage sites. People who have hearing loss due to other factors, such as genetics, infections, ear trauma, etc., can now actively participate in social events instead of being excluded.</p><p>By tailoring experiences to meet the needs and interests of this demographic, museums can create inclusive environments that engage and inspire visitors of all ages. To address the inclusiveness and accessibility of museum and cultural heritage-site tours for individuals with hearing impairments, this paper proposes a set of design guidelines. These guidelines aim to enhance the overall experience and ensure that people with hearing impairments can fully engage with and appreciate the cultural heritage being presented.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Conversation using speaker diarisation</figDesc><graphic coords="2,183.40,541.77,229.30,134.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Example with pausing function</figDesc><graphic coords="4,183.38,72.00,228.25,181.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: The application in use</figDesc><graphic coords="5,150.85,97.07,299.20,213.45" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the operational program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH-CREATE-INNOVATE (project code: T2EDK01392).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<ptr target="https://www.who.int/health-topics/hearing-loss#tab=tab_1" />
		<title level="m">Deafness and hearing loss</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
		<respStmt>
			<orgName>WHO</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Enhancing accessibility in cultural heritage environments: considerations for social computing</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kosmas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Galanakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Constantinou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Drossis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Christofi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Klironomos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zaphiris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Antona</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stephanidis</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10209-019-00651-4</idno>
	</analytic>
	<monogr>
		<title level="m">Universal Access in the Information Society</title>
				<imprint>
			<publisher>Springer Science and Business Media LLC</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="471" to="482" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Experiencing immersive virtual reality in museums</title>
		<author>
			<persName><forename type="first">H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">H</forename><surname>Jung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Tom Dieck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Chung</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.im.2019.103229</idno>
	</analytic>
	<monogr>
		<title level="j">Information and Management</title>
		<imprint>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">103229</biblScope>
			<date type="published" when="2020">2020</date>
			<publisher>Elsevier BV</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Accessible Museum collections for the visually impaired</title>
		<author>
			<persName><forename type="first">G</forename><surname>Anagnostakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Antoniou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kardamitsi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sachinidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Koutsabasis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stavrakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Vosinakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zissis</surname></persName>
		</author>
		<idno type="DOI">10.1145/2957265.2963118</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. MobileHCI &apos;16: 18th International Conference on Human-Computer Interaction with Mobile Devices and Services</title>
				<meeting>the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. MobileHCI &apos;16: 18th International Conference on Human-Computer Interaction with Mobile Devices and Services</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Evaluation of Mobile Augmented Reality Hearing-Impaired Museum Visitors Engagement Instrument</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">J</forename><surname>Baker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Abu Bakar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nasir Zulkifli</surname></persName>
		</author>
		<idno type="DOI">10.3991/ijim.v16i12.30513</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Interactive Mobile Technologies (iJIM)</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="114" to="126" />
			<date type="published" when="2022">2022</date>
			<publisher>International Association of Online Engineering</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Museum Guidance in Sign Language: The SignGuide project</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">I</forename><surname>Kosmopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Constantinopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Trigka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Papazachariou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Antzakas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lampropoulou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Argyros</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Oikonomidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roussos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Partarakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Papagiannakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Grigoriadis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Koukouvou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moneda</surname></persName>
		</author>
		<idno type="DOI">10.1145/3529190.3534718</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th International Conference on Pervasive Technologies Related to Assistive Environments</title>
				<meeting>the 15th International Conference on Pervasive Technologies Related to Assistive Environments</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="646" to="652" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Use of XR technologies for enhancing visitors&apos; experience at industrial museums</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sylaiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gkagka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Fidas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Vlachou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lampropoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Plytas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Nomikou</surname></persName>
		</author>
		<idno type="DOI">10.1145/3609987.3610008</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st Workshop on Accessibility and Multimodal Interaction Design Approaches in Museums for People with Impairments, CEUR-WS.org, 2nd International Conference of the ACM Greek SIGCHI Chapter</title>
				<meeting>the 1st Workshop on Accessibility and Multimodal Interaction Design Approaches in Museums for People with Impairments, CEUR-WS.org, 2nd International Conference of the ACM Greek SIGCHI Chapter<address><addrLine>Athens, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
