<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main"></title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Anna</forename><surname>Matamala</surname></persName>
							<email>anna.matamala@uab.cat</email>
							<affiliation key="aff0">
								<orgName type="institution">Universitat Autònoma de Barcelona</orgName>
								<address>
									<addrLine>Edifici K-1002</addrLine>
									<postCode>08193</postCode>
									<settlement>Bellaterra, Barcelona</settlement>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Estel·la</forename><surname>Oncins</surname></persName>
							<email>estella.oncins@uab.cat</email>
							<affiliation key="aff0">
								<orgName type="institution">Universitat Autònoma de Barcelona</orgName>
								<address>
									<addrLine>Edifici K-1002</addrLine>
									<postCode>08193</postCode>
									<settlement>Bellaterra, Barcelona</settlement>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">D77D984031DC479BC842963130916594</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:45+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The metaverse has the potential to extend the physical world, allowing users to seamlessly communicate and interact in a new virtual ecosystem. Yet it is paramount to ensure that these immersive experiences are accessible to all users, regardless of their needs. This paper presents some key aspects to be considered when developing a metaverse for all, reporting on the information gathered to develop two technical specifications approved at the ITU Focus Group on Metaverse.</p><p>Keywords: metaverse, accessibility, users, translation</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">An ecosystem of virtual worlds offering immersive experiences</head><p>The metaverse is, according to the definition adopted in December 2023 by the International Telecommunication Union Focus Group on Metaverse, "An integrative ecosystem of virtual worlds offering immersive experiences to users, that modify pre-existing and create new value from economic, environmental, social and cultural perspectives". Many technologies converge in the metaverse, which comprises many virtual spaces with virtual content and virtual people who take the form of avatars. The metaverse is expected to offer experiences in a broad range of fields such as education, healthcare, culture, or shopping, to name a few. The metaverse is still in the process of being defined and built <ref type="bibr" target="#b0">[1]</ref>, so there is a unique opportunity to adopt a born-accessible approach, as promoted by Orero <ref type="bibr" target="#b1">[2]</ref>, and create it in such a way that anyone, regardless of capabilities, can access it <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>. This paper aims to highlight some key aspects that could be considered when developing a metaverse for all, reporting on the information gathered to develop two technical specifications approved at ITU: Technical Specification ITU FGMV-04 Requirements of accessible products and services in the metaverse: Part I - System perspective <ref type="bibr" target="#b4">[5]</ref> and ITU FGMV-05 Requirements of accessible products and services in the metaverse: Part II - User perspective <ref type="bibr" target="#b5">[6]</ref>. The paper adopts a user-centric perspective, describing the actions users may want to perform in the metaverse and the accessibility requirements that should be considered (Section 2). Section 3 describes some accessibility services that could be integrated into the metaverse, pointing to some existing research. The article finishes with some conclusions and future work in this field.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">User actions in the metaverse</head><p>There are four main actions that users are expected to perform in the metaverse <ref type="bibr" target="#b5">[6]</ref>: a) accessing the metaverse; b) creating an avatar identity in the metaverse; c) navigating in the metaverse; and d) interacting in the metaverse.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Accessing the metaverse</head><p>Virtual worlds are generally accessed through head-mounted displays, but these devices may not cater for the needs of certain users. This is why alternative devices should be considered, so that users can choose the one most suitable to their needs or preferences. These devices include hand-based input devices, non-hand-based input devices, and motion input devices, as explained by Park and Kim <ref type="bibr" target="#b6">[7]</ref>. Users with disabilities may use their own assistive technologies to access digital content; hence, interoperability between these technologies and the hardware components used to access the metaverse should be guaranteed in an accessible metaverse. One key aspect when accessing the metaverse is the authentication process, in which specific software may be used. Again, it is paramount that users have alternative options to authenticate themselves, for instance via spoken or written text or via haptics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Creating an avatar identity</head><p>An avatar is a medium that projects one's identity within virtual spaces <ref type="bibr" target="#b0">[1]</ref>. These avatar representations range from self-representations to totally new representations. In other words, users may want to have a faithful depiction of themselves, or they may want to be represented by someone totally different, even a non-human avatar. A metaverse for all should give users a choice of self-expression, allowing them to incorporate a wheelchair or a blind cane in their avatar, should they wish to do so.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Navigating</head><p>Once users have been able to access the metaverse, they are expected to navigate through it, perceiving the content that is available. User needs may be varied: one user may need to navigate with a haptic controller, whereas another may need to rely only on visual input, without access to the audio elements, to give two examples. Software components used in building virtual worlds should consider accessibility features and interoperability with assistive devices. Some of the features already considered for web-based digital content, such as visual contrast or text magnification, may also be applicable in this context.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Interacting</head><p>Humans are inherently social, so interaction in the metaverse will be central. As in the physical world, interaction can take place by different means: through oral and written language, but also through non-verbal communication (e.g., facial expressions and gestures, among other features). This interaction is a bidirectional process in which users can give input and receive responses. To cater for the needs of all users, this interaction cannot be limited to a single modality but must provide alternative options. For example, a user may want to provide input through spoken words, keyboards, gestures, or eye-tracking. Similarly, a user may prefer to receive oral or written responses, depending on their sensory capabilities.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Accessibility and translation services towards an accessible metaverse</head><p>Accessibility and translation services will play a fundamental role in offering an accessible metaverse, similarly to the key role they already play in the physical and digital worlds. Next, we present some key services <ref type="bibr" target="#b5">[6]</ref> together with existing research that points to how these access services could be integrated into immersive environments. We would expect virtual worlds, services, and products to clearly identify the accessibility and translation services available and to allow users to easily customize their choices.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Subtitling</head><p>Subtitling can be in the same language (intralinguistic) or in a different language (interlinguistic) and offers a written alternative to the spoken words, hence benefitting those who cannot access the audio for various reasons or those who do not understand the source language. When specifically addressed to persons who cannot access the audio, subtitles also transfer non-speech information such as character or language identification, paralinguistic elements, music, and sounds, among other features. Subtitles follow spatio-temporal constraints so that users can read the text while enjoying the visuals. Subtitles present different features which users should be able to customize, such as font size, font type, font colour, contrast, or alignment.</p><p>Research on subtitling in immersive media started with the BBC study by Brown et al. <ref type="bibr" target="#b7">[8]</ref> on four solutions for rendering subtitles and continued with the investigations by Rothe et al. <ref type="bibr" target="#b8">[9]</ref>. These studies were the basis for the ImAc project studies <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b11">12]</ref>, which focused on two key aspects: where to place the subtitles in the virtual environment (subtitle positioning), and how to guide viewers to the speaker in the virtual world (guiding mechanism). When the speaker is not in the field of view of a user who cannot access the audio, a mechanism is needed to guide them. The project tested an arrow and a radar as guiding mechanisms, and also tested subtitles which were always visible to the user, subtitles attached to the speaker, and subtitles placed in a fixed position every 120°. The always visible subtitles and the arrow were the preferred solutions. More recently, Brescia-Zapata et al. <ref type="bibr" target="#b12">[13]</ref> have explored these issues further, using eye-tracking to compare always visible subtitles (head-locked in their terminology) versus fixed subtitles and exploring the usefulness of coloured subtitles in three different countries. Although results on subtitle colour are not conclusive, the investigations seem to indicate that always visible subtitles are currently the best option for integrating subtitles in virtual environments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Transcripts</head><p>Transcripts provide a written verbatim alternative to spoken words and may also include written alternatives to non-speech audio information. For example, one could imagine an oral lecture in an educational metaverse which is transcribed and aligned with the lecturer's presentation; this would be helpful not only to students who cannot access the audio but also to anyone wishing to go back to the content. Transcripts are presented in the same language as the audio and, contrary to subtitles, they do not have specific spatio-temporal constraints. Some transcripts highlight certain words as they are spoken, which benefits users with reading difficulties. As in subtitling, one would expect users to be able to customize transcripts in the virtual world, choosing features such as font type, colour, or contrast. Other questions to be explored might be text position and display, especially in the case of interactive transcripts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Audio description</head><p>Audio description translates the visual into spoken words <ref type="bibr" target="#b13">[14]</ref>. It provides a description of visual elements and of some sound elements which may be difficult to understand without access to the visuals. This is especially useful for users who cannot access the visuals and is applicable to both dynamic and static content. Audio description is offered where the source content has silent spaces. Sometimes it can be offered together with an audio introduction, an audio text that provides key relevant information about the content before the user accesses it.</p><p>Research on audio description in immersive environments is scarce. As part of the ImAc project <ref type="bibr" target="#b11">[12]</ref>, three AD modes with different sound treatments were tested: in the Static mode, the sound was located at the user's side, as if someone were sitting close to the user; in the Dynamic mode, the sound was placed where the main action being described was taking place; in the Classic mode, the ambisonic sound was placed above the user's head. Although the expectation was that the Dynamic mode would contribute to a better understanding of the story, the participants felt confused <ref type="bibr" target="#b14">[15]</ref> and were more interested in the script characteristics. A second test in Spain and the UK focused on these aspects, comparing the so-called Classic, Radio, and Extended AD, always with the same sound treatment. Classic AD offered a standard approach, whereas Radio AD featured a more engaging description. Extended AD allowed users to pause the video and listen to an additional description. British users preferred the Radio approach, whereas Spanish users selected the Extended AD as their preferred option. In any case, these tests show that immersive media may open the door to new AD forms, and sound treatment, although unsuccessful in a first attempt, may be an issue worth exploring in certain types of virtual content.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Audio subtitling</head><p>Audio subtitling refers to written subtitles which are converted into spoken words, be it through a text-to-speech system or a human voice. This is especially useful for users who cannot see or cannot read the subtitles and do not understand the source language being subtitled. Audio subtitles can be an independent access service or they can be integrated with audio description. Research on audio subtitles is limited, as explained by Matamala <ref type="bibr" target="#b15">[16]</ref>, but in the field of immersive media it is almost non-existent, with only some initial tests as part of ImAc <ref type="bibr" target="#b11">[12]</ref> focusing on user preferences regarding the combination of audio description with audio subtitling.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Interpreting</head><p>Interpreting can take place between oral languages (for example, from Catalan into English) or between an oral language and a visual-gestural language (for instance, from Catalan into Catalan Sign Language). Interpreting benefits those who do not understand a language or cannot hear it by providing an alternative in another language. Whereas interpreting between oral languages is seen as a translation service, interpreting between oral and sign languages is seen as an access service; both, however, offer access to content that would otherwise not be accessible to the user. In an ideal metaverse, users could choose the language of the interpretation together with some other features: the volume in the case of oral interpreting, or the positioning of the sign language interpreter and the choice of avatar in the case of sign language interpreting.</p><p>Research on interpreting between oral languages in virtual worlds has mainly focused on training and the use of virtual learning environments (VLEs) for collaborative learning in interpreter education. According to the results of a pilot test conducted by Braun et al. <ref type="bibr" target="#b16">[17]</ref>, while VLEs are becoming more accessible due to technological advances and their use has increased rapidly, developing appropriate VLEs and integrating them as a sustainable solution creates further challenges.</p><p>As for research on sign language interpreting in immersive media, ImAc pilot testing addressed three main aspects: a) display of the signer video, which could be continuous or non-continuous: whereas in the former the signer window was always visible, in the latter it was only present when interpreting was needed; b) presentation of sign language only versus presentation of sign language and subtitles; and c) speaker representation, either through an emoji or a textual description <ref type="bibr" target="#b11">[12]</ref>. The pilot test took place in Germany with a limited number of users and demonstrated a preference for non-continuous display, simultaneous presentation of sign language and subtitles, and textual descriptions to identify speakers. As with interpreting between oral languages, it can be stated that, despite recent technological advancements for sign language interpreting in immersive environments, there are still unsolved questions to be investigated to increase the effectiveness of communication <ref type="bibr" target="#b17">[18]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.6.">Easy-to-understand language</head><p>Easy-to-understand language is an umbrella term for different simplified language varieties (from Easy to Plain Language) that enhance comprehensibility. Easy Language (also called Easy-to-Read) is generally addressed to those who have difficulties reading or understanding language, whereas Plain Language is addressed to all. Although content in the metaverse may want to play with different language varieties, services addressed to all would be expected to be explained in Plain Language, providing Easy Language alternatives where possible.</p><p>Research on easy-to-understand language has generally focused on written texts, with a recent interest in audiovisual content and its relationship with access services <ref type="bibr" target="#b18">[19]</ref>. As part of the ImAc project, Oncins et al. <ref type="bibr" target="#b19">[20]</ref> compared subtitles designed for deaf and hard-of-hearing individuals with simplified subtitles aimed at people with cognitive disabilities in immersive environments. According to the results of the pilot test, simplified subtitles were generally preferred, as they caused less distraction and permitted greater focus on the primary visual content.</p><p>To guarantee access to the metaverse for persons with different cognitive needs, one could suggest offering easy instructions on how to access and navigate the metaverse, together with an easy way to go to a safe place in case users feel overwhelmed, like the quiet rooms found in physical spaces.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.7.">Revoicing</head><p>Revoicing implies translating a source content and voicing it in another language, be it through dubbing or voice-over. In dubbing, the original voices are replaced and there are strict synchrony constraints, so that the audience thinks the actors are speaking in the target language <ref type="bibr" target="#b20">[21]</ref>. In voice-over, as described by Franco et al. <ref type="bibr" target="#b21">[22]</ref>, the translated version overlaps with the source version and there are fewer synchrony constraints.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions</head><p>This article has put forward some of the opportunities the metaverse poses for creating a truly accessible virtual world. It has presented some of the existing access and translation services which are available in our physical world and could be transferred to the new ecosystem. Research on the best implementation strategies in the virtual world exists for certain access services, but more extensive research is still needed. For instance, the implementation of new access services based on artificial intelligence opens the door to a myriad of investigations in which users need to be central. The metaverse is for all users to populate; hence, user-centric methodologies need to be at the core of future developments.</p></div>		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>The authors are members of TransMedia Catalonia, a research group funded by the Department of Universities and Research of the Catalan government under the SGR funding scheme (2021SGR00077) and Xarxa AccessCat (2021XARDI00007).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Metaverse beyond the hype: Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">K</forename><surname>Dwivedi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hughes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Baabdullah</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ijinfomgt.2022.102542</idno>
		<ptr target="https://doi-org.are.uab.cat/10.1016/j.ijinfomgt.2022.102542" />
	</analytic>
	<monogr>
		<title level="j">Int. J. Inf. Manag</title>
		<imprint>
			<biblScope unit="volume">66</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Born accessible: beyond raising awareness</title>
		<author>
			<persName><forename type="first">P</forename><surname>Orero</surname></persName>
		</author>
		<ptr target="https://ddd.uab.cat/record/222130" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">L.-H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Braud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hui</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2110.05352</idno>
		<title level="m">All one needs to know about metaverse: A complete survey on technological singularity, virtual ecosystem, and research agenda</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Inclusive Immersion: a review of efforts to improve accessibility in virtual reality, augmented reality and the metaverse</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dudley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Garaj</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10055-023-00850-8</idno>
		<ptr target="https://doi-org.are.uab.cat/10.1007/s10055-023-00850-8" />
	</analytic>
	<monogr>
		<title level="j">Virtual Reality</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="2989" to="3020" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Oncins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Eugeni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Matamala</surname></persName>
		</author>
		<title level="m">Requirements of accessible products and services in the metaverse: Part I -System design perspective</title>
				<imprint>
			<publisher>ITU</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>ITU Focus Group Technical Specification</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Oncins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Eugeni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Matamala</surname></persName>
		</author>
		<title level="m">ITU Focus Group Technical Specification Requirements of accessible products and services in the metaverse: Part II -User perspective</title>
				<imprint>
			<publisher>ITU</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A Metaverse: taxonomy, components, applications, and open challenges</title>
		<author>
			<persName><forename type="first">S.-M</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-G</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="4209" to="4251" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Exploring subtitle behaviour for 360° video</title>
		<author>
			<persName><forename type="first">A</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Turner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Patterson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schmitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Armstrong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Glancy</surname></persName>
		</author>
		<ptr target="https://www.bbc.co.uk/rd/publications/whitepaper330" />
	</analytic>
	<monogr>
		<title level="m">White Paper whp 330</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Dynamic Subtitles in Cinematic Virtual Reality</title>
		<author>
			<persName><forename type="first">S</forename><surname>Rothe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hussmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th European Interactive TV Conference (ACM TVX 2018)</title>
				<meeting>the 15th European Interactive TV Conference (ACM TVX 2018)</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Subtitles in virtual reality: guidelines for the integration of subtitles in 360º content</title>
		<author>
			<persName><forename type="first">B</forename><surname>Agulló</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Matamala</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Íkala</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="643" to="661" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Subtitling for the deaf and hard-of-hearing in immersive environments: results from a focus group</title>
		<author>
			<persName><forename type="first">B</forename><surname>Agulló</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Matamala</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Jostrans. The Journal of Specialised Translation</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="217" to="235" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Accessibility in 360º videos: methodological aspects and main results of the evaluation activities in the ImAc project</title>
		<author>
			<persName><forename type="first">A</forename><surname>Matamala</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sendebar</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="65" to="89" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Subtitles in VR 360° video. Results from an eye-tracking experiment</title>
		<author>
			<persName><forename type="first">M</forename><surname>Brescia-Zapata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Krejtz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">T</forename><surname>Duchowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Hughes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Orero</surname></persName>
		</author>
		<idno type="DOI">10.1080/0907676X.2023.2268122</idno>
	</analytic>
	<monogr>
		<title level="j">Perspectives</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Audio description</title>
	</analytic>
	<monogr>
		<title level="m">New Perspectives Illustrated</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Maszerowska</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Matamala</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Orero</surname></persName>
		</editor>
		<imprint>
			<publisher>Benjamins</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Audio description in 360º content: results from a reception study</title>
		<author>
			<persName><forename type="first">A</forename><surname>Fidyka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Matamala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Soler-Vilageliu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Arias-Badia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">SKASE Journal of Translation and Interpretation</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="14" to="32" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Audio subtitling</title>
	</analytic>
	<monogr>
		<title level="m">The Routledge Handbook of Audio Description</title>
				<editor>
			<persName><forename type="first">Ch</forename><surname>Taylor</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Perego</surname></persName>
		</editor>
		<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">&apos;It&apos;s like being in bubbles&apos;: affordances and challenges of virtual learning environments for collaborative learning in interpreter education</title>
		<author>
			<persName><forename type="first">S</forename><surname>Braun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Davitti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Slater</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Interpreter and Translator Trainer</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">3</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Sign language in immersive virtual reality: design, development, and evaluation of a virtual reality learning environment prototype</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kasapakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Dzardanova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Vosinakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Agelada</surname></persName>
		</author>
		<idno type="DOI">10.1080/10494820.2023.2277746</idno>
	</analytic>
	<monogr>
		<title level="j">Interactive Learning Environments</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Easy-to-understand language in audiovisual translation and accessibility: state of the art and future challenges</title>
		<author>
			<persName><forename type="first">A</forename><surname>Matamala</surname></persName>
		</author>
		<idno type="DOI">10.18355/XL.2022.15.02.10</idno>
	</analytic>
	<monogr>
		<title level="j">XLinguae</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="130" to="144" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Accessible scenic arts and Virtual Reality</title>
		<author>
			<persName><forename type="first">E</forename><surname>Oncins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bernabé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Montagud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Arnáiz-Uzquiza</surname></persName>
		</author>
		<ptr target="https://raco.cat/index.php/MonTI/article/view/368776" />
	</analytic>
	<monogr>
		<title level="j">MonTI. Monografías de Traducción e Interpretación</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="214" to="241" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Audiovisual translation: dubbing</title>
		<author>
			<persName><forename type="first">F</forename><surname>Chaume</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>St. Jerome Publishing</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Voice-over Translation: An Overview</title>
		<author>
			<persName><forename type="first">E</forename><surname>Franco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Matamala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Orero</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
			<publisher>Peter Lang</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
