<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1145/2070481.2070494</article-id>
      <title-group>
<article-title>Effect of Localization, Pitch, and Gain on Auditory Displacement for Pseudo-Force Feedback: An Exploratory Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniel Oswaldo Lopez Tassara</string-name>
          <email>daniel.lopez.24@aclab.esys.tsukuba.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Naoto Wakatsuki</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Keiichi Zempo</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Workshop</string-name>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Pseudo-haptics, Binaural sound, Accessibility</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Graduate School of Science and Technology, University of Tsukuba</institution>
          ,
          <addr-line>Tsukuba</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of Systems and Information Engineering, University of Tsukuba</institution>
          ,
          <addr-line>Tsukuba</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>5</volume>
      <fpage>57</fpage>
      <lpage>64</lpage>
      <abstract>
        <p>This paper reports on exploratory trials of an interactive virtual river system where sound simulates water pushing the user's hand, aiming to investigate the role of auditory cues in building displacement for pseudo-force feedback. Two trial sessions—one with restricted sound movement—were conducted during live demonstrations, using an embodiment questionnaire tailored for audio-only environments to collect user feedback. Analysis focused on patterns and contrasts across sessions rather than performance comparison, indicating that effective auditory displacement may rely on accurate sound localization—with variations in pitch (frequency) and gain (volume) enhancing the stimuli—when used in high-agency interactions. As expected in early-stage development, tactile responses were unclear, yet most participants still reported feeling some type of sensation, either tactile or interpreted as such. Ultimately, these insights advance sound-based pseudo-haptics as a step toward more accessible immersive environments for the visually impaired.</p>
      </abstract>
      <kwd-group>
        <kwd>Pseudo-haptics</kwd>
        <kwd>Binaural sound</kwd>
        <kwd>Accessibility</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Designing inclusive immersive environments demands multi-modal support tailored to user needs,
especially for the visually impaired. As immersive technologies continue to evolve and expand,
embedding accessibility into the design process is essential for creating inclusive experiences [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Inclusive
immersion requires thoughtful adaptation of content and interaction to suit diverse user needs and
sensory modalities [
        <xref ref-type="bibr" rid="ref1 ref3">1, 3</xref>
        ]. This is particularly challenging when it comes to visually impaired users,
who face major barriers due to the mainly visual nature of virtual reality, as discussed by Zhao et al.
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In this regard, literature on immersive technologies emphasizes the role of auditory and tactile
modalities in overcoming such barriers [
        <xref ref-type="bibr" rid="ref3 ref5">5, 3</xref>
        ].
      </p>
      <p>
        Audio-tactile interaction enables new haptic approaches to enhance accessibility for visually impaired
users. In this context, pseudo-haptics uses cross-modal interaction to evoke tactile sensations by
stimulating other senses, thus offering a simpler and more affordable alternative to traditional haptic
devices [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. However, most pseudo-haptic designs rely on visual stimuli, making them inaccessible
for visually impaired users [
        <xref ref-type="bibr" rid="ref7">7, 8</xref>
        ]. For this group, pseudo-haptics using sound presents a promising yet
still underexplored alternative [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ], despite existing research on audio-tactile interaction [9, 10].
      </p>
      <p>
        Recent research on sound-based pseudo-haptics reflects growing efforts to address this gap [
        <xref ref-type="bibr" rid="ref6">11, 12, 13, 6</xref>
        ]. Among these, Lopez et al. [11] explored the use of sound localization to induce force sensations,
offering valuable technical insights and a comprehensive experimental setup. Building on their work,
this paper reports on exploratory trials conducted using an improved version of that setup (see Fig. 1),
analyzing system features and sound parameters through user feedback to uncover their role in the
pseudo-haptic experience.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Despite limited research on sound-based pseudo-haptics, Lopez et al. [11] draw from prior work to
propose a new design. Bosman et al. [12] found realistic sounds and hand ownership key for believability,
while Kitagawa [9] showed that spatial sound cues can influence tactile and body perception. Building
on this, Lopez et al. propose a pseudo-force feedback design based on sound localization, implemented
across two water-stream scenarios. In the first, users keep their hand steady in a virtual river while
sound simulates the flow of water pushing against it. In the second, they move their hand through a
virtual water tank, with sounds indicating streams that either assist or resist the motion. Both scenarios
omit visual cues and use binaural sound to simulate a pushing force through the displacement of the
hand’s perceived auditory position in virtual space, creating a conflict with its actual position.</p>
      <p>The design presented by Lopez et al. [11] employs the same pseudo-haptic technique as the HEMP
system by Pusch et al. [15], but using a different sensory modality. The HEMP study presents an
augmented reality experience via a video see-through HMD, where a virtual force field—rendered as
a steam tube with flowing particles—displaces the hand’s visual position upon immersion, creating a
conflict with its real, kinesthetic position. In that sense, while both systems use spatial sensory conflict
to create a pseudo-haptic effect, Pusch et al. rely on visual displacement, whereas Lopez et al. employ
auditory displacement.</p>
      <p>
        Displacement is a widely used pseudo-haptic technique, yet important aspects remain to be explored.
Essentially, it involves applying translational or rotational movement to user input to produce a distorted
output, leading to a sensory conflict that elicits a pseudo-haptic effect [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. As discussed by Ujitoko
et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], displacement has been used in prior research to elicit various haptic properties, such as
weight, compliance, friction, and even force. However, while its use with visual stimuli has been
extensively explored, its application through auditory stimuli has received little attention (ibid). Given
the distinct nature of auditory and visual modalities, further research is needed to understand how
acoustic parameters—particularly sound localization—can support pseudo-force illusions.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. System</title>
      <p>The system used in this study—originally introduced in [14] and shown in Fig. 1—extends the work
of Lopez et al. [11] to provide a more robust setup for studying pseudo-haptic force feedback using
sound localization, reinforced by obstruction and variations in pitch (frequency) and gain (volume). It
features a virtual river scenario where sound simulates water pushing the user’s hand, while allowing
free interaction by sweeping the hand through the environment.</p>
      <sec id="sec-3-1">
        <title>3.1. Method</title>
        <p>As mentioned earlier, the pseudo-haptic effect emerges from a conflict between the kinesthetic and
auditory position of the hand. The auditory position consists of river-like sounds projecting the hand’s
real, kinesthetic position into the virtual environment through binaural localization. These spatial sound
cues support hand ownership, and their displacement alters self-body perception (i.e., proprioception
[16]), resulting in a sensory conflict that evokes a force sensation.</p>
        <p>The displacement involves a lateral shift of the hand’s auditory position after immersion, emulating
the dragging force of the river. It uses a stepwise function with acceleration and deceleration phases to
provide a realistic approximation of the velocity and movement of the hand. First, the velocity increases
exponentially from rest to match the river’s velocity and direction, then decays exponentially towards
zero as the hand, attached to the arm and body, resists the river’s flow. These phases are described by
the following formulas.</p>
        <sec id="sec-3-1-1">
          <title>Acceleration Phase</title>
          <p>$v_{\mathrm{hand}}(t) = v_{\max}\left(1 - \exp(-\alpha \cdot t)\right)$ (1)</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>Deceleration Phase</title>
          <p>$v_{\mathrm{hand}}(t) = v_{\max} \exp\left(-\beta \cdot (t - t_{\mathrm{peak}})\right)$ (2)</p>
          <p>Where:
• $v_{\mathrm{hand}}(t)$: velocity of the hand at time $t$
• $v_{\max}$: maximum velocity (the river’s velocity)
• $\alpha$: acceleration constant
• $\beta$: deceleration constant
• $t$: time
• $t_{\mathrm{peak}}$: time at which the maximum velocity is reached</p>
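          <p>For illustration, the stepwise profile can be written as a small Kotlin function, as shown below; the default parameter values are hypothetical placeholders, since the actual values were set through preliminary testing.</p>
          <preformat>
import kotlin.math.exp

/**
 * Stepwise velocity profile of the auditory displacement, Eqs. (1)-(2):
 * exponential rise toward the river's velocity, then exponential decay
 * after the peak. All default values are illustrative, not the tuned ones.
 */
fun handVelocity(
    t: Double,           // elapsed time since displacement onset [s]
    vMax: Double = 0.3,  // river (maximum) velocity [m/s], hypothetical
    alpha: Double = 2.0, // acceleration constant, hypothetical
    beta: Double = 1.5,  // deceleration constant, hypothetical
    tPeak: Double = 2.0  // time at which vMax is approximately reached [s]
): Double =
    if (t >= tPeak) vMax * exp(-beta * (t - tPeak)) // deceleration, Eq. (2)
    else vMax * (1.0 - exp(-alpha * t))             // acceleration, Eq. (1)
          </preformat>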
          <p>The displacement parameters were fine-tuned to create a noticeable sensory conflict while remaining
subtle enough to build the illusion. These parameters—maximum velocity and acceleration/deceleration
factors—control the displacement distance over time, and their optimal values were set through
preliminary testing to achieve a noticeable yet believable effect (see Fig. 3). These values enable continuous
displacement with multiple feedback cycles before stopping, simulating the river persistently pushing
the hand.</p>
          <p>The perception of the hand being pushed leads to a corrective process that completes the pseudo-haptic
experience. The auditory displacement alters proprioception, creating a conflict that the user is expected
to resolve through a motor reaction. This involves muscle effort to maintain the original auditory
position of the hand, helping to build the mental representation of force being exerted. Therefore, the
user must be initially instructed to keep the hand steady in order to trigger this corrective process in
response to displacement.</p>
          <p>Finally, it is important to note that the sensory conflict occurs in a comprehensive auditory virtual
environment, with sound obstruction and pitch-gain variations also resulting from user interaction.
These sound cues are then leveraged to reinforce the sensory conflict by improving realism and building
expectation around the auditory displacement, thus making it more convincing.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Implementation</title>
        <p>The system is implemented as an Augmented Reality application in Java, Kotlin, and C++, running on
an Android smartphone, leveraging hand position as the primary input and binaural sound cues as the
primary output. Hand position is captured from mid-air gestures via the smartphone’s camera using the
MediaPipe library, while binaural sound cues are rendered via Bluetooth stereo headphones using the
OpenAL library. Additionally, to optimize hand tracking, the smartphone is mounted on a cardboard
box that blocks external interference and includes internal light bulbs for extra illumination (see Fig. 2).</p>
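        <p>As a rough sketch of this input path, the following Kotlin function maps a normalized hand x-coordinate, as produced by camera-based hand tracking, to a lateral position in the virtual scene; the mirroring and river extent are illustrative assumptions rather than the system's actual calibration.</p>
        <preformat>
/**
 * Maps a normalized landmark x-coordinate (0.0..1.0, as typically produced
 * by MediaPipe hand tracking) to a lateral position in the virtual river.
 * Both the mirroring and the half-width value are assumptions.
 */
fun toVirtualX(
    normalizedX: Float,            // camera-space x from the hand tracker
    riverHalfWidthM: Float = 0.5f  // half-width of the virtual river [m], hypothetical
): Float {
    // Center the coordinate (0.5 maps to 0) and mirror it, since the
    // camera faces the user; then scale to the river's lateral extent.
    return -(normalizedX - 0.5f) * 2f * riverHalfWidthM
}
        </preformat>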
        <p>The audio virtual environment includes three feature layers: environmental parameters that govern
global sound behavior—such as listener position and orientation, attenuation model, and reference and
maximum distances—configured for realistic first-person interaction; sound sources, playing river-like
sounds within those parameters, each with specific properties (gain, pitch, position); and custom features,
which integrate the first two to enable system interactivity. Among these interactions, hand obstruction
simulates how the hand muffles the river sound—as it approaches or enters the virtual river—by mainly
reducing high frequencies based on its position, with asymmetrical effects in each ear. The hand–river
interaction also features distinct sound cues based on contact and movement, including splash sounds
on entry and exit, dynamic positioning of the hand-in-river sound based on lateral hand motion, and
pitch and gain changes when moving against (left) or with the flow (right).</p>
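        <p>The sketch below illustrates how the first two layers might be configured in Kotlin; it uses LWJGL's desktop OpenAL bindings for readability, whereas the actual system drives OpenAL from C++ on Android, and all numeric values are hypothetical.</p>
        <preformat>
import org.lwjgl.openal.AL
import org.lwjgl.openal.AL10.*
import org.lwjgl.openal.ALC
import org.lwjgl.openal.ALC10.*
import java.nio.ByteBuffer
import java.nio.IntBuffer

fun main() {
    // Open the default audio device and make an OpenAL context current.
    val device = alcOpenDevice(null as ByteBuffer?)
    val context = alcCreateContext(device, null as IntBuffer?)
    alcMakeContextCurrent(context)
    AL.createCapabilities(ALC.createCapabilities(device))

    // Environmental layer: listener pose and attenuation model for
    // first-person interaction (values are illustrative).
    alDistanceModel(AL_INVERSE_DISTANCE_CLAMPED)
    alListener3f(AL_POSITION, 0f, 0f, 0f)

    // Sound-source layer: a river-like source with per-source properties.
    val riverSource = alGenSources()
    alSourcef(riverSource, AL_REFERENCE_DISTANCE, 1.0f) // hypothetical
    alSourcef(riverSource, AL_MAX_DISTANCE, 10.0f)      // hypothetical
    alSourcef(riverSource, AL_GAIN, 1.0f)
    alSourcef(riverSource, AL_PITCH, 1.0f)
    alSource3f(riverSource, AL_POSITION, 0f, -0.3f, -0.5f) // hypothetical
    // Buffer loading and alSourcePlay(...) omitted for brevity.

    alcDestroyContext(context)
    alcCloseDevice(device)
}
        </preformat>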
        <p>The pseudo-haptic effect relies on an involuntary sound displacement. Users are first asked to
keep their hand’s auditory position centered, causing the pitch and gain of the hand-in-river sound to
increase—simulating resistance against the water and priming the upcoming shift. Then, the sound
moves to the right (with the flow), simulating the river’s force pushing the hand (see Fig. 4).</p>
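        <p>A minimal sketch of this involuntary shift follows: each frame, the stepwise velocity profile is integrated and applied only to the auditory position of the hand-in-river source, so the real hand and its auditory image diverge. The class and its frame-loop wiring are illustrative assumptions.</p>
        <preformat>
import org.lwjgl.openal.AL10

/**
 * Per-frame update of the hand-in-river source during the involuntary
 * displacement. A velocity profile (e.g., handVelocity() from the
 * earlier sketch) is integrated with simple Euler steps; only the
 * source's auditory x-position moves, creating the conflict with the
 * hand's real, kinesthetic position.
 */
class AuditoryDisplacement(
    private val sourceId: Int,               // OpenAL source of the hand-in-river sound
    private val velocity: (Double) -> Double // stepwise profile, Eqs. (1)-(2)
) {
    private var elapsed = 0.0
    private var offsetX = 0.0

    fun update(dtSeconds: Double, baseX: Float, y: Float, z: Float) {
        elapsed += dtSeconds
        offsetX += velocity(elapsed) * dtSeconds // rightward, with the flow
        AL10.alSource3f(sourceId, AL10.AL_POSITION, baseX + offsetX.toFloat(), y, z)
    }
}
        </preformat>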
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Exploratory trials</title>
      <p>Before moving on to more rigorous testing, eventually engaging visually impaired users, an exploratory
phase with an open pool of participants is essential for gathering initial insights to refine the system.
Exploratory trials were conducted during two public demonstration events—SIGGRAPH Asia 2024
(Session A) and Augmented Humans 2025 (Session B)—where participants engaged freely with the
system in naturalistic conditions (see Figures 5 and 6). The format of the events limited demographic
data collection but allowed for spontaneous user feedback and valuable observations for early-stage
evaluation of the role of sound cues in the pseudo-haptic experience.</p>
      <p>Both sessions used the same setup, but Session B added a water-resistance feature for deeper
exploration. This feature creates a drag effect restricting leftward movement (against the flow) of the hand’s
auditory position, triggering audio-tactile interaction to adjust proprioception and reinforce the sensory
conflict. This effect mirrors the stepwise function from Section 3, integrated with respect to time to
produce dragging distance instead of velocity (since velocity now depends on user movement), with
the hand’s real position as the controlling variable. Moreover, the parameters were adjusted to ensure a
perceivable restricting effect (see Fig. 7).</p>
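      <p>One way to read this is to integrate the acceleration-phase velocity (Eq. 1) over time and substitute the real hand's leftward travel as the controlling variable; the Kotlin sketch below follows that reading, with illustrative parameter values, and is not the published implementation.</p>
      <preformat>
import kotlin.math.exp

/**
 * Dragging distance for the Session B water-resistance feature: Eq. (1)
 * integrated with respect to time, reparameterized by the real hand's
 * leftward travel s. Closed form and values are an interpretation.
 */
fun dragDistance(
    s: Double,          // leftward travel of the real hand [m]
    vMax: Double = 0.3, // maximum velocity, hypothetical
    alpha: Double = 2.0 // acceleration constant, hypothetical
): Double =
    // Integral of vMax * (1 - exp(-alpha * u)) du from 0 to s
    vMax * (s - (1.0 - exp(-alpha * s)) / alpha)
      </preformat>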
      <p>The trials followed a two-phase protocol—familiarization and pseudo-haptic experience—with
participants blindfolded throughout to eliminate visual input. During familiarization, users moved their
hands through the virtual environment, while auditory feedback helped them grasp the interaction
dynamics and foster a sense of hand ownership. In the pseudo-haptic experience, they were asked to keep
their hand’s auditory position centered in the river, triggering the priming cues and the involuntary
displacement described earlier.</p>
      <p>After each trial, participants filled out a questionnaire assessing usability, embodiment, and first
impressions of the pseudo-haptic experience. The questionnaire was adapted from Gonzalez-Franco
and Peck’s Embodiment Questionnaire [17] to better suit an audio-only virtual environment (see Fig. 8).
It included Likert-scale items ranging from –3 (strongly disagree) to +3 (strongly agree), assessing
Usability (Q1); Body Ownership (Q2); Body Location (Q3, Q4); Agency and Motor Control (Q5, Q6);
Tactile Sensations (Q7, Q8, Q9); and Response to External Stimuli (Q10–Q14). It also contained one
multiple-select item on events triggering tactile sensation (Q15, Fig. 9), and four open-ended questions
for qualitative feedback on the overall experience and system improvements.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>Data were collected from 30 participants—17 in Session A and 13 in Session B. One participant from
Session B was excluded due to hand tracking issues, and three participants did not answer all questions.
These exclusions have minimal impact, as most analyses are conducted on a per-question basis using
the available sample size. Results are reported descriptively based on question category.</p>
      <p>Participants consistently rated the system as easy to understand. The usability question (Q1) scored
high in Session A (M = 1.82, SD = 0.73), and even higher in Session B (M = 2.08, SD = 1.51).</p>
      <p>Body ownership and location were strongly perceived. The feeling that the heard hand was one’s
own (Q2) was rated highly, especially in Session B (M = 2.50, SD = 0.67) compared to Session A (M =
1.71, SD = 0.47). Perception of the hand’s location in the river (Q3) showed moderate agreement in both
sessions (Session A: M = 1.65, SD = 0.86; Session B: M = 1.67, SD = 1.37). Participants also agreed that the
sound helped position the hand at the river’s center (Q4), with higher ratings in Session B (M = 2.33, SD
= 0.89) compared to Session A (M = 1.47, SD = 1.23).</p>
      <p>Participants showed a clear sense of agency and motion control. They strongly agreed that their
movements affected the river’s sound (Q5), with high scores in both sessions (Session A: M = 2.71, SD
= 0.47; Session B: M = 2.42, SD = 0.90). Agreement was slightly lower, but still positive, regarding the
river’s sound influencing their movements (Q6), with Session A scoring M = 1.65 (SD = 1.41) and Session
B scoring M = 1.67 (SD = 1.15).</p>
      <p>Perceived tactile qualities were more varied. The sensation of touching the river (Q7) received low to
moderate agreement (Session A: M = 1.00, SD = 0.94; Session B: M = 0.58, SD = 1.31). The distinction
between moving with or against the flow (Q8) was moderately perceived (Session A: M = 1.35, SD = 1.41;
Session B: M = 1.42, SD = 1.08). Finally, the feeling of pushing against a force while relocating the hand
(Q9) was notably weak in both sessions (Session A: M = 0.00, SD = 1.37; Session B: M = 0.11, SD = 1.76).</p>
      <p>Participants showed mixed responses to external stimuli. The need to resist the water’s force while
the hand was centered (Q10), ranged from low in Session A (M = 0.29, SD = 1.26) to moderate in Session B
(M = 0.75, SD = 1.36). The sensation of something about to happen due to the sound (Q11) was generally
weaker, barely above neutral (Session A: M = 0.59, SD = 1.42; Session B: M = 0.50, SD = 1.45). The tactile
response to the involuntary rightward-moving sound (Q12) was even weaker, especially in Session B (M
= -0.17, SD = 1.53) compared to Session A (M = 0.41, SD = 1.06). Similarly, the feeling of being pushed by
the river (Q14) was barely perceived in both sessions (Session A: M = 0.29, SD = 1.36; Session B: M = 0.00,
SD = 1.87). However, the instinct to correct the hand’s position when the sound moved rightward (Q13)
was noticeably stronger (Session A: M = 0.94, SD = 1.52; Session B: M = 1.22, SD = 1.92).</p>
      <p>Regarding events triggering tactile sensations (Q15), results varied across sessions (see Fig. 9). In
Session A, the most frequent sensations were linked to the hand drifting left (18%), drifting right (16%),
or involuntarily shifting right while centered (16%). Others resulted from the hand hitting the water
(14%) and a leftward correction toward center (11%). Only 9% reported no tactile sensation, and all
other responses were less than 8% each. In Session B, the most common sensation was triggered by the
hand hitting the water (28%), followed by the hand being centered (17%) or exiting the water (17%). A
sensation linked to the hand moving right was less frequent (11%), and all other responses, including
absence of tactile feedback, were 6% or fewer. Percentages are rounded for clarity.</p>
      <p>Finally, participants in both sessions shared mixed impressions and suggestions, which are included
in the Discussion to better contextualize the analysis.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>This study examined how sound parameters—particularly localization—can produce auditory
displacement to evoke pseudo-force sensations. Accordingly, the results were analyzed for patterns and
contrasts between sessions to extract useful insights rather than to determine superiority.</p>
      <p>[Figure 8: Level of agreement (−3 to +3) with questionnaire items Q1–Q14, grouped into Usability,
Body Ownership, Body Location, Agency, Tactile Sensations, and Response to External Stimuli, shown
for Sessions A and B.]</p>
      <p>The system was consistently perceived as easy to understand, even though auditory displacement
may disrupt perceptual coherence. This likely reflects the design rationale in [14], which enables audio
augmentation while preserving realism, thereby providing a reliable setup for auditory displacement.</p>
      <p>Body ownership and location were clearly perceived; while ownership aids in building a coherent
virtual model [12], body location seems to be a factor directly shaping the pseudo-haptic effect. Precise
body localization was achievable using sound cues alone, but their contribution appears tied to responses
to involuntary displacement (see Q4 and Q13, Fig. 8). This suggests that accurate sound localization
may be key for effective auditory displacement. As one participant recommended, “Transition sound
can be smoother.” Since localization can be affected by factors like motion, distance, and reverberation
[18, 14], it is important to carefully adjust sound parameters and system features to maximize accuracy.</p>
      <p>Agency appears to play a key role in shaping effective auditory displacement. Stronger feelings of
control over the environment (Q5) were linked to higher tactile responses in high-agency interactions (e.g.,
moving the hand left and right—Q8), while lower feelings of environmental influence (Q6) corresponded to
weaker responses during low-agency actions (e.g., keeping the hand steady—Q10, Q11, Q12; small hand
movements—Q9). Excluding the stronger responses during the low-agency involuntary displacement
(Q13)—likely driven by the centering task rather than by a felt force, as reflected in Q9, Q12, Q14—this
suggests that fostering a strong sense of agency during displacement, or employing high-agency
interactions, may enhance its effect.</p>
      <p>[Figure 9: Number of participants reporting a tactile sensation per triggering event (e.g., hand hitting
the water, moving left or right in the water, centered in the water, involuntary rightward shift, leftward
correction, getting out of the water, no tactile sensation), per session.]</p>
      <p>Focusing on interactions with consistent tactile responses across sessions allows for a deeper analysis
of the sound cues contributing to the displacement. While tactile responses were generally weak (Q7, Q9,
Q12, Q14) even with a clear perception of displacement (Q13), stronger sensations were reported during
left/right hand movements (Q8). Lateral hand movements involved pitch-gain variations affecting sound
localization in both sessions, while localization was dynamically restricted only in Session B (see Section
4). This suggests that pitch-gain variations may contribute meaningfully to producing displacement.
Although displacement was defined as a spatial shift (see Section 2), sound-based pseudo-haptics may
call for a broader model of sensory conflict.</p>
      <p>Overall, effective auditory displacement may rely on accurate localization and strategic pitch and
gain use to enrich stimuli, especially in high-agency contexts.</p>
      <p>Finally, participants did not report clear tactile feedback (see Q7–Q14, Fig. 8), yet most linked certain
events to touch (Q15, Fig. 9)—suggesting implicit interpretation. As one participant noted: “I didn’t
feel like I was pushing it, but I immediately knew that the movement of my hand had a sound and an
effect.” That said, strong agreement was found for sensations during hand immersion and rightward
movement in water—both tied to high agency, accurate localization, and pitch-gain variation. In contrast,
leftward movement showed mixed results; despite high agency and pitch-gain changes, restricted sound
movement in Session B may have reduced its effect. Low-agency events—like keeping the hand centered,
involuntary displacement, or small leftward corrections—also gave inconsistent outcomes. Exiting the
water showed mixed results—though pitch-gain variation was present, it typically marked the end of
interaction. Ultimately, in-air hand motion evoked minimal sensations—as intended.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>This paper explored how sound properties and interactions support auditory displacement for
pseudo-force sensations, offering insights toward accessible immersive environments for the visually impaired.
To this end, two trial sessions—one with restricted sound movement—were conducted using an
interactive virtual river system where sound simulated water pushing the user’s hand. These sessions were held
during live demonstrations, with user feedback collected via an embodiment questionnaire—adapted
for audio-only environments—to analyze the pseudo-haptic efects of sound cues across various aspects
of user experience.</p>
      <p>As an exploratory study, analysis focused on patterns and differences across sessions to gain insights
rather than assert superiority. Findings suggest that the system offers a robust platform for exploring
sound-based pseudo-force feedback, balancing realism with audio augmentation. In this context, sound
localization appears a reliable cue for displacement—though enhancing its precision could improve
effectiveness—while pitch and gain variations can be applied strategically to reinforce the stimuli.
Moreover, fostering a strong sense of agency during displacement—or using high-agency interactions
for it—seems key for enhancing its effect. Finally, tactile responses were not consistent—as expected in
early-stage development—yet most participants reported perceiving sensations that may have been
tactile or interpreted that way.</p>
      <p>Although the study involved uncontrolled conditions—such as ambient noise, flexible procedures, and
open participation—these suited its exploratory goals. Future research should address these limitations,
including targeted recruitment of visually impaired users—who may be harder to engage—to enable
rigorous testing and conclusive results. The system should also adapt to these findings for further
testing, and scale to broader uses, like emulating force from solid objects.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work was supported by JSPS KAKENHI Grant Number 24H00892, Japan.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT and Gemini in order to: check grammar
and spelling, and paraphrase and reword. After using these tools, the authors reviewed and edited the
content as needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dudley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Garaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. O.</given-names>
            <surname>Kristensson</surname>
          </string-name>
          ,
          <article-title>Inclusive immersion: a review of eforts to improve accessibility in virtual reality, augmented reality and the metaverse</article-title>
          ,
          <source>Virtual Real</source>
          .
          <volume>27</volume>
          (
          <year>2023</year>
          )
          <fpage>2989</fpage>
          -
          <lpage>3020</lpage>
          . URL: https://doi.org/10.1007/s10055-023-00850-8. doi:
          <volume>10</volume>
          .1007/s10055- 023- 00850- 8.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mott</surname>
          </string-name>
          , E. Cutrell,
          <string-name>
            <given-names>M. Gonzalez</given-names>
            <surname>Franco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Holz</surname>
          </string-name>
          , E. Ofek,
          <string-name>
            <given-names>R.</given-names>
            <surname>Stoakley</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Ringel Morris, Accessible by design: An opportunity for virtual reality</article-title>
          ,
          <source>in: 2019 IEEE International Symposium on Mixed and Augmented</source>
          Reality
          <string-name>
            <surname>Adjunct (ISMAR-Adjunct</surname>
            <given-names>)</given-names>
          </string-name>
          ,
          <year>2019</year>
          , pp.
          <fpage>451</fpage>
          -
          <lpage>454</lpage>
          . doi:
          <volume>10</volume>
          .1109/ISMAR- Adjunct.
          <year>2019</year>
          .
          <volume>00122</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <issue>W3C</issue>
          ,
          <article-title>Xr accessibility user requirements</article-title>
          ,
          <year>2021</year>
          . URL: https://www.w3.org/TR/xaur/.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Benko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Cutrell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Holz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Morris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sinclair</surname>
          </string-name>
          ,
          <article-title>Enabling people with visual impairments to navigate virtual reality with a haptic and auditory cane simulation</article-title>
          ,
          <source>in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18</source>
          ,
          <string-name>
            <surname>Association</surname>
          </string-name>
          for Computing Machinery, New York, NY, USA,
          <year>2018</year>
          , p.
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . URL: https://doi.org/10. 1145/3173574.3173690. doi:
          <volume>10</volume>
          .1145/3173574.3173690.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Creed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Al-Kalbani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Theil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sarcar</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Williams</surname>
          </string-name>
          ,
          <article-title>Inclusive ar/vr: accessibility barriers for immersive technologies</article-title>
          ,
          <source>Universal Access in the Information Society</source>
          <volume>23</volume>
          (
          <year>2024</year>
          )
          <fpage>59</fpage>
          -
          <lpage>73</lpage>
          . URL: https://doi.org/10.1007/s10209-023-00969-0. doi:
          <volume>10</volume>
          .1007/s10209- 023- 00969- 0.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Melaisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rojas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kapralos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Uribe-Quevedo</surname>
          </string-name>
          , K. Collins,
          <article-title>Multimodal interaction of contextual and non-contextual sound and haptics in virtual simulations</article-title>
          ,
          <source>Informatics</source>
          <volume>5</volume>
          (
          <year>2018</year>
          ). URL: https://www.mdpi.com/2227-9709/5/4/43. doi:
          <volume>10</volume>
          .3390/informatics5040043.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ujitoko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ban</surname>
          </string-name>
          ,
          <article-title>Survey of pseudo-haptics: Haptic feedback design and application proposals</article-title>
          ,
          <source>IEEE Transactions on Haptics</source>
          <volume>14</volume>
          (
          <year>2021</year>
          )
          <fpage>699</fpage>
          -
          <lpage>711</lpage>
          . doi:
          <volume>10</volume>
          .1109/TOH.
          <year>2021</year>
          .
          <volume>3077619</volume>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>