=Paper=
{{Paper
|id=Vol-3704/paper7
|storemode=property
|title=Sharing and Exchanging Realities through Video See-Through MR Headsets
|pdfUrl=https://ceur-ws.org/Vol-3704/paper7.pdf
|volume=Vol-3704
|authors=Yu Sun
|dblpUrl=https://dblp.org/rec/conf/realxr/Sun24
}}
==Sharing and Exchanging Realities through Video See-Through MR Headsets==
Yu Sun
University of St. Gallen, St. Gallen, 9000, Switzerland
Abstract
Humans often have a natural inclination towards exploring and sharing the intricacies of daily life.
This paper explores concept ideas of using video see-through capable mixed reality (MR) headsets to
enable individuals to share and receive content diverging from traditional media forms. We propose four
ways of sharing personal realities, considering reciprocity levels and temporal factors, encompassing
unidirectional, bidirectional, synchronous, and asynchronous sharing. We further elaborate on some use
cases for each way. The paper also addresses ethical concerns arising from MR sharing, such as privacy
issues, the allure of experiencing life through others, and the potential effects on one’s sense of personal
reality.
Keywords
Mixed Reality, Reality Sharing, Live-streaming
1. Introduction
Sharing personal lives and activities through technology has been researched in human-
computer interaction (HCI). It ranges from sharing calendars to increase self-disclosure and
intimacy [1] to sharing one’s longing through a tangible device [2]. With the evolution of
technologies, sharing activities can also be supported in various ways. For instance, content
sharing on social media has expanded to encompass a broader range of modalities, from text
and images to short videos, through methods like blogging, Instagram Boomerangs and Reels.
Moreover, shared content can be made more engaging through creative tools such as camera
filters [3]. Additionally, in the current trend of live-streaming, streamers and viewers are
becoming more engaged in communicating with each other, and the viewers can influence
the narrative directly [4], such as by instructing streamers during live interactions. Beyond
modality enrichment, engagement, and interactivity, immersiveness could also gain more popu-
larity in content sharing. For example, by utilising 360-degree video streaming technologies,
users can view the shared content through a head-mounted display (HMD) in an immersive
way [5]. Furthermore, objects can be live blended into 360 degree streamed video, making it
more dynamic [6]. Nation et al. [7] found that, by watching a 360-degree immersive video
in an educational setting, students reported higher satisfaction than with a conventional
video. Besides, shared 360-degree panorama videos can be enhanced by applying mixed
reality (MR) technology. Lee et al. [8] enriched the video watching experience by overlaying
MR visualisations of non-verbal communication cues, such as view focus points and gesture
cues. Besides 360-degree videos, volumetric video also enables the capture of vivid 3D content.
However, its use is often limited to capturing objects or people, making it challenging to share
first-person perspective content from the streamer’s view.

RealXR’24: Workshop on Prototyping and Developing Real-World Applications for Extended Reality, June 04, 2024,
Arenzano (Genoa), Italy
Email: yu.sun@unisg.ch (Y. Sun)
ORCID: 0009-0008-5621-9147 (Y. Sun)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
With the various content sharing technologies discussed, Mixed Reality (MR) is particularly
interesting. MR technologies, which offer the possibility of integrating virtual and physical
realities and seamless transition between the two, can offer opportunities for novel experiences
of sharing one’s life and reality. Especially with video see-through capable MR headsets, we
expect that the sharing and viewing experiences could vary, as we see the world directly through
the streamers’ eyes. These experiences could potentially become more first-person-oriented
and more personal. Therefore, we are curious about how the ways of content sharing might
be designed. This paper provides concepts for sharing realities through video see-through
capable MR headsets. It explores various ways of sharing MR content, considering reciprocity
levels and temporal factors that shape different forms of sharing, including unidirectional,
bidirectional, synchronous and asynchronous sharing. These ways highlight the versatility and
potential of MR in enhancing the richness and depth of shared experiences. Furthermore, we
address the ethical implications arising from MR reality sharing. Privacy concerns, the allure of
living through others’ experiences, and the impact on personal reality are critical considerations.
We aim to balance technological innovation with ethical responsibility by examining these
facets.
2. Motivation and Background
The motivations of individuals who post their experiences or activities online have been re-
searched by Stone et al. [10]. They found that people primarily share their personal experiences
online for social reasons. This inclination towards sharing can be influenced by personality
traits, with extraversion contributing to the sharing of one’s personal life [10].
Furthermore, individuals experiencing loneliness or having lower self-esteem may utilise social media
to share personal experiences for therapeutic reasons [10].
Sharing real-life experiences has also motivated technical development to support this pro-
cess. Numerous previous works have investigated the role of technologies in fulfilling various
sharing needs and the impacts of technology-enabled sharing activities. For example, Thayer et
al. [1] found that calendar sharing is a way of achieving intimacy and self-disclosure. By
authoring event descriptions, users can express their emotions while preserving privacy by
omitting sensitive information. By practising calendar sharing, users can accomplish relational work.
Neustaedter et al. [11] designed an “always-on video” system for sharing everyday life, sup-
porting expressions of intimacy over distance and building connections. Chattopadhyay et
al. [12] analysed vlogs of “a day in the life of software developers” and found that developers
are inclined to share more non-coding related tasks, such as time spent with family. Olsson et
al. [13] investigated the user needs to share life memories and found people value recollecting
past events and milestones. Sharing common memories is a key factor in strengthening social
ties and bonds.
Figure 1: An example of sharing realities through mixed reality technologies: On the left, a streamer
broadcasts her travel experiences directly through the front camera on the MR headset, sharing her
journey in real-time. On the right, viewers partake in an immersive travel adventure, experiencing the
sights and sounds through the streamer’s perspective, as if they are exploring the uncharted locations
as well. Picture generated by DALL·E 3 [9].
Besides sharing, people also have multifaceted motivations for consuming shared lives [14],
as evidenced by the surging popularity of Day-In-The-Life videos [15]. Research
shows that factors such as a sense of community, emotional support, and interactivity can
positively affect viewers’ social presence and, subsequently, their engagement with the streaming
content [16]. Moreover, viewers are also drawn to the authenticity of the stories: they resonate
with the genuine and relatable content, and may also learn from the lives of others [15].
This often provides surprising and varied experiences that differ from their own lifestyles.
Building on the motivations behind humans engaging in reality sharing and receiving, as
well as the technical enhancements facilitating this process, we believe that technologies such
as video see-through MR headsets — which allow users to perceive the real world even while
non-transparent displays cover their eyes — can open up new possibilities. These MR headsets
are equipped with cameras that capture the real world and stream it onto the built-in display.
This capability forms the basis for broadcasting one’s reality, experiencing
someone else’s reality, or exchanging realities between the two — users can send, receive, or
swap their views with another user’s see-through video feed via network connections for an
immersive experience. However, the quality of the streamed videos may be limited by the
resolution of the cameras, the fidelity of spatial and stereophonic sound, and potential delays
caused by network connectivity and bandwidth constraints.
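The send, receive, and swap routing described above can be sketched minimally. Capture, encoding, and network transport are elided here, and all class and mode names are our own illustration rather than an established API:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Peer:
    """A headset wearer: produces camera frames, consumes display frames."""
    name: str
    display: deque = field(default_factory=deque)  # frames shown on the built-in screen

class RealityRelay:
    """Routes see-through video frames between two peers.

    mode "passthrough":  each peer sees their own camera (normal MR view).
    mode "share_a_to_b": B's display is replaced by A's camera (unidirectional).
    mode "swap":         each peer sees the other's camera (bidirectional exchange).
    """
    def __init__(self, a: Peer, b: Peer, mode: str = "passthrough"):
        self.a, self.b, self.mode = a, b, mode

    def push_frame(self, sender: Peer, frame: bytes) -> None:
        other = self.b if sender is self.a else self.a
        if self.mode == "passthrough":
            sender.display.append(frame)
        elif self.mode == "swap":
            other.display.append(frame)          # realities are exchanged
        elif self.mode == "share_a_to_b":
            if sender is self.a:
                sender.display.append(frame)     # streamer keeps their own view
                other.display.append(frame)      # viewer's reality is fully replaced
            # frames from B are dropped: the viewer's own camera is not shown
```

In practice each `push_frame` call would sit behind a network socket, and the routing decision would run on a relay server or directly between the headsets; the sketch only captures which feed ends up on which display.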
Yet, the concept of sharing content through an MR headset is not limited to sharing one’s
reality, but also extends to one’s virtual reality or personalised reality. Strecker et al. [17]
highlighted the motivations for sharing one’s personalised reality, such as avoiding filter bubbles
and increasing users’ autonomy and control.
3. Concept
We categorise four different ways of sharing see-through video content, taking into account
both directional and temporal factors. The sharing can be unidirectional or bidirectional, and
it can occur in real-time (synchronously) or be accessed later (asynchronously). We further
elaborate on some use cases for each way.
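As an illustrative aside, the two axes and their four combinations can be captured in a few lines of code; the names below are our own shorthand, not terminology from the paper or any MR SDK:

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    UNIDIRECTIONAL = "unidirectional"   # streamer -> viewer only
    BIDIRECTIONAL = "bidirectional"     # both parties exchange realities

class Timing(Enum):
    SYNCHRONOUS = "synchronous"         # live, experienced in the moment
    ASYNCHRONOUS = "asynchronous"       # recorded, replayed later

@dataclass(frozen=True)
class SharingMode:
    direction: Direction
    timing: Timing

    def describe(self) -> str:
        return f"{self.direction.value}, {self.timing.value} sharing"

# crossing the two axes yields the four ways of sharing
ALL_MODES = [SharingMode(d, t) for d in Direction for t in Timing]
```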
3.1. Unidirectional Sharing
Unidirectional sharing enables the streamer to live in and perceive their reality, while the
viewer’s perspective is completely replaced by this alternate reality through the headset. This
form of interaction can take place between streamers and viewers on streaming platforms or
among friends in private video chats. This mode of sharing is apt for everyday MR scenarios,
where any aspect of daily life can become content for sharing (see Figure 1). Picture a scenario
where your friend is relaxing in a café with a pleasant ambiance. They could share this
experience with others and immerse them in the café through the MR see-through video. Similarly,
streamers travelling to foreign countries can immerse their viewers in unique experiences.
For example, they might try a mouth-watering bowl of ramen and transmit this experience to their
audience from a first-person perspective. In educational settings, streaming reality can foster
empathy [18]. Teachers could adopt the perspectives of children, gaining deeper insight into
their challenges and experiences. This empathetic approach can lead to more effective teaching
strategies and a stronger teacher-child connection. Furthermore, unidirectional streaming
can open the door to novel experiences. Take a music enthusiast who has longed to
know what it’s like to be on stage with a band, performing in front of a vast audience, as an
example. MR sharing technology can fulfil this dream by providing a complete immersion into
the musician’s viewpoint, allowing the fan to step into the shoes of a performer virtually.
However, even though this streaming method incorporates see-through video and stereo
audio, among other channels, creating a high-fidelity simulation that engages our visual, auditory,
and potentially olfactory senses [19], it still faces challenges in achieving full body awareness.
Research indicates that a mismatch between virtual locomotion and actual body movement can
negatively impact user experiences [20]. This misalignment can lead to confusion, diminish
the sense of body ownership, and reduce immersiveness. Moreover, without control over the
viewpoint, cybersickness could be induced [21]. To grant viewers direct control, the camera
could be mounted on a robot or, similar to the DJI FPV [22] system, on a remotely controlled
vehicle whose movement and camera direction users steer while viewing through goggles,
achieving full immersiveness. However, this requires the deployment of remote machines and
could raise security (surveillance) and safety concerns. Kasahara et al. [23] proposed a first-
person omnidirectional video system, utilising an omnidirectional camera and goggles for
capturing and reviewing first-person content to reduce motion sickness.
Figure 2: Inspired by the “ghost view” in Mario Kart [24], where a semi-transparent character from a
previous video blends into the scene, we envisage the future of streamed video being able to be cropped
and seamlessly embedded in situ into someone’s reality. In this culinary scenario, users can
live-stream someone else’s cooking process into their immediate surroundings, allowing for real-time
conversation and instruction sharing between them. Picture generated by DALL·E 3 [9].
3.2. Bidirectional Sharing: Swapped Reality
Bidirectional streaming, unlike unidirectional streaming, replaces the realities of both parties,
making it impossible to carry out independent activities. Therefore, bidirectional streaming is less
suited for one-sided experience sharing and more apt for creating novel, fun, and creative
experiences. For example, swapping realities can facilitate a body exchange simulation. Such
reality-swapping opens up unique and engaging opportunities, such as empathy-building
and relational bonding exercises, as couples experience life in each other’s bodies. This experience
can be enhanced by, e.g., coordinating movements — such as synchronising hand touches —
where participants align their actions with their video see-through view. This creates a mental
match and the illusion of controlling the swapped body, thereby intensifying the sense of body
ownership and agency [25, 26].
Reality swapping could also potentially facilitate engaging collaborative experiences and
foster trust. Imagine a couple co-wandering in a forest, each seeing only through the other’s
eyes, unaware of their own immediate surroundings. In this scenario, trust and communication
become essential, as each partner must rely on the other for guidance in an environment with
obstacles. Such an experience could strengthen trust and empathy within the relationship.
However, it is still feasible for individuals to engage in their own activities during bidirectional
sharing, when the videos are cropped and only partially blended into one’s reality (see Figure 2).
By blending video segments from a sharing partner as a 3D hologram, a sense of
co-presence can be attained even in the absence of physical co-location. This method can
be applied to collaborative tasks, instructional scenarios, or the fostering of intimate connections.
Work from Grønbæk et al. [27] has partially blended dissimilar real-world environments into
shared realities, creating a coherent blended space. Furthermore, Yoshino et al. [28] proposed
the idea of blending the streamer themselves into the shared realities. However, this approach
requires additional capturing cameras or pre-scanned full-body avatars, which adds more
complexity than directly streaming through MR headsets.
3.3. Synchronous Sharing: Co-living in the Moment
Synchronous sharing through live-streaming opens up a realm of real-time interaction between
viewers and streamers. This form of sharing transcends passive viewing, allowing for an
active exchange where viewers can engage in live discussions and share immediate reactions.
Furthermore, viewers can also influence the stream’s storyline. For example, they might prompt
streamers to explore specific scenarios or activities. This interactivity caters to a wider range of
interests, potentially enabling a more dynamic and engaging experience, and fostering a sense
of co-living and shared experience.
3.4. Asynchronous Sharing: Memory Sharing
Asynchronous sharing allows for the preservation of streaming videos for future rewatching.
This mode of sharing enables a “yesterday once more” experience, where individuals can relive
past moments with high fidelity. By leveraging spatial computing to re-present past
events with stereo video and audio, users can feel as if they were actually there. It is akin to
replaying a memory, providing a high level of presence and authenticity. This extends the life
of shared experiences beyond their initial occurrence.
4. Ethical Concerns
This section outlines a series of ethical implications associated with MR reality sharing, ad-
dressing concerns such as detachment from personal reality, privacy infringement, and the
propensity to prefer memories over the present.
1) Others’ Lives are Better - Detachment from Personal Reality: A potential risk of immersive
experiences of others’ lives through MR is detachment from one’s own reality. The zero-distance
immersion in others’ lives might lead some individuals to perceive others’ experiences as more
desirable, potentially causing a reluctance to engage with their own real-life situations.
2) Privacy Infringement: With mobile MR live-streaming becoming more prevalent, concerns
about privacy infringement may grow. The possibility of unintentionally capturing and sharing
the lives of bystanders calls for solutions like face-blurring technologies to protect the privacy
of individuals who have not consented to be part of the shared experience.
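As a hedged sketch of the face-blurring idea mentioned above (face detection itself is out of scope; we assume bounding boxes are already supplied by a detector, and the function name and frame representation are our own):

```python
def blur_region(frame, box, strength=1):
    """Box-blur the pixels inside `box` on a grayscale frame.

    frame: mutable list of rows of int pixel values (0-255)
    box:   (top, left, bottom, right); bottom and right are exclusive
    """
    top, left, bottom, right = box
    for _ in range(strength):
        # copy the region so each pass reads unmodified values
        region = [row[left:right] for row in frame[top:bottom]]
        h, w = len(region), len(region[0])
        for y in range(h):
            for x in range(w):
                # average over the 3x3 neighbourhood, clamped to the region
                ys = range(max(0, y - 1), min(h, y + 2))
                xs = range(max(0, x - 1), min(w, x + 2))
                vals = [region[yy][xx] for yy in ys for xx in xs]
                frame[top + y][left + x] = sum(vals) // len(vals)
    return frame
```

A production system would run a face detector per frame, apply such a blur to each detected box in colour, and ideally do so on-device before any video leaves the headset, so unconsenting bystanders are never transmitted in identifiable form.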
3) Living in the Past: There is a risk that individuals may overly indulge in reliving memories,
preferring the comfort of “yesterday once more” to facing present realities. This preference for
memories over current experiences can lead to a disconnect from the present.
4) The Entire History of You: The capability to stream and store realities at any time raises
concerns about consent and the compulsion to share memories. The ethical ramifications of
having one’s entire history potentially accessible and revisitable need to be thoroughly examined
to protect individual autonomy and privacy.
5. Conclusion
This paper explored the potential of MR technologies capable of video see-through in reality
sharing, highlighting how these technologies can augment our experience of sharing and con-
suming personal lives and activities. Through the lens of MR, we believe that both unidirectional
and bidirectional, as well as synchronous and asynchronous modes of sharing, have the potential
to deepen our interactions, enrich our experiences, and extend the lifespan of our memories.
Acknowledgments
In this paper, we used Overleaf’s built-in spell checker, Grammarly, and the current version
of ChatGPT. These tools helped us fix spelling mistakes and get suggestions to improve the
writing of the paper. If not noted otherwise in a specific section, these tools were not used in
other forms.
References
[1] A. Thayer, M. J. Bietz, K. Derthick, C. P. Lee, I love you, let’s share calendars: Calendar
sharing as relationship work, in: Proceedings of the ACM 2012 Conference on Computer
Supported Cooperative Work, CSCW ’12, Association for Computing Machinery, New
York, NY, USA, 2012, p. 749–758. URL: https://doi.org/10.1145/2145204.2145317. doi:10.
1145/2145204.2145317 .
[2] W. Gaver, F. Gaver, Living with light touch: An autoethnography of a simple communica-
tion device in long-term use, in: Proceedings of the 2023 CHI Conference on Human Factors
in Computing Systems, CHI ’23, Association for Computing Machinery, New York, NY,
USA, 2023. URL: https://doi.org/10.1145/3544548.3580807. doi:10.1145/3544548.3580807 .
[3] A. Javornik, B. Marder, J. B. Barhorst, G. McLean, Y. Rogers, P. Marshall, L. Warlop, ‘What
lies behind the filter?’ Uncovering the motivations for using augmented reality (AR) face
filters on social media and their effect on well-being, Computers in Human Behavior 128
(2022) 107126.
[4] N. K. Suganuma, An ethnography of the Twitch.tv streamer and viewer relationship,
California State University, Long Beach, 2018.
[5] R. Shafi, W. Shuai, M. U. Younus, 360-degree video streaming: A survey of the state of the
art, Symmetry 12 (2020) 1491.
[6] T. Rhee, A. Chalmers, I. Loh, B. Allen, L. Petikam, S. Thompson, T. Revill, Mixed reality 360
live: live blending of virtual objects into 360° streamed video, in: ACM SIGGRAPH 2018
Real-Time Live!, SIGGRAPH ’18, Association for Computing Machinery, New York, NY,
USA, 2018. URL: https://doi.org/10.1145/3229227.3229229. doi:10.1145/3229227.3229229 .
[7] J. A. Nation, J. McNeill, K. Einhellig, J. Bezyak, Nursing Student Experience and Safety
Awareness Using 360-Degree Immersive Video Simulation, Ph.D. thesis, USA, 2020.
AAI27833673.
[8] G. A. Lee, T. Teo, S. Kim, M. Billinghurst, Mixed reality collaboration through sharing
a live panorama, in: SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications,
SA ’17, Association for Computing Machinery, New York, NY, USA, 2017. URL: https:
//doi.org/10.1145/3132787.3139203. doi:10.1145/3132787.3139203 .
[9] OpenAI, DALL·E 3, 2023. URL: https://openai.com/dall-e-3, accessed: 2024-04-22.
[10] C. B. Stone, L. Guan, G. LaBarbera, M. Ceren, B. Garcia, K. Huie, C. Stump, Q. Wang, Why
do people share memories online? An examination of the motives and characteristics of
social media users, Memory 30 (2022) 450–464.
[11] C. Neustaedter, C. Pang, A. Forghani, E. Oduor, S. Hillman, T. K. Judge, M. Massimi,
S. Greenberg, Sharing domestic life through long-term video connections, ACM Trans.
Comput.-Hum. Interact. 22 (2015). URL: https://doi.org/10.1145/2696869. doi:10.1145/
2696869 .
[12] S. Chattopadhyay, T. Zimmermann, D. Ford, Reel life vs. real life: How software developers
share their daily life through vlogs, in: Proceedings of the 29th ACM Joint Meeting
on European Software Engineering Conference and Symposium on the Foundations of
Software Engineering, ESEC/FSE 2021, Association for Computing Machinery, New York,
NY, USA, 2021, p. 404–415. URL: https://doi.org/10.1145/3468264.3468599. doi:10.1145/
3468264.3468599 .
[13] T. Olsson, H. Soronen, K. Väänänen-Vainio-Mattila, User needs and design guidelines for
mobile services for sharing digital life memories, in: Proceedings of the 10th International
Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI
’08, Association for Computing Machinery, New York, NY, USA, 2008, p. 273–282. URL:
https://doi.org/10.1145/1409240.1409270. doi:10.1145/1409240.1409270 .
[14] Z. Hilvert-Bruce, J. T. Neill, M. Sjöblom, J. Hamari, Social motivations of live-streaming
viewer engagement on Twitch, Computers in Human Behavior 84 (2018) 58–67. URL:
https://www.sciencedirect.com/science/article/pii/S0747563218300712.
doi:10.1016/j.chb.2018.02.013 .
[15] Think with Google, Video search behavior trends, 2023. URL: https://www.thinkwithgoogle.
com/marketing-strategies/video/-video-search-behavior-trends/, accessed: 2024-03-29.
[16] J. Chen, J. Liao, Antecedents of viewers’ live streaming watching: A perspective of social
presence theory, Frontiers in Psychology 13 (2022). URL: https://www.frontiersin.org/
articles/10.3389/fpsyg.2022.839629. doi:10.3389/fpsyg.2022.839629 .
[17] J. Strecker, S. Mayer, K. Bektas, Sharing personalized mixed reality experiences (2023).
[18] D. Fonseca, M. Kraus, A comparison of head-mounted and hand-held displays for 360°
videos with focus on attitude and behavior change, in: Proceedings of the 20th Interna-
tional Academic Mindtrek Conference, AcademicMindtrek ’16, Association for Computing
Machinery, New York, NY, USA, 2016, p. 287–296. URL: https://doi.org/10.1145/2994310.
2994334. doi:10.1145/2994310.2994334 .
[19] C. Javerliat, P.-P. Elst, A.-L. Saive, P. Baert, G. Lavoué, Nebula: An affordable open-
source and autonomous olfactory display for vr headsets, in: Proceedings of the 28th
ACM Symposium on Virtual Reality Software and Technology, VRST ’22, Association for
Computing Machinery, New York, NY, USA, 2022. URL: https://doi.org/10.1145/3562939.
3565617. doi:10.1145/3562939.3565617 .
[20] I. Willaert, R. Aissaoui, S. Nadeau, C. Duclos, D. R. Labbe, Modulating the gait of a real-time
self-avatar to induce changes in stride length during treadmill walking, in: 2020 IEEE
Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW),
IEEE, 2020, pp. 718–719.
[21] S.-Y. Kim, J. H. Lee, J. H. Park, The effects of visual displacement on simulator sickness in
video see-through head-mounted displays, in: Proceedings of the 2014 ACM International
Symposium on Wearable Computers, ISWC ’14, Association for Computing Machinery,
New York, NY, USA, 2014, p. 79–82. URL: https://doi.org/10.1145/2634317.2634339. doi:10.
1145/2634317.2634339 .
[22] DJI, DJI FPV - immersive cinematic drone experience, https://www.dji.com/ch/dji-fpv, 2024.
Accessed: 2024-04-21.
[23] S. Kasahara, S. Nagai, J. Rekimoto, First person omnidirectional video: System design
and implications for immersive experience, in: Proceedings of the ACM International
Conference on Interactive Experiences for TV and Online Video, TVX ’15, Association for
Computing Machinery, New York, NY, USA, 2015, p. 33–42. URL: https://doi.org/10.1145/
2745197.2745202. doi:10.1145/2745197.2745202 .
[24] Ghost (Mario Kart series) - Super Mario Wiki, the Mario Encyclopedia, https://www.
mariowiki.com/Ghost_(Mario_Kart_series), 2024. Accessed: 31-March-2024.
[25] M. C. Egeberg, S. L. Lind, S. Serubugo, D. Skantarova, M. Kraus, Extending the human
body in virtual reality: Effect of sensory feedback on agency and ownership of virtual
wings, in: Proceedings of the 2016 Virtual Reality International Conference, 2016, pp. 1–4.
[26] S. Jung, C. E. Hughes, The effects of indirectly implied real body cues to virtual body
ownership and presence in a virtual reality environment, in: Proceedings of the 22nd
ACM Conference on Virtual Reality Software and Technology, VRST ’16, Association for
Computing Machinery, New York, NY, USA, 2016, p. 363–364. URL: https://doi.org/10.1145/
2993369.2996346. doi:10.1145/2993369.2996346 .
[27] J. E. S. Grønbæk, K. Pfeuffer, E. Velloso, M. Astrup, M. I. S. Pedersen, M. Kjær, G. Leiva,
H. Gellersen, Partially blended realities: Aligning dissimilar spaces for distributed mixed
reality meetings, in: Proceedings of the 2023 CHI Conference on Human Factors in
Computing Systems, CHI ’23, Association for Computing Machinery, New York, NY, USA,
2023. URL: https://doi.org/10.1145/3544548.3581515. doi:10.1145/3544548.3581515 .
[28] K. Yoshino, H. Kawakita, T. Handa, K. Hisatomi, Viewing style of augmented reality/virtual
reality broadcast contents while sharing a virtual experience, in: Proceedings of the 26th
ACM Symposium on Virtual Reality Software and Technology, VRST ’20, Association for
Computing Machinery, New York, NY, USA, 2020. URL: https://doi.org/10.1145/3385956.
3422110. doi:10.1145/3385956.3422110 .