Exploring the Creative Possibilities of Infinite
Photogrammetry through Spatial Computing and
Extended Reality with Wave Function Collapse
Aviv Elor, Samantha Conde

University of California, Santa Cruz, Department of Computational Media, Jack Baskin School of Engineering,
1156 High St, Santa Cruz, California, 95064, United States


Abstract
Modern extended reality systems that merge virtual and augmented reality provide a unique design
space for creative applications. These devices have begun to incorporate spatial computing, or methods
of runtime digital photogrammetry which translate the physical world into the virtual. In this study, we
examine the use of extended reality for “infinite photogrammetry,” a system of mapping the physical
world into a virtual experience and procedurally generating an infinite version of the scanned architecture.
We explore our system through a use case of mapping a residential home for infinite photogrammetry
with the Magic Leap Spatial Computing Headset, Wave Function Collapse Algorithm, and Unity Game
Engine. We conclude with a discussion of the creative applications of infinite photogrammetry and
considerations for future research.

Keywords
Infinite Photogrammetry, Photogrammetry, Spatial Computing, Extended Reality, Virtual Reality,
Augmented Reality, Wave Function Collapse, Procedural Content Generation, Applied Generative Algorithms




1. Introduction
Modern extended reality (XR) systems have come a long way technologically in enhancing user
immersion by widening the field of view, increasing frame rate, leveraging low-latency
motion capture, and providing realistic surround sound [1]. As a result, we see a new wave
of mass adoption of commercial XR Head-Mounted Displays (HMDs) such as the Magic Leap
One, Microsoft HoloLens, HTC Vive, Oculus Quest, PlayStation VR, and more, which have
entered the market with over 200 million projected systems sold since 2016 [1]. These
systems are becoming ever more mobile and intrinsic to the average consumer’s entertainment
experience, enabling a mode of full-body engagement that combines the physical and virtual worlds
[2, 3]. More recently, these devices have begun incorporating simultaneous localization and
mapping to transfer the physical world’s architecture into the digital environment, as seen with
the photogrammetry-like spatial computing and meshing capabilities of the Magic Leap One [4].
These mediums provide new opportunities to explore tools for casual creation and generative
computing.
   The combined use of XR and photogrammetry has been increasing due to the benefits that
result from the pairing, chiefly the reconstruction of objects or locations from the real world
in a mixed reality environment. Virtual reality (VR) has received most of the attention for
recreating real-life objects and locations, but the development time involved is rarely
mentioned: building an object for VR usually demands far more time, precision, and accuracy
than building it for augmented reality (AR). Portalés et al. found that combining AR with
photogrammetry is more cost-efficient for creating such objects, with time savings approaching
50% [5]. Beyond time and cost efficiency, AR and photogrammetry can also make environments
more accessible to people. Two examples where AR and photogrammetry were used to create more
accessible environments are Drap et al.’s VENUS project and Pietroszek’s mixed reality
exhibition. Drap et al. used photogrammetry to survey marine areas of Pianosa Island, a step
toward letting archaeologists investigate untouched and unreachable areas of the deep ocean [6]
and a way to digitally archive and preserve underwater findings without compromising them.
In a related application, Pietroszek created a mixed reality exhibition to improve access for
people who cannot visit a physical exhibition due to location, disability, or socioeconomic
status [7]. From these works, we argue that the incorporation of extended reality devices may
provide unique opportunities for casual recreation.
   In 2015, Compton & Mateas defined an alternative design space for system creation: “A Casual
Creator is an interactive system that encourages the fast, confident, and pleasurable exploration
of a possibility space, resulting in the creation or discovery of surprising new artifacts that
bring feelings of pride, ownership, and creativity to the users that make them” [8]. These tools
emphasize creativity and design support by enabling a flow of choice and rapid iteration while
providing both passive and active automation [9]. Moreover, the curious users of casual creators
have been hypothesized to be driven primarily by the curiosity and capability afforded by a
system’s design space [10]. In this study, we explore the use of an XR headset to understand the
potential of these devices for photogrammetry, converting the physical world into the virtual.
We also examine autonomy for XR-enabled photogrammetry, exploring how it can be extended into
generative experiences through procedural content generation (PCG).
   PCG algorithms applied to photogrammetry may produce interesting design artifacts
for game and experience design. As games have evolved rapidly, so has the use of PCG
[11]. Designers use PCG to implement content that has been automatically generated from
assets at random [12]. In this definition, content is a broad term for whatever researchers, game
designers, and academics may want to generate. Applying PCG to photogrammetry helps
game designers create infinite possibilities for levels, non-player characters, and many other
objects in a digital game. This combination allows for more opportunities to surprise users and
even the designers themselves.
   An algorithm for PCG that has been gaining traction in the creative design world is Gumin’s
WaveFunctionCollapse (WFC) [13]. WFC is a non-backtracking, greedy search algorithm that
generates large outputs from a small number of constraints determined by a window of input
media. The algorithm has attracted the attention of game creators, PCG researchers, and level
designers in recent years [14, 15]. It enables designers to reduce the time and production costs
of asset creation while providing them with constraints to manipulate pattern generation. For
our study, we are interested in extending this algorithm to 3D world generation by utilizing
photogrammetry with an extended reality headset. To the best of our knowledge, this study is
one of the first to bridge spatial computing with WFC for Infinite Photogrammetry. We hope
to examine the combination of these technologies to demonstrate a proof of concept and
consider its creative applications for future research.
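   To make WFC’s mechanics concrete, the sketch below shows a minimal simple-tiled variant of
its observe-propagate loop in Python. The tile set, adjacency table, and names are illustrative
assumptions for exposition rather than Gumin’s implementation; what matters for our system is
the behavior shown here: greedily collapsing the lowest-entropy cell and propagating constraints
without backtracking.

```python
import random

# Minimal simple-tiled WFC sketch. The tiles and adjacency rules are
# invented for exposition; they are not taken from Gumin's repository.
TILES = ["land", "coast", "sea"]
ALLOWED = {  # which tiles may legally sit next to each tile (symmetric)
    "land": {"land", "coast"},
    "coast": {"land", "coast", "sea"},
    "sea": {"coast", "sea"},
}

def wfc(width, height, seed=0):
    rng = random.Random(seed)
    # Every cell starts in superposition: the set of all possible tiles.
    grid = [[set(TILES) for _ in range(width)] for _ in range(height)]

    def neighbors(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height:
                yield nx, ny

    def propagate(x, y):
        # Prune neighbor options that no longer fit; ripple outward.
        stack = [(x, y)]
        while stack:
            cx, cy = stack.pop()
            allowed = set().union(*(ALLOWED[t] for t in grid[cy][cx]))
            for nx, ny in neighbors(cx, cy):
                pruned = grid[ny][nx] & allowed
                if pruned != grid[ny][nx]:
                    grid[ny][nx] = pruned
                    stack.append((nx, ny))

    while True:
        # Observe: choose the undecided cell with the fewest options left.
        open_cells = [(x, y) for y in range(height) for x in range(width)
                      if len(grid[y][x]) > 1]
        if not open_cells:
            return [[cell.pop() for cell in row] for row in grid]
        x, y = min(open_cells, key=lambda c: len(grid[c[1]][c[0]]))
        grid[y][x] = {rng.choice(sorted(grid[y][x]))}  # greedy collapse
        propagate(x, y)  # constraint propagation; no backtracking
```

Calling wfc(8, 8) returns an 8-by-8 tile map in which every adjacent pair satisfies the
adjacency table; our system applies the same collapse-and-propagate loop to three-dimensional
room modules rather than 2D tiles, as described in Section 2.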




          (a) Scanning the Physical World                 (b) Infinite Photogrammetry Pipeline
Figure 1: On the left, we see a user walking around their home with a Magic Leap One. House geometry
is procedurally meshed in the Unity game engine and stored as a mesh for manipulation with Wave
Function Collapse. On the right, we see the system pipeline for creating infinite photogrammetry.




2. System Design
This project leverages the capabilities of the Magic Leap Spatial Computing Headset combined
with the Wave Function Collapse algorithm and the Unity Game Engine. The goal was to create a
playable experience that generates infinite photogrammetry of a scanned environment. To this
end, we designed our system to (1) create a methodology for translating physical-world geometry
into virtual 3D environments with the Magic Leap, (2) adapt a prior Wave Function Collapse
implementation to generate new architecture via Unity3D, and (3) explore the application of
infinite photogrammetry. This process can be described in four stages: capturing, meshing,
formatting, and building, as shown in Figure 1b. In this section, we discuss the tools we used
to enable this system’s design.
   The Magic Leap One (MLO) is an untethered “spatial computing” headset that overlays
augmented reality onto the physical world while performing simultaneous localization and
mapping [4]. The MLO was chosen as our development platform because, at the time of this study,
little to no academic evaluation existed for developing applications like ours on it. Seeing the
physical world around the user is also critical for safety when mapping environments. The
headset differs from other commercially available XR HMDs by projecting light directly into the
user’s eyes while enabling richer input modalities through hand tracking, eye tracking, dynamic
sound fields, and a 6-Degree-of-Freedom (DoF) controller with haptic feedback [4].
   To enable visualization of and interaction with the virtual world, the Unity Game Engine
was chosen as the primary driver of our experience. Unity is a flexible real-time 3D development
platform that enables the creation, operation, and rapid prototyping of interactive virtual
content [16]. Unity was chosen for its flexibility in building the same experience for multiple
platforms, such as WebGL, Magic Leap, HTC Vive, Windows, Mac, and more [16]. Thus, we developed
our experience in Unity 2019.1.5f1 through two separate build instances: Lumin (MLO SDK 0.21)
and WebGL (OpenGL 4.5).
   To obtain a mesh of the physical world, we utilized the MLO World Reconstruction Spatial
Mapper, an algorithm that detects real-world surfaces and constructs a runtime virtual mesh
representing the real world’s collision geometry for the game engine [17, 18]. We converted the
world reconstruction mapper’s output into a serialized mesh during runtime, which is then stored
as an asset for later manipulation. This process allows us to capture the rough geometry of the
user’s surroundings as they walk through and map their desired game architecture, as shown in
Figure 1a. From there, we translate the asset into a playable scene so the user can walk through
and navigate their scans virtually.
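   The capture and serialization above happen in C# inside Unity via the MLO meshing API; as a
language-neutral illustration of the data flow only, the hedged Python sketch below shows the
shape of what gets stored. The field names and JSON format here are hypothetical, not the MLO
SDK’s.

```python
import json

# Hypothetical sketch of the "meshing" stage's data flow. The real
# system serializes Unity Mesh assets in C#; this mirrors the structure.
def serialize_mesh_blocks(blocks, path):
    """Persist scanned mesh blocks for the later WFC formatting stage.

    blocks: list of dicts shaped like
        {"id": "room_kitchen_03",
         "vertices": [[x, y, z], ...],    # world-space positions, meters
         "triangles": [i0, i1, i2, ...]}  # indices into vertices, 3 per face
    """
    with open(path, "w") as f:
        json.dump({"blocks": blocks}, f)

def block_bounds(block):
    # Axis-aligned bounding box of one block, later used to size the
    # one-meter voxel grid that WFC operates over.
    xs, ys, zs = zip(*block["vertices"])
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```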
   We examined this process during a ten-minute session as a user walked through their home.
This consisted of rapidly scanning a 1,459-square-foot residential home with two bedrooms,
two bathrooms, one office, a living room, and a kitchen. The results of this process can be
seen in Figures 1a and 2a, where some of the rooms are reconstructed so the user can virtually
walk around their scans in the Unity game engine. After the scanned geometry is captured and
serialized to independent mesh assets, we proceed to format the assets for WFC.
   To enable PCG, we modified Kleineberg’s Infinite City adaptation of WFC [19]. Using the
serialized meshes of the geometry scanned by the MLO spatial mapper, we divide the rooms
into one-meter voxels and define WFC constraints by mapping the six sides of each room
with numbered adjacency keys, as shown in Figure 2b. The user is then able to define the
WFC generative adjacency of rooms through one-meter voxel chunks. Rooms are generated
chunk by chunk in relation to the user’s world position in the Unity engine. As a result of
this process, we end with a Unity experience that can generate an infinite form of
photogrammetry produced from the MLO mixed reality headset. The infinite house produced
by this process can be seen in Figure 2b. A demo of the experience can be found at
https://github.com/avivelor/InfinitePhotogrammetry.
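   The sketch below illustrates, again in hedged Python rather than our C# implementation, the
two ideas in this stage: room modules whose six faces carry numbered adjacency keys (two modules
may abut when the touching faces share a key, following Kleineberg’s scheme [19]), and chunk
generation driven by the user’s world position. All names and the key encoding are illustrative
assumptions.

```python
from dataclasses import dataclass

FACES = ("+x", "-x", "+y", "-y", "+z", "-z")
OPPOSITE = {"+x": "-x", "-x": "+x", "+y": "-y",
            "-y": "+y", "+z": "-z", "-z": "+z"}

@dataclass
class RoomModule:
    name: str
    keys: dict  # face -> numbered adjacency key, e.g. {"+x": 2, "-x": 0, ...}

def compatible(a: RoomModule, face: str, b: RoomModule) -> bool:
    """True when module b may sit against the given face of module a:
    the two touching faces must carry the same numbered key."""
    return a.keys[face] == b.keys[OPPOSITE[face]]

def chunks_around(player_pos, chunk_size=1.0, radius=2):
    """Integer chunk coordinates within `radius` chunks of the player,
    so rooms are collapsed only near the user's world position."""
    px = int(player_pos[0] // chunk_size)
    pz = int(player_pos[2] // chunk_size)
    return [(px + dx, pz + dz)
            for dz in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)]
```

At runtime, whenever the player crosses a chunk boundary, any coordinate returned by
chunks_around(player_pos) that has not yet been generated would be collapsed by WFC, with
compatible serving as the adjacency constraint.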


3. Results and Discussion
We were able to successfully test our system in a residential home and generate an infinite
version of the house from a ten-minute scanning session. This produced a virtual experience
in which the user could revisit the scanned geometry and walk through both a static and an
infinite WFC-generated version of the home. Our exploratory system suggests that utilizing the
spatial computing capabilities of modern XR devices can produce interesting virtual artifacts
from both a static and a generative perspective. In this section, we reflect on our system
design for creative use and consider future research areas to understand how Infinite
Photogrammetry could be better tailored as a creative tool.
   Spatial computing systems are becoming ever more mainstream through consumer applications
such as Snapchat, Instagram, and Facebook, which leverage augmented reality filters for social
communication in videos and photos [20]. As XR devices become more affordable, we may see a
similar adoption trend and should consider the creative possibilities of XR’s enhanced input
modalities and full-body interaction. Creative applications such as Minecraft Earth are
beginning to utilize AR so users can build block-based game worlds within their own homes
[21]. Other researchers are exploring extended reality as a creative tool within architecture,
art, design, games, media, and e-publishing [22], including extended reality creator tools for
collaboration and education [23, 24]. Such tools and environments have been shown to positively
impact mental health [25], learning [26], and physical exercise [27, 28].




                 (a) Static House                                   (b) Infinite House
Figure 2: On the left, we see examples of stored mesh geometry from a user during a ten-minute
scanning session. A custom depth shader is applied and overlaid on the physical world to represent
scanned geometry for the user, with distance correlating to color. The user can review the scans by
walking through the house in the Unity engine via an MLO headset, a standalone build, or a
WebGL build. On the right, we see an infinite house generated from the user’s scanned geometry and
wave function collapse. The user is able to define the WFC generative adjacency of rooms through
chunks of the scanned geometry represented by one-meter voxels. The rooms are then generated
through chunks in relation to the user’s world position in the Unity engine.
   For our proposed Infinite Photogrammetry application, more work must be done to determine
its creative possibilities and refine its use as a casual creator. More evaluation is needed
with Infinite Photogrammetry across more architectures, such as museums, outdoor parks, and
historical sites. In addition, efforts must be made to better understand user perception
and creativity within the tool. To this end, we believe that Infinite Photogrammetry may be
of interest to the following fields:

    • Video Game Designers interested in mapping real-world architecture for generative or
      static game levels;
    • Artists of virtual environments interested in emergent design patterns from real-world
      terrain;
    • Film producers scouting physical locations for filmmaking and/or capturing virtual assets
      for special effects;
    • and curious creators interested in exploring the design space of infinite photogrammetry
      for world-building and manipulation.

To this end, Infinite Photogrammetry may enable creators to capture real-world environments
with ease and creatively manipulate them from both static and PCG perspectives. We hope to
refine this system for multiple extended reality devices, such as mobile augmented reality
with ARKit, ARCore, and WebXR [29, 30]. Additionally, it may be interesting to influence
infinite photogrammetry with emotion personalization, which can be tuned from an immersive
virtual environment [31]. More expressive input systems should be crafted and explored to
enable runtime creator tools such as manipulating WFC adjacency, smoothing scanned world
geometry, and translating base color textures from world reconstruction.


4. Conclusion
In this paper, we presented the creative application of Infinite Photogrammetry. We discussed
how modern extended reality headsets can be utilized for Infinite Photogrammetry to translate
the physical world into a virtual environment. We piloted our system by scanning a residential
home, transferring a user’s surroundings into a playable experience that can be infinitely
generated with the Wave Function Collapse algorithm. Lastly, we considered the creative
possibilities of this application as well as areas for future research. Although more work
remains, this step towards Infinite Photogrammetry may enable a deeper dive into the creative
manipulation of the physical world through the virtual.


Acknowledgments
The authors would like to thank and acknowledge Professor Angus Forbes for his advice and
expert opinion during the exploration of this project.
References
 [1] M. Beccue, C. Wheelock, Research Report: Virtual Reality for Consumer Markets,
     Technical Report, Tractica Research, 2016. URL: https://www.tractica.com/research/
     virtual-reality-for-consumer-markets/.
 [2] S.-N. Chang, W.-L. Chen, Does visualize industries matter? a technology foresight of
     global virtual reality and augmented reality industry, in: 2017 International Conference
     on Applied System Innovation (ICASI), IEEE, 2017, pp. 382–385.
 [3] S. Liu, Forecast Augmented Reality (AR) and Virtual Reality (VR) Market Size Worldwide
     from 2016 to 2023 (in Billion U.S. Dollars), Statista, 2019.
 [4] Magic Leap, Magic Leap One – Creator Edition, Internet:
     https://www.magicleap.com/magic-leap-one [Jan. 19, 2019] (2019).
 [5] C. Portalés, J. L. Lerma, S. Navarro, Augmented reality and photogrammetry: A synergy
     to visualize physical and virtual city environments, ISPRS Journal of Photogrammetry
     and Remote Sensing 65 (2010) 134–142.
 [6] P. Drap, J. Seinturier, D. Scaradozzi, P. Gambogi, L. Long, F. Gauch, Photogrammetry
     for virtual exploration of underwater archeological sites, in: Proceedings of the 21st
     international symposium, CIPA, 2007, p. 1e6.
 [7] K. Pietroszek, Mixed-reality exhibition for museum of peace corps experiences using
     ahmed toolset, in: Symposium on Spatial User Interaction, 2019, pp. 1–2.
 [8] K. Compton, M. Mateas, Casual creators, in: ICCC, 2015, pp. 228–235.
 [9] K. Compton, Casual creators: Defining a genre of autotelic creativity support systems,
     University of California, Santa Cruz, 2019.
[10] M. J. Nelson, S. E. Gaudl, S. Colton, S. Deterding, Curious users of casual creators, in:
     Proceedings of the 13th International Conference on the Foundations of Digital Games,
     2018, pp. 1–6.
[11] M. Hendrikx, S. Meijer, J. Van Der Velden, A. Iosup, Procedural content generation for
     games: A survey, ACM Transactions on Multimedia Computing, Communications, and
     Applications (TOMM) 9 (2013) 1–22.
[12] J. Togelius, E. Kastbjerg, D. Schedl, G. N. Yannakakis, What is procedural content gener-
     ation? mario on the borderline, in: Proceedings of the 2nd international workshop on
     procedural content generation in games, 2011, pp. 1–6.
[13] M. Gumin, Wavefunctioncollapse, GitHub repository (2016).
[14] I. Karth, A. M. Smith, Wavefunctioncollapse is constraint solving in the wild, in: Proceed-
     ings of the 12th International Conference on the Foundations of Digital Games, 2017, pp.
     1–10.
[15] A. Sandhu, Z. Chen, J. McCoy, Enhancing wave function collapse with design-level
     constraints, in: Proceedings of the 14th International Conference on the Foundations of
     Digital Games, 2019, pp. 1–9.
[16] Unity Technologies, Unity real-time development platform | 3d, 2d vr & ar, Internet:
     https://unity.com/ [Jun. 06, 2019] (2019).
[17] Magic Leap, Magic Leap Developer – Spatial Meshing, Internet:
     https://developer.magicleap.com/en-us/learn/guides/meshing-in-unity [May 29, 2020] (2019).
[18] D. DeTone, T. Malisiewicz, A. Rabinovich, Toward geometric deep slam, arXiv preprint
     arXiv:1707.07410 (2017).
[19] M. Kleineberg, Infinite procedurally generated city with the wave function collapse
     algorithm, Internet: https://marian42.de/article/wfc/ [May. 29, 2020] (2019).
[20] D. Harborth, Augmented reality in information systems research: a systematic literature
     review, in: Twenty-third Americas Conference on Information Systems, Boston, 2017.
[21] S. Khanna, Augmented reality: The present and the future, CYBERNOMICS 1 (2019)
     15–18.
[22] M. Abbasi, P. Vassilopoulou, L. Stergioulas, Technology roadmap for the creative industries,
     Creative Industries Journal 10 (2017) 40–58.
[23] D. Andone, M. Frydenberg, Experiences in online collaborative learning with augmented
     reality., eLearning & Software for Education 2 (2017).
[24] S. Serafin, A. Adjorlu, N. Nilsson, L. Thomsen, R. Nordahl, Considerations on the use
     of virtual and augmented reality technologies in music education, in: 2017 IEEE Virtual
     Reality Workshop on K-12 Embodied Learning through Virtual & Augmented Reality
     (KELVAR), IEEE, 2017, pp. 1–4.
[25] D. Potts, K. Loveys, H. Ha, S. Huang, M. Billinghurst, E. Broadbent, Zeng: Ar neurofeedback
     for meditative mixed reality, in: Proceedings of the 2019 on Creativity and Cognition,
     2019, pp. 583–590.
[26] G. Papanastasiou, A. Drigas, C. Skianis, M. Lytras, E. Papanastasiou, Virtual and augmented
     reality effects on k-12, higher and tertiary education students’ twenty-first century skills,
     Virtual Reality 23 (2019) 425–436.
[27] K. Kunze, K. Minamizawa, S. Lukosch, M. Inami, J. Rekimoto, Superhuman sports: Applying
     human augmentation to physical exercise, IEEE Pervasive Computing 16 (2017) 14–17.
[28] A. Elor, M. Teodorescu, S. Kurniawan, Project star catcher: A novel immersive virtual reality
     experience for upper limb rehabilitation, ACM Transactions on Accessible Computing
     (TACCESS) 11 (2018) 1–25.
[29] J. Linowes, K. Babilinski, Augmented Reality for Developers: Build practical augmented
     reality applications with Unity, ARCore, ARKit, and Vuforia, Packt Publishing Ltd, 2017.
[30] B. MacIntyre, T. F. Smith, Thoughts on the future of webxr and the immersive web, in:
     2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-
     Adjunct), IEEE, 2018, pp. 338–342.
[31] A. Elor, A. Song, isam: Personalizing an artificial intelligence model for emotion with
     pleasure-arousal-dominance in immersive virtual reality, in: 2020 15th IEEE International
     Conference on Automatic Face and Gesture Recognition (FG 2020)(FG), IEEE, 2020, pp.
     583–587.