<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploring the Creative Possibilities of Infinite Photogrammetry through Spatial Computing and Extended Reality with Wave Function Collapse</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aviv Elor</string-name>
          <email>aelor@ucsc.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Samantha Conde</string-name>
          <email>sconde@ucsc.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of California, Santa Cruz, Department of Computational Media, Jack Baskin School of Engineering</institution>
          ,
          <addr-line>1156 High St, Santa Cruz, California, 95064</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Modern extended reality systems that merge virtual and augmented reality provide a unique design space for creative applications. These devices have begun to incorporate spatial computing, or methods of runtime digital photogrammetry which translate the physical world into the virtual. In this study, we examine the use of extended reality for “infinite photogrammetry,” a system of mapping the physical world into a virtual experience and procedurally generating an infinite version of the scanned architecture. We explore our system through a use case of mapping a residential home for infinite photogrammetry with the Magic Leap Spatial Computing Headset, Wave Function Collapse Algorithm, and Unity Game Engine. We conclude with a discussion on the creative applications of infinite photogrammetry and considerations for future research.</p>
      </abstract>
      <kwd-group>
        <kwd>Infinite Photogrammetry</kwd>
        <kwd>Photogrammetry</kwd>
        <kwd>Spatial Computing</kwd>
        <kwd>Extended Reality</kwd>
        <kwd>Virtual Reality</kwd>
        <kwd>Augmented Reality</kwd>
        <kwd>Wave Function Collapse</kwd>
        <kwd>Procedural Content Generation</kwd>
        <kwd>Applied Generative Algorithms</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Modern extended reality (XR) systems have come a long way technologically in enhancing user
immersion through widening the field of view, increasing frame rate, leveraging low-latency
motion capture, and providing realistic surround sound [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. As a result, a new wave
of mass adoption of commercial XR Head-Mounted Displays (HMDs) such as the Magic Leap
has entered the market, with over 200 million systems projected to be sold since 2016 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These
systems are becoming ever more mobile and intrinsic to the average consumer’s entertainment
experience, enabling a mode of full-body engagement combining the physical and virtual world
[
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. More recently, these devices have begun incorporating simultaneous localization and
mapping to transfer the physical world’s architecture into the digital environment, as seen with
the photogrammetry-like spatial computing and meshing capabilities of the Magic Leap One [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
These media provide new opportunities to explore tools for casual creation and generative
computing.
      </p>
      <p>Joint Proceedings of the ICCC 2020 Workshops (ICCC-WS 2020), September 7-11 2020, Coimbra (PT) / Online. https://www.avivelor.com/ (A. Elor); https://samanthaconde.cargo.site (S. Conde). © 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        The combined use of XR and photogrammetry has been increasing due to the benefits that result
from the pairing. Its main usage has been to reconstruct objects
or locations from the real world in a mixed reality environment. Virtual reality (VR) has
received most of the attention for recreating real-life objects and locations, but the time it
takes to develop these reconstructions is rarely mentioned. VR usually demands more time,
precision, and accuracy to develop an object than augmented reality (AR). Portalés et al. have
found that utilizing AR with photogrammetry is more cost-efficient for creating
such objects [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Moreover, the time
saved by using AR with photogrammetry can exceed 50% [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. AR and photogrammetry thus offer advantages not only in being more
time- and cost-efficient but also in providing accessibility to people. Two examples where AR
and photogrammetry were utilized to create more accessible environments are Drap et al.'s
VENUS project and Pietroszek's mixed reality exhibition. Drap et al. used photogrammetry
to survey marine areas of Pianosa island, a step forward toward having archaeologists
investigate untouched and unreachable areas of the deep ocean [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This approach makes it possible to
digitally archive and preserve underwater findings without compromising them. In a related
application, Pietroszek created a mixed reality exhibition to improve access for people
who cannot visit a physical exhibition due to location, disability, or socioeconomic status [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
From these works, we argue that the incorporation of extended reality devices may provide
unique opportunities for casual recreation.
      </p>
      <p>
        In 2015, Compton &amp; Mateas defined an alternative design space for system creation: “A Casual
Creator is an interactive system that encourages the fast, confident, and pleasurable exploration
of a possibility space, resulting in the creation or discovery of surprising new artifacts that
bring feelings of pride, ownership, and creativity to the users that make them” [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. These tools
emphasize creativity and design support by enabling a flow of choice and rapid iteration while
providing both passive and active automation [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Moreover, the curious users of casual creators
have been hypothesized to be driven primarily by the curiosity and capability of a system’s
design space [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In this study, we explore the usage of an XR headset to understand the
potential of these devices for photogrammetry, converting the physical world into the
virtual. We also examine autonomy for XR-enabled photogrammetry, exploring how it can be
extended toward generative experiences through procedural content generation (PCG).
      </p>
      <p>
        PCG algorithms applied to photogrammetry may produce some interesting design artifacts
for game and experience design. As games have been evolving rapidly, so has the use of PCG
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Designers use PCG to implement content that has been automatically generated from
assets at random [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In this definition, content is a broad term for what researchers, game
designers, and academics would want to generate. Applying PCG to photogrammetry helps
game designers create infinite possibilities for levels, non-playable characters, and many other
objects in a digital game. This combination allows for more opportunities to surprise users and
even the designers themselves.
      </p>
      <p>
        An algorithm for PCG that has been gaining traction in the creative design world is Gumin’s
WaveFunctionCollapse (WFC) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. WFC is a non-backtracking, greedy search algorithm that
enables large outputs to be generated from a small number of constraints determined by a window
of input media. The algorithm has attracted the attention of game creators, PCG researchers,
and level designers in recent years [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ]. It enables designers to reduce the time and
production costs of asset creation while providing them with constraints to manipulate pattern
generation. For our study, we are interested in extending this algorithm to 3D world generation
by utilizing photogrammetry with an extended reality headset. To the best of our knowledge,
this study is one of the first to bridge spatial computing with WFC for Infinite Photogrammetry.
We hope to examine the combination of these technologies to demonstrate a proof of concept
and consider its creative applications for future research.
      </p>
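      <p>To make the algorithm's core loop concrete, the following Python sketch runs the observe-collapse-propagate cycle of WFC over a one-dimensional strip of tiles. The tile names and adjacency table are invented for illustration and are not drawn from Gumin's implementation or from our system.</p>
      <preformat>
```python
import random

# Hypothetical tile set (illustrative only): each tile lists which tiles
# may legally appear immediately to its right.
ALLOWED_RIGHT = {
    "sea": {"sea", "coast"},
    "coast": {"sea", "coast", "land"},
    "land": {"coast", "land"},
}

def collapse_strip(length, seed=0):
    """Greedy, non-backtracking WFC over a 1D strip of tiles."""
    rng = random.Random(seed)
    # Every cell starts in superposition: all tiles are possible.
    wave = [set(ALLOWED_RIGHT) for _ in range(length)]
    while any(len(cell) > 1 for cell in wave):
        # Observe: pick an uncollapsed cell with the fewest remaining options.
        i = min((k for k, c in enumerate(wave) if len(c) > 1),
                key=lambda k: len(wave[k]))
        wave[i] = {rng.choice(sorted(wave[i]))}  # collapse to a single tile
        # Propagate: prune neighbors until the constraints are consistent again.
        changed = True
        while changed:
            changed = False
            for j in range(length - 1):
                # The right neighbor must be reachable from some tile on the left.
                allowed = {t for left in wave[j] for t in ALLOWED_RIGHT[left]}
                if not wave[j + 1] <= allowed:
                    wave[j + 1] &= allowed
                    changed = True
                # The left cell must permit at least one tile on its right.
                ok_left = {l for l in wave[j] if ALLOWED_RIGHT[l] & wave[j + 1]}
                if ok_left != wave[j]:
                    wave[j] = ok_left
                    changed = True
    return [next(iter(cell)) for cell in wave]
```
      </preformat>
      <p>A full implementation must also handle contradictions (a cell whose option set empties), typically by restarting; the toy tileset above never produces one.</p>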
      <p>Figure 1: (a) Scanning the Physical World; (b) Infinite Photogrammetry Pipeline.</p>
    </sec>
    <sec id="sec-2">
      <title>2. System Design</title>
      <p>This project leverages the capabilities of the Magic Leap Spatial Computing Headset when
combined with the Wave Function Collapse Algorithm and the Unity Game Engine. The
goal was to create a playable experience that generates infinite photogrammetry of a scanned
environment. To this end, we designed our system to (1) create a methodology of translating
physical world geometry into virtual 3D environments with Magic Leap, (2) adapt a prior Wave
Function Collapse algorithm to generate new architecture via Unity3D, and (3) explore the
application of infinite photogrammetry. This process can be described in four stages: capturing,
meshing, formatting, and building, as shown in Figure 1b. In this section, we discuss the tools
we used to enable this system’s design.</p>
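      <p>The four stages above can be sketched as a simple data pipeline. Every function body below is a named placeholder standing in for the corresponding Magic Leap or Unity step; the inputs and outputs are invented for illustration only.</p>
      <preformat>
```python
def capture(walkthrough):
    """Stage 1 (capturing): record raw surface observations while the
    user walks the physical space (placeholder for the MLO spatial mapper)."""
    return {"observations": walkthrough}

def mesh(captured):
    """Stage 2 (meshing): reconstruct runtime triangle meshes from the
    observations (placeholder for world reconstruction)."""
    return {"meshes": [captured["observations"]]}

def format_for_wfc(meshed):
    """Stage 3 (formatting): serialize meshes to assets and tag them with
    adjacency keys so WFC can treat them as modules (placeholder)."""
    return {"modules": meshed["meshes"], "adjacency_keys": {}}

def build(formatted):
    """Stage 4 (building): hand the modules to WFC and assemble a playable
    scene (placeholder)."""
    return {"scene": formatted["modules"]}

def run_pipeline(walkthrough):
    """Run the four stages in order, threading the output of each into the next."""
    state = walkthrough
    for stage in (capture, mesh, format_for_wfc, build):
        state = stage(state)
    return state
```
      </preformat>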
      <p>
        The Magic Leap One (MLO) headset, an extended reality interaction system, is a “spatial
computing” headset that overlays augmented reality while performing simultaneous localization
and mapping on the physical world [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. MLO was examined as a development platform because,
at the time of this study, few to no academic evaluations existed for development testing
of our proposed application. Seeing the physical world around the user is critical for safety
when mapping environments. The untethered headset differs from other commercially available
XR HMDs by projecting light directly into the user’s eyes while also enabling higher input
modalities through hand tracking, eye tracking, dynamic sound fields, and 6-Degree of Freedom
(DoF) controllers with haptic feedback [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        To enable the visualization and interaction with the virtual world, the Unity Game Engine
was chosen as the primary driver of our experience. Unity is a flexible real-time 3D development
platform that enables the creation, operation, and rapid prototyping of interactive virtual
content [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Unity was chosen for its flexible capabilities, which allow the
same experience to be built across multiple platforms such as WebGL, Magic Leap, HTC Vive,
Windows, Mac, and more [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Thus, we developed our experience in Unity 2019.1.5f1 through
two separate build instances: Lumin (MLO SDK 0.21) and WebGL (OpenGL 4.5).
      </p>
      <p>
        To obtain a mesh of the physical world, we utilized the MLO World Reconstruction Spatial
Mapper, an algorithm to detect real-world surfaces and construct a runtime virtual mesh
to represent the real world's collisions for the game engine [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
        ]. We converted the world
reconstruction mapper into a serialized mesh during runtime, which is then stored as an asset
for later manipulation. This process allows us to capture the rough geometry of the user’s
surroundings as they walk through and map their desired game architecture, as shown in Figure
1a. From there, we translate the asset into a playable scene to allow the user to walk through
and navigate their scans virtually.
      </p>
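      <p>Outside of Unity, the idea of freezing a runtime reconstruction mesh into a stored asset can be sketched in a few lines of Python. The Wavefront OBJ format here is our stand-in for Unity's serialized mesh assets, not the format the Magic Leap pipeline actually emits.</p>
      <preformat>
```python
def serialize_mesh_to_obj(vertices, triangles):
    """Freeze a runtime mesh (vertex positions + triangle indices) into an
    OBJ-format string that can be saved and reloaded as a static asset."""
    lines = ["# serialized world-reconstruction mesh"]
    for x, y, z in vertices:
        lines.append(f"v {x} {y} {z}")
    for a, b, c in triangles:
        # OBJ face indices are 1-based.
        lines.append(f"f {a + 1} {b + 1} {c + 1}")
    return "\n".join(lines) + "\n"

def deserialize_obj(text):
    """Reload the stored asset back into vertex and triangle lists."""
    vertices, triangles = [], []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "v":
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts and parts[0] == "f":
            triangles.append(tuple(int(p) - 1 for p in parts[1:4]))
    return vertices, triangles
```
      </preformat>
      <p>The round trip (serialize, then deserialize) reproduces the captured geometry, which is what lets a scan taken at runtime be revisited later as a static scene.</p>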
      <p>We examined this process during a ten-minute session as a user walked through their home.
This consisted of rapidly scanning a 1459-square-foot residential home with two bedrooms,
two bathrooms, one office, a living room, and a kitchen. The results of this process can be
seen in Figures 1a and 2a, where some of the rooms are reconstructed for the user to virtually
walk around their scans in the Unity game engine. After the scanned geometry is captured and
serialized to independent mesh assets, we then proceed to format the assets for WFC.</p>
      <p>To enable PCG, we modified Kleineberg's Infinite City adaptation of WFC [19]. Using the
serialized meshes of the geometry scanned by the MLO spatial mapper, we divide the rooms
into one-meter voxels and define WFC constraints by mapping the six sides of each room
with numbered adjacency keys, as shown in Figure 2b. The user is then able to define the
WFC generative adjacency of rooms through one-meter voxel chunks. Such rooms are
generated chunk by chunk in relation to the user's world position in the Unity engine. As a
result of this process, we end with a Unity experience that can generate an infinite form of
photogrammetry produced from the MLO Mixed Reality headset. The infinite house produced
by this process can be seen in Figure 2b. A demo of the experience can be found at
https://github.com/avivelor/InfinitePhotogrammetry.</p>
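      <p>A minimal sketch of the chunking idea follows, under hypothetical names of our own: each scanned room becomes a one-meter module whose six faces carry numbered adjacency keys, and chunks are kept generated on demand around the player's world position. None of the identifiers below come from Kleineberg's code.</p>
      <preformat>
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Module:
    """A one-meter voxel module cut from the scanned mesh. The six face keys
    (ordered x+, x-, y+, y-, z+, z-) say which faces are allowed to touch."""
    name: str
    faces: tuple  # six integer adjacency keys

def compatible(a: Module, b: Module, axis: int) -> bool:
    """b may sit on a's positive side of `axis` iff the facing keys match."""
    return a.faces[2 * axis] == b.faces[2 * axis + 1]

def chunks_around(player_pos, radius=2):
    """Chunk coordinates to keep generated, centred on the player (2D here);
    chunks outside this window can be discarded as the player moves."""
    px, pz = int(player_pos[0]), int(player_pos[1])
    return {(x, z)
            for x in range(px - radius, px + radius + 1)
            for z in range(pz - radius, pz + radius + 1)}
```
      </preformat>
      <p>Keying compatibility to face labels rather than to whole modules is what lets differently scanned rooms tile against each other, and regenerating the chunk window as the player moves is what makes the output effectively infinite.</p>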
    </sec>
    <sec id="sec-3">
      <title>3. Results and Discussion</title>
      <p>We were able to successfully test our system in a residential home and generate an infinite
version of the house from a ten-minute scanning session. This produced a virtual experience
in which the user was able to re-visit the scanned geometry and walk through both a static
and an infinite WFC-generated version of the home. Accordingly, our exploratory system
suggests that utilizing the spatial computing capabilities of modern XR devices may produce
interesting virtual artifacts from both a static and a generative perspective. In this section, we
reflect on our system design for creative use and consider future research areas to understand
how Infinite Photogrammetry could be better tailored as a creative tool.</p>
      <p>Spatial computing systems are becoming ever more mainstream with consumer applications
such as Snapchat, Instagram, and Facebook, which leverage augmented reality filters for social
communication in videos and photos [20]. As XR devices become more affordable, we may see
a similar trend in this adoption and should consider the creative possibilities of XR's enhanced
input modalities and full-body interaction. More creative applications, such as Minecraft Earth,
are beginning to utilize AR for users to build block-based game worlds within their own homes
[21]. Other researchers are exploring extended reality for creative tools within architecture,
art, design, games, media, and e-publishing [22]. This includes extended reality creator tools
such as collaboration and education [23, 24]. Such tools and environments have been shown to
positively impact mental health [25], learning [26], and physical exercise [27, 28].</p>
      <p>Figure 2: (a) Static House; (b) Infinite House.</p>
      <p>For our proposed Infinite Photogrammetry application, more work must be done to determine
its creative possibilities and refine its uses toward a casual creator. More evaluation must be
done with Infinite Photogrammetry on additional architectures such as museums, outdoor parks, and
historical sites. In addition, efforts must be made to increase understanding of user perception
and creativity within the tool. To this end, we believe that Infinite Photogrammetry may be
of interest to the following fields:
• Video Game Designers interested in mapping real-world architecture for generative or
static game levels;
• Artists of virtual environments interested in emergent design patterns from real-world
terrain;
• Film producers scouting physical locations for filmmaking and/or capturing virtual assets
for special effects;
• and curious creators interested in exploring the design space of infinite photogrammetry
for world-building and manipulation.</p>
      <p>To this end, Infinite Photogrammetry may enable a system of creators to capture real-world
environments with ease and creatively manipulate them from both static and PCG perspectives.
We hope to refine this system for multiple extended reality devices such as mobile augmented
reality with ARKit, ARCore, and WebXR [29, 30]. Additionally, it may be interesting to influence
infinite photogrammetry with emotion personalization, which can be tuned from an immersive
virtual environment [31]. More significant input systems should be crafted and explored to
enable runtime creator tools such as manipulating WFC adjacency, smoothing scanned world
geometry, and translating base color textures from world reconstruction.
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>In this paper, we presented the creative application of Infinite Photogrammetry. We discussed how
modern extended reality headsets can be utilized for Infinite Photogrammetry to translate the
physical world into a virtual environment. We piloted our system by scanning a residential
home to transfer a user's surroundings into a playable experience that can be infinitely generated
with the Wave Function Collapse algorithm. Lastly, we considered the creative possibilities of
this application as well as areas for future research. Although more work is to be done, a step
towards Infinite Photogrammetry may enable a deeper dive into the creative manipulation of
the physical world through the virtual.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>The authors would like to thank and acknowledge Professor Angus Forbes for his advice and
expert opinion during the exploration of this project.</p>
      <p>[19] M. Kleineberg, Infinite procedurally generated city with the wave function collapse algorithm, Internet: https://marian42.de/article/wfc/ [May. 29, 2020] (2019).
[20] D. Harborth, Augmented reality in information systems research: a systematic literature review, in: Twenty-third Americas Conference on Information Systems, Boston, 2017.
[21] S. Khanna, Augmented reality: The present and the future, CYBERNOMICS 1 (2019) 15–18.
[22] M. Abbasi, P. Vassilopoulou, L. Stergioulas, Technology roadmap for the creative industries, Creative Industries Journal 10 (2017) 40–58.
[23] D. Andone, M. Frydenberg, Experiences in online collaborative learning with augmented reality, eLearning &amp; Software for Education 2 (2017).
[24] S. Serafin, A. Adjorlu, N. Nilsson, L. Thomsen, R. Nordahl, Considerations on the use of virtual and augmented reality technologies in music education, in: 2017 IEEE Virtual Reality Workshop on K-12 Embodied Learning through Virtual &amp; Augmented Reality (KELVAR), IEEE, 2017, pp. 1–4.
[25] D. Potts, K. Loveys, H. Ha, S. Huang, M. Billinghurst, E. Broadbent, Zeng: AR neurofeedback for meditative mixed reality, in: Proceedings of the 2019 on Creativity and Cognition, 2019, pp. 583–590.
[26] G. Papanastasiou, A. Drigas, C. Skianis, M. Lytras, E. Papanastasiou, Virtual and augmented reality effects on K-12, higher and tertiary education students' twenty-first century skills, Virtual Reality 23 (2019) 425–436.
[27] K. Kunze, K. Minamizawa, S. Lukosch, M. Inami, J. Rekimoto, Superhuman sports: Applying human augmentation to physical exercise, IEEE Pervasive Computing 16 (2017) 14–17.
[28] A. Elor, M. Teodorescu, S. Kurniawan, Project star catcher: A novel immersive virtual reality experience for upper limb rehabilitation, ACM Transactions on Accessible Computing (TACCESS) 11 (2018) 1–25.
[29] J. Linowes, K. Babilinski, Augmented Reality for Developers: Build practical augmented reality applications with Unity, ARCore, ARKit, and Vuforia, Packt Publishing Ltd, 2017.
[30] B. MacIntyre, T. F. Smith, Thoughts on the future of webxr and the immersive web, in: 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), IEEE, 2018, pp. 338–342.
[31] A. Elor, A. Song, isam: Personalizing an artificial intelligence model for emotion with pleasure-arousal-dominance in immersive virtual reality, in: 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)(FG), IEEE, 2020, pp. 583–587.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Beccue</surname>
          </string-name>
          , C. Wheelock,
          <source>Research Report: Virtual Reality for Consumer Markets</source>
          ,
          <source>Technical Report, Tractica Research</source>
          ,
          <year>2016</year>
          . URL: https://www.tractica.com/research/virtual-reality-for-consumer-markets/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.-N.</given-names>
            <surname>Chang</surname>
          </string-name>
          , W.-L. Chen,
          <article-title>Does visualize industries matter? a technology foresight of global virtual reality and augmented reality industry</article-title>
          ,
          <source>in: 2017 International Conference on Applied System Innovation (ICASI)</source>
          , IEEE,
          <year>2017</year>
          , pp.
          <fpage>382</fpage>
          -
          <lpage>385</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Forecast Augmented (AR) and Virtual Reality (VR) Market Size Worldwide From 2016 to 2023 (in Billion US Dollars)</article-title>
          , Statista,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>Magic Leap</string-name>
          ,
          <article-title>Magic leap one-creator edition</article-title>
          , Internet: https://www. magicleap. com/magicleap-one
          <source>[Jan. 19</source>
          ,
          <year>2019</year>
          ] (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Portalés</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Lerma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Navarro</surname>
          </string-name>
          ,
          <article-title>Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments</article-title>
          ,
          <source>ISPRS Journal of Photogrammetry and Remote Sensing</source>
          <volume>65</volume>
          (
          <year>2010</year>
          )
          <fpage>134</fpage>
          -
          <lpage>142</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Drap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Seinturier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Scaradozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gambogi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Long</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Gauch</surname>
          </string-name>
          ,
          <article-title>Photogrammetry for virtual exploration of underwater archeological sites</article-title>
          ,
          <source>in: Proceedings of the 21st international symposium, CIPA</source>
          ,
          <year>2007</year>
          , p.
          <fpage>1e6</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Pietroszek</surname>
          </string-name>
          ,
          <article-title>Mixed-reality exhibition for museum of peace corps experiences using ahmed toolset</article-title>
          ,
          <source>in: Symposium on Spatial User Interaction</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>2</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K.</given-names>
            <surname>Compton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mateas</surname>
          </string-name>
          , Casual creators., in: ICCC,
          <year>2015</year>
          , pp.
          <fpage>228</fpage>
          -
          <lpage>235</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>K.</given-names>
            <surname>Compton</surname>
          </string-name>
          ,
          <article-title>Casual creators: Defining a genre of autotelic creativity support systems</article-title>
          , University of California, Santa Cruz,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Nelson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. E.</given-names>
            <surname>Gaudl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Colton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Deterding</surname>
          </string-name>
          ,
          <article-title>Curious users of casual creators</article-title>
          ,
          <source>in: Proceedings of the 13th International Conference on the Foundations of Digital Games</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hendrikx</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Meijer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Van Der</given-names>
            <surname>Velden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Iosup</surname>
          </string-name>
          ,
          <article-title>Procedural content generation for games: A survey</article-title>
          ,
          <source>ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)</source>
          <volume>9</volume>
          (
          <year>2013</year>
          )
          <fpage>1</fpage>
          -
          <lpage>22</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Togelius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kastbjerg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schedl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. N.</given-names>
            <surname>Yannakakis</surname>
          </string-name>
          ,
          <article-title>What is procedural content generation? mario on the borderline</article-title>
          ,
          <source>in: Proceedings of the 2nd international workshop on procedural content generation in games, 2011</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gumin</surname>
          </string-name>
          , Wavefunctioncollapse, GitHub repository (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>I.</given-names>
            <surname>Karth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>Wavefunctioncollapse is constraint solving in the wild</article-title>
          ,
          <source>in: Proceedings of the 12th International Conference on the Foundations of Digital Games</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sandhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>McCoy</surname>
          </string-name>
          ,
          <article-title>Enhancing wave function collapse with design-level constraints</article-title>
          ,
          <source>in: Proceedings of the 14th International Conference on the Foundations of Digital Games</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>Unity Technologies</string-name>
          ,
          <article-title>Unity real-time development platform | 3d, 2d vr &amp; ar</article-title>
          , Internet: https://unity.com/ [Jun. 06,
          <year>2019</year>
          ] (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>Magic Leap</string-name>
          ,
          <article-title>Magic leap developer - spatial meshing</article-title>
          , Internet: https://developer.magicleap.com/en-us/learn/guides/meshing-in
          <source>-unity [May. 29</source>
          ,
          <year>2020</year>
          ] (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D.</given-names>
            <surname>DeTone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Malisiewicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rabinovich</surname>
          </string-name>
          ,
          <article-title>Toward geometric deep slam</article-title>
          , arXiv preprint arXiv:1707.07410 (2017).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>