Understanding Creators' Mental Models in Immersive Virtual Reality Programming

Margherita Andrao1,2, Lei Zhang3, Lucas Elvira-Martín4, Barbara Treccani1, Massimo Zancanaro1,2, Paloma Díaz4 and Andrea Bellucci4

1 University of Trento, Trento, Italy
2 Fondazione Bruno Kessler, Trento, Italy
3 University of Michigan, MI, USA
4 Universidad Carlos III de Madrid, Leganés, Madrid, Spain

Abstract
Empowering users - independently of their programming expertise - to create dynamic elements and scenes of virtual environments directly while immersed in them has gained increasing attention in recent years as a way to realize the full potential of immersive technologies. Immersive authoring tools are a promising means of giving developers, creators, and researchers a natural way to program their virtual environments, supporting their goals, needs, and creativity. Studying the mental models and reasoning strategies of users with different levels of programming expertise as they engage with these tools can shed light on the potential, limits, and open challenges in designing immersive authoring tools that effectively support people who work with immersive technologies. In this paper, we present a study design that aims to investigate the reasoning strategies and mental representations of Virtual Reality researchers, developers, and creators while they create new dynamic scenes with an immersive authoring tool named FlowMatic.

Keywords
Immersive authoring, Virtual Reality, End-user development, Mental models

RealXR: Prototyping and Developing Real-World Applications for Extended Reality, June 4, 2024, Arenzano (Genoa), Italy
margherita.andrao@unitn.it (M. Andrao); raynez@umich.edu (L. Zhang); luelvira@pa.uc3m.es (L. Elvira-Martín); barbara.treccani@unitn.it (B. Treccani); massimo.zancanaro@unitn.it (M. Zancanaro); pdp@inf.uc3m.es (P. Díaz); abellucc@inf.uc3m.es (A. Bellucci)
ORCID: 0000-0003-2245-9835 (M. Andrao); 0009-0007-5423-7627 (L. Elvira-Martín); 0000-0001-8028-0708 (B. Treccani); 0000-0002-1554-5703 (M. Zancanaro); 0000-0002-9493-7739 (P. Díaz); 0000-0003-4035-5271 (A. Bellucci)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. Introduction

Recent advancements in Extended Reality (XR) systems – including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) – have opened up new opportunities for employing immersive technologies in various fields, such as research in education [1], psychology [2], and medical surgery [3]. As these technologies have become more readily available, it is important to design tools that are flexible enough to be used in a wide range of situations, allowing people to configure their extended reality according to their specific needs and skills. Yet the flexible use of immersive technologies is hindered by a high creation barrier, such as the requirement for specific programming skills, which creates a potential gap in adoption across different fields and limits the opportunities for application. One promising direction is to envision and develop appropriate authoring tools that enable end-users - regardless of their programming expertise - to effectively program their virtual environment according to their needs, creativity, and goals, as well as their physical and cognitive abilities (see [4]).
This also resonates with the goals of the End-User Development (EUD) research field [5], which aims to empower non-programmer users to create, modify, and define the behaviors of their digital artifacts. However, enabling end-users, including people with little to no programming experience, to create VR applications is uniquely challenging, since it requires an understanding of advanced concepts, such as 3D graphics and modeling, as well as knowledge of different programming approaches.

Traditionally, text-based programming languages have been the go-to choice for crafting interactive scenes and behaviors in VR. More recently, commercial authoring environments and game engines such as Unity or Unreal Engine introduced support for visual flow-based languages (e.g., Unreal Blueprints1) to ease development for inexperienced programmers, offering a more accessible and intuitive visual approach to programming VR environments and leveraging the extensive research conducted in the field of visual languages (e.g., [6]). However, these approaches come with limitations: they operate in two-dimensional interfaces and pose challenges for 3D development, such as grasping spatial relationships and interactions within the 3D world and the need to go back and forth between the editing environment and the immersive environment for live testing.

A potential approach to overcoming these challenges is known as immersive authoring [7]. With this approach, users create, modify, and test 3D content from within the VR environment itself, allowing for direct and immersive interaction with the virtual world. In recent years, different visual immersive programming environments have been created, both in academia [8, 9, 10] and in commercial platforms such as Rec Room. For example, Zhang and Oney [8, 11] developed FlowMatic, an immersive authoring tool that allows users to create interactive VR scenes by providing a set of primitives that can be directly manipulated in a visual flow-based diagram in VR.

Immersive authoring tools have the potential to enable not only VR developers but also beginners or non-programmers to craft VR experiences, leading to a wider use of immersive technologies across diverse fields [4]. However, users' mental models [12, 13] of these tools remain underexplored. While developing immersive authoring systems is essential for advancing XR technology adoption, understanding user mental models ensures that these systems align with users' specific understanding, skills, and needs, considering how users mentally map and respond to a spatially rich and multi-sensory environment. Investigating users' mental models during immersive programming tasks (e.g., [14]) can provide valuable insights into the effectiveness of this approach, user requirements, potential improvements, and variations based on users' programming expertise, as well as into the embodied sensemaking [15] of the programming task, that is, how the immersive nature of programming in VR or AR aligns with the way individuals physically interact with and comprehend their environment.

In this paper, we discuss a study design that explores the mental representations and reasoning strategies of XR researchers, developers, and creators with varying levels of programming expertise as they interact with FlowMatic [8]; the sketch below illustrates the flow-based programming style that FlowMatic and similar tools build on.
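To give an intuition of this flow-based style, the following minimal sketch (in TypeScript; it is not FlowMatic's actual API, and all class and function names are illustrative) shows how an object's behavior can be defined by wiring an event source to an object property through a dataflow graph, rather than by writing imperative update code.

```typescript
// A minimal sketch (not FlowMatic's actual API; names are illustrative) of the
// flow-based idea: behaviors are built by wiring event sources to object
// properties through a diagram, rather than by writing imperative code.

type Listener<T> = (value: T) => void;

// A node in the dataflow graph: it forwards every emitted value to the
// nodes or properties wired to it.
class FlowNode<T> {
  private listeners: Listener<T>[] = [];
  connect(listener: Listener<T>): void {
    this.listeners.push(listener);
  }
  emit(value: T): void {
    this.listeners.forEach((listener) => listener(value));
  }
}

// A virtual object with a property that the graph can drive.
class VirtualObject {
  constructor(public name: string, public color: string = "white") {}
  setColor(color: string): void {
    this.color = color;
    console.log(`${this.name} is now ${this.color}`);
  }
}

// Wiring the diagram: a "trigger pressed" event source is mapped to a color
// and connected to the sphere's color property, entirely declaratively.
const triggerPressed = new FlowNode<boolean>();
const sphere = new VirtualObject("sphere");
triggerPressed.connect((pressed) => sphere.setColor(pressed ? "red" : "white"));

// Simulated interaction: pressing and releasing the controller trigger.
triggerPressed.emit(true);  // sphere is now red
triggerPressed.emit(false); // sphere is now white
```

In an immersive authoring tool, the same wiring is performed by grabbing and connecting visual nodes in 3D space instead of writing this code by hand.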
In addition to studying their mental models, we set out to investigate to what extent immersive flow-based programming can support users' creativity (see also [16, 17]). This research constitutes an essential step in designing more effective solutions and in maximizing the potential of VR for both expert and non-expert users who engage with this technology in their professional tasks.

1 https://www.unrealengine.com/es-ES/

Figure 1: A screenshot of the visual flow-based diagram in FlowMatic [8].

2. The Study

We designed an initial qualitative study to explore the mental representations and reasoning strategies of XR researchers, developers, and creators during their interaction with an immersive authoring tool, FlowMatic [8]. Our goal is to understand the aspects that can facilitate or hinder users' strategies for programming within VR environments with this immersive authoring tool. In particular, we aim to explore the limits and potential of EUD in this context, along with the relationship between participants' programming experience, their mental representation of the system's functionality, and the perceived support for their creativity.

Participants. For this study, we plan to recruit XR researchers, developers, and creators with varying levels of programming expertise. Since VR can be widely used to conduct experiments in different fields, we consider researchers a motivated sample for using and programming VR environments. We will recruit two groups of participants with different programming expertise: (i) participants with no formal background in programming (beginner or non-expert group) and (ii) participants with programming expertise and a formal background (expert group).

The System. FlowMatic [8] is a tool that allows users to craft interactive VR scenes while immersed in VR. It provides a set of programming primitives that can be directly manipulated in a visual flow-based diagram (see Figure 1). Users define the behaviors of virtual objects by connecting them to the programming primitives in the visual diagram.

Task and Procedure. Participants will take part in individual laboratory sessions lasting approximately two hours. Each session will be video- and audio-recorded, then transcribed. The setup will include a Meta Quest 2 HMD for immersing participants in the VR environment they will program, a computer for recording the participant's VR perspective (live casting), and a video camera for recording the environment during the interaction. After a description of the system (supported by an explanatory video), participants will complete a familiarization phase in VR through guided training with FlowMatic (the same as Task 1 in [11]). Then, we will ask each participant to program two dynamic VR scenes using FlowMatic while thinking aloud [18], in order to elicit their mental models during the interaction with the immersive authoring tool. In the first task, we will ask participants to watch a video displaying a target scene and to recreate it in VR using FlowMatic. The video will show a simplified version of the Posner cueing task, a well-known psychological paradigm used to investigate the spatial orientation of visual attention [19]; an illustrative sketch of the trial logic is given below.
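For readers unfamiliar with the paradigm, the following sketch outlines the logic of a single trial: a fixation cross, a spatial cue on the left or right, then a target at the cued (valid) or uncued (invalid) location, with the response time recorded. It only illustrates the general paradigm under assumed timings and validity proportions; it does not describe the actual scene participants will be asked to build in FlowMatic.

```typescript
// Illustrative sketch of one trial of a simplified Posner cueing task:
// fixation -> spatial cue -> target at the cued (valid) or uncued (invalid)
// location -> response. Timings and proportions are assumptions for
// illustration only.

type Side = "left" | "right";

interface TrialResult {
  cueSide: Side;
  targetSide: Side;
  valid: boolean;        // cue and target appeared on the same side
  reactionTimeMs: number;
}

const wait = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function runTrial(awaitResponse: () => Promise<void>): Promise<TrialResult> {
  const cueSide: Side = Math.random() < 0.5 ? "left" : "right";
  // Valid trials (cue predicts the target location) are typically more frequent.
  const valid = Math.random() < 0.8;
  const targetSide: Side = valid ? cueSide : cueSide === "left" ? "right" : "left";

  console.log("show fixation cross");
  await wait(1000);
  console.log(`show cue on the ${cueSide}`);
  await wait(200);
  console.log(`show target on the ${targetSide}`);

  const start = Date.now();
  await awaitResponse(); // wait for the participant's response
  const reactionTimeMs = Date.now() - start;

  return { cueSide, targetSide, valid, reactionTimeMs };
}

// Example: a simulated response arriving roughly 350 ms after target onset.
runTrial(() => wait(350)).then((result) => console.log(result));
```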
After the completion of the first task, to assess the usability of the system, we will administer (i) the User Experience Questionnaire (UEQ [20]) and (ii) six of the Cognitive Dimensions of Notations (CDs: Viscosity, Abstraction, Closeness of Mapping, Hard Mental Operations, Provisionality, and Progressive Evaluation [21]), which are used for the usability assessment of visual languages (e.g., [22]). As a second, open-ended task, participants will be asked to create a new dynamic scene in VR using FlowMatic. This will allow us to assess their degree of familiarity with the system and how FlowMatic can support the creativity of researchers. We will adopt the Creativity Support Index (CSI [23]) to assess FlowMatic's capability to support users' creative process. For both tasks, we will measure the participants' number of errors and their completion time. Then, in a semi-structured interview, we will explicitly ask participants about their reasoning strategies and about FlowMatic's limits, requirements, potential, and challenges. Finally, participants will be asked to complete a Qualtrics form with simple questions about their experience with VR/AR in work and non-work settings, their research experience, familiarity with programming languages and EUD systems, and some demographic information (age, sex, education, etc.).

3. Discussion and Conclusion

In this paper, we presented the design of an exploratory study aimed at investigating the reasoning strategies and mental models of researchers with different levels of programming expertise during their interaction with an immersive authoring tool, FlowMatic [8], for creating interactive VR scenes. Our goal is to explore which aspects of this tool effectively support the creation of new scenarios, as well as the creativity and expressiveness of users with different backgrounds. Specifically, we believe that investigating mental models and reasoning strategies can shed light on how to design systems that support not only developers in creating VR environments but also non-expert users (e.g., researchers without a formal programming background) who would like to use VR in their work. We expect that individuals will be supported to different degrees in creating new scenarios, depending on their reasoning strategies and mental representations [12, 24], and that the effectiveness of the created scenarios will lead to a greater expression of their creativity [25]. Overall, we anticipate that the immersive experience of directly manipulating visual primitives and objects can support embodied sensemaking [15] in programming tasks by offering a unique and intuitive experience that influences participants' mental models. We expect this to be particularly true for individuals who are more naturally inclined to grasp this physical/embodied dimension of the interaction.

Acknowledgments

This work is supported by the Spanish State Research Agency (AEI) under grant Sense2MakeSense (PID2019-109388GB-I00).

References

[1] S. Barteit, L. Lanfermann, T. Bärnighausen, F. Neuhann, C. Beiersmann, et al., Augmented, mixed, and virtual reality-based head-mounted devices for medical education: systematic review, JMIR Serious Games 9 (2021) e29080.
[2] T. D. Parsons, A. Gaggioli, G. Riva, Extended reality for the clinical, affective, and social neurosciences, Brain Sciences 10 (2020) 922.
[3] N. B. Dadario, T. Quinoa, D. Khatri, J. Boockvar, D. Langer, R. S. D'Amico, Examining the benefits of extended reality in neurosurgery: A systematic review, Journal of Clinical Neuroscience 94 (2021) 41–53. URL: https://www.sciencedirect.com/science/article/pii/S0967586821004938. doi:10.1016/j.jocn.2021.09.037.
[4] H. Coelho, P. Monteiro, G. Gonçalves, M. Melo, M. Bessa, Authoring tools for virtual reality experiences: a systematic review, Multimedia Tools and Applications 81 (2022) 28037–28060.
[5] H. Lieberman, F. Paternò, M. Klann, V. Wulf, End-user development: An emerging paradigm, in: End User Development, Springer, 2006, pp. 1–8.
[6] A. Kelly, R. B. Shapiro, J. de Halleux, T. Ball, Arcadia: A rapid prototyping platform for real-time tangible interfaces, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 1–8.
[7] G. A. Lee, C. Nelles, M. Billinghurst, G. J. Kim, Immersive authoring of tangible augmented reality applications, in: Third IEEE and ACM International Symposium on Mixed and Augmented Reality, IEEE, 2004, pp. 172–181.
[8] L. Zhang, S. Oney, FlowMatic: An immersive authoring tool for creating interactive scenes in virtual reality, in: Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, UIST '20, Association for Computing Machinery, New York, NY, USA, 2020, pp. 342–353. URL: https://doi.org/10.1145/3379337.3415824. doi:10.1145/3379337.3415824.
[9] J. T. Murray, Realityflow: Open-source multi-user immersive authoring, in: 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2022, pp. 65–68. doi:10.1109/VRW55335.2022.00024.
[10] D. Pintani, A. Caputo, D. Mendes, A. Giachetti, Cider: Collaborative interior design in extended reality, in: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, 2023, pp. 1–11.
[11] L. Zhang, S. Oney, Studying the benefits and challenges of immersive dataflow programming, in: 2019 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 2019, pp. 223–227. doi:10.1109/VLHCC.2019.8818856.
[12] D. Norman, The Design of Everyday Things: Revised and Expanded Edition, Basic Books, 2013.
[13] P. N. Johnson-Laird, Mental models and human reasoning, Proceedings of the National Academy of Sciences 107 (2010) 18243–18250.
[14] X. Liu, Y. Shi, C. Yu, C. Gao, T. Yang, C. Liang, Y. Shi, Understanding in-situ programming for smart home automation, Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 7 (2023). URL: https://doi.org/10.1145/3596254. doi:10.1145/3596254.
[15] C. Hummels, J. Van Dijk, Seven principles to design for embodied sensemaking, in: Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, 2015, pp. 21–28.
[16] C. Remy, L. MacDonald Vermeulen, J. Frich, M. M. Biskjaer, P. Dalsgaard, Evaluating creativity support tools in HCI research, in: Proceedings of the 2020 ACM Designing Interactive Systems Conference, 2020, pp. 457–476.
[17] E. Wolf, S. Klüber, C. Zimmerer, J.-L. Lugrin, M. E. Latoschik, "Paint that object yellow": Multimodal interaction to enhance creativity during design tasks in VR, in: 2019 International Conference on Multimodal Interaction, 2019, pp. 195–204.
[18] T. Boren, J. Ramey, Thinking aloud: Reconciling theory and practice, IEEE Transactions on Professional Communication 43 (2000) 261–278.
[19] M. I. Posner, Orienting of attention: Then and now, Quarterly Journal of Experimental Psychology 69 (2016) 1864–1875.
[20] B. Laugwitz, T. Held, M. Schrepp, Construction and evaluation of a user experience questionnaire, in: HCI and Usability for Education and Work: 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, USAB 2008, Graz, Austria, November 20-21, 2008, Proceedings 4, Springer, 2008, pp. 63–76.
[21] A. F. Blackwell, C. Britton, A. Cox, T. R. Green, C. Gurr, G. Kadoda, M. S. Kutar, M. Loomes, C. L. Nehaniv, M. Petre, et al., Cognitive dimensions of notations: Design tools for cognitive technology, in: Cognitive Technology: Instruments of Mind: 4th International Conference, CT 2001, Coventry, UK, August 6–9, 2001, Proceedings, Springer, 2001, pp. 325–341.
[22] R. Holwerda, F. Hermans, A usability analysis of blocks-based programming editors using cognitive dimensions, in: 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), IEEE, 2018, pp. 217–225.
[23] E. Cherry, C. Latulipe, Quantifying the creativity support of digital tools through the creativity support index, ACM Transactions on Computer-Human Interaction (TOCHI) 21 (2014) 1–25.
[24] G. Fischer, User modeling in human–computer interaction, User Modeling and User-Adapted Interaction 11 (2001) 65–86.
[25] J. Urban Davis, F. Anderson, M. Stroetzel, T. Grossman, G. Fitzmaurice, Designing co-creative AI for virtual environments, in: Proceedings of the 13th Conference on Creativity and Cognition, C&C '21, Association for Computing Machinery, New York, NY, USA, 2021. URL: https://doi.org/10.1145/3450741.3465260. doi:10.1145/3450741.3465260.