=Paper=
{{Paper
|id=Vol-3704/paper9
|storemode=property
|title=Prototyping Augmented Reality Experiences Using Virtual and Augmented Reality
|pdfUrl=https://ceur-ws.org/Vol-3704/paper9.pdf
|volume=Vol-3704
|authors=Álvaro Montero,Telmo Zarraonandia,Paloma Díaz,Ignacio Aedo
|dblpUrl=https://dblp.org/rec/conf/realxr/MonteroZ0A24
}}
==Prototyping Augmented Reality Experiences Using Virtual and Augmented Reality==
Álvaro Montero1,∗, Telmo Zarraonandia1, Paloma Díaz1 and Ignacio Aedo1
1 Universidad Carlos III de Madrid, Department of Computer Science and Engineering, Leganés, 28911, Spain
Abstract
Authoring tools for augmented reality (AR) aim to enable users with no programming skills to create AR
experiences. The dominant approach uses mobile-based AR technology. However, some authors have
proposed VR-based authoring tools allowing the remote creation of AR experiences in a virtual replica of
the space. In this paper we describe an exploratory study that compared these two approaches with 10
participants. No significant differences in effort or user satisfaction were found, and VR authoring was even slightly faster. These results suggest that VR authoring could be a complementary alternative to mobile-based
AR under some conditions, particularly for rapid prototyping and when accessing the physical site to
augment is challenging.
Keywords
Augmented Reality, Virtual Reality, Prototyping, End-User Development, Authoring Tools
1. Introduction
This research advocates for enhancing Augmented Reality (AR) prototyping processes to promote its adoption by supporting alternative editing scenarios. Most AR authoring tools implement an approach in which the user superimposes virtual content onto the image of the real environment captured by a device, often their own mobile phone. This method requires the user’s physical presence in the space to augment. However, this is not always feasible or
sustainable, especially if the augmentation follows an iterative process that needs to assess
several prototypes in realistic scenarios to improve the user experience. Some authors have
proposed an alternative authoring method utilizing immersive Virtual Reality (VR) technology
[1, 2, 3]. Equipped with a VR headset, users can create AR experiences within a virtual replica
of the target space, enabling remote augmentation. Users interact with VR content through
controllers, mimicking real-life object manipulation.
Apart from some successful use cases, to the best of our knowledge, there are no comparative studies between VR and AR authoring for AR experiences. Such studies are essential to determine whether VR authoring matches the efficiency of AR authoring without adding complexity.
To address this gap, we implemented a system that permits authoring AR experiences using
both VR and mobile-based AR. We conducted a comparative study between the two authoring
RealXR’24: Prototyping and Developing Real-World Applications of Extended Reality, June 03–04, 2024, Genoa, Italy
∗ Corresponding author.
ammontes@inf.uc3m.es (Á. Montero); tzarraon@inf.uc3m.es (T. Zarraonandia); mpaloma.diaz@uc3m.es (P. Díaz); ignacio.aedo@uc3m.es (I. Aedo)
ORCID: 0000-0002-2511-9986 (Á. Montero); 0000-0003-3574-0984 (T. Zarraonandia); 0000-0002-9493-7739 (P. Díaz); 0000-0001-5819-0511 (I. Aedo)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
approaches. Our findings suggest that users do not perceive significant differences between both
approaches in terms of mental and physical effort, satisfaction, and effectiveness. Moreover,
users were slightly faster in completing the AR authoring task when using VR. Despite their
preliminary nature, these results suggest that VR authoring can serve as a complementary alternative to the AR authoring approach. This is especially useful when employing an iterative process in which interacting with the real site each time a change is required could be difficult, costly, and less sustainable.
The paper proceeds as follows: Section 2 reviews VR authoring for AR, Section 3 outlines sys-
tem design, Section 4 presents empirical study results, and the paper concludes with discussion
and future research directions.
2. Related Works
One of the earliest proposals of immersive VR authoring for AR is CAVE-AR [1], whose most
distinct characteristic is that the authoring process is supported by CAVE-VR technology. VR-
based AR authoring has also been employed along with IoT technology to create augmented
smart environments, as described by [2] and [3]. Both systems use 2D image markers to
align virtual world positions with real-world locations. In the case of [3], prototyping is
complemented with SLAM technology. These three cases support displaying the augmented
scene using standard mobile devices. Additionally, it is possible to find systems that use this
approach to create augmentations for the Microsoft HoloLens device, such as the ScalAR system
[4] and Corsican Twin [5]. In all these cases, the systems were evaluated through use cases or, at most, by gathering user feedback on the usability and overall experience of the system. Despite these valuable contributions, there is no comparison of the two approaches, VR versus AR authoring, that would help understand the potential and limitations of each.
3. The VR/AR Authoring System
To support the study, we developed an AR authoring tool with two different modules (Fig. 1):
SimulAR, which implements the VR authoring approach, and InSituAR, which implements the
AR authoring approach.
Both tools implement a similar editing process, provide the same set of menus for the
operations, access to the same virtual content repository and present the same look and feel.
However, SimulAR uses immersive VR technology to create and edit the AR experience within a virtual replica of the space, generated through photogrammetry: 104 images of the room were shot with a reflex camera and then processed with the Agisoft Metashape software. The user interacts with the replica using the Oculus Rift and its controllers. In contrast, in InSituAR the editing process
is done with a mobile phone, and the interaction is mediated by the device screen. In both cases
the AR experience is displayed using a mobile phone with TANGO technology, which is used to
recognize the environment to augment and display the virtual content at specific positions.
Figure 1: Architecture of the authoring system
3.1. The Authoring Process
Firstly, the user browses a Virtual Object Gallery to select the object to be placed in the real
or simulated environment. Once the object is positioned within the scene, the user can adjust
its properties using the Action Menu. This menu offers options to modify the size, orientation,
and position, as well as assign predefined behaviors or remove the object from the scene. In
SimulAR (Fig. 2 left), users perform this process while immersed in a virtual replica of the
environment using a VR headset and controllers. They navigate the environment by walking or
using joystick controls, and interact with menus and objects using a ray pointer. To adjust object
orientation and size, users select the desired operation from the editing menu and manipulate
the controllers accordingly by rotating or moving them closer or further away. In InSituAR
(Fig. 2 right), the authoring process takes place directly at the augmentation site using a mobile
phone. Users interact with menus and options by tapping on the device’s screen. To add virtual
content to the scene, users point the device’s camera at the desired location and use their fingers
to place it by dragging and dropping elements from the Virtual Object Gallery Menu. Rotation
and scaling of objects are performed by swiping and pinching/spreading gestures, respectively.
Once the user finishes editing the scene, the design is saved in the server-broker, and it is ready
to be displayed in the AR player.
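The paper does not specify the format in which a finished scene design is stored on the server-broker. As a rough illustration only, a minimal design with the properties editable through the Action Menu could be serialized as follows (all class and field names here are hypothetical, not taken from the system):

```python
import json
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class PlacedObject:
    # Hypothetical record: an object picked from the Virtual Object Gallery,
    # with the properties the Action Menu can edit.
    gallery_id: str
    position: tuple                # (x, y, z) in the shared room coordinate frame
    rotation: tuple                # orientation quaternion (w, x, y, z)
    scale: float = 1.0
    behavior: Optional[str] = None # optional predefined behavior

@dataclass
class SceneDesign:
    room_id: str
    objects: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the design for upload to the server-broker."""
        return json.dumps(asdict(self))

# Example: a design with one chair placed in the room.
design = SceneDesign(room_id="meeting-room")
design.objects.append(PlacedObject("chair-01", (1.0, 0.0, 2.5), (1.0, 0.0, 0.0, 0.0)))
```

A shared, device-independent representation of this kind is what allows the same design to be edited in SimulAR or InSituAR and then rendered by the AR Player.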
Figure 2: Authoring process using SimulAR (left) and InSituAR (right)
3.2. Displaying the AR Experience
The AR Player module retrieves the AR scene design from the server and displays it on a
mobile device with TANGO technology. Prior to this, environment scanning is necessary using
TANGO’s area learning technology. The system matches the generated area description (ADF) file with the one on the server, which contains the environment description and virtual content information. The
motion tracking system determines the device’s position and orientation in the real world,
establishing the 3D coordinate origin and quaternion identity. Using this data, it calculates the
size and position for displaying augmented objects on the device screen, creating the illusion of
real-world placement.
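The paper does not give the actual placement computation. As a sketch of the kind of transform involved, an object's world-space position can be expressed in the device's local frame from the tracked pose (position plus orientation quaternion), after which standard camera projection yields its on-screen size and position; the function names below are illustrative, not the system's API:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    # Quaternion rotation identity: v' = v + 2*u x (u x v + w*v)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def world_to_device(obj_pos, device_pos, device_quat):
    """Express a world-space object position in the device's local frame,
    using the inverse (conjugate) of the device orientation quaternion."""
    w, x, y, z = device_quat
    conj = (w, -x, -y, -z)  # inverse of a unit quaternion
    return quat_rotate(conj, np.asarray(obj_pos) - np.asarray(device_pos))
```

For example, with the device at the origin in its initial orientation (identity quaternion), an object's device-frame position equals its world position, which matches the text's choice of the initial pose as the 3D coordinate origin.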
4. Comparative Study Between VR Authoring and AR Authoring
To evaluate and compare the outcomes of the VR and AR authoring approaches, we conducted a
comparative empirical study. Each participant in the experiment developed an AR scene using
the two approaches. We aimed to gain insights into differences in (1) the amount of effort
required, (2) the user satisfaction and (3) the time required. 10 volunteers (3 females) with ages
ranging from 20 to 37 years (24.8±5.4) were recruited. The group included people with little or
no previous experience with VR (6 participants) and AR (5 participants) technologies, and some
experts (2 participants). Participants utilized an Oculus Rift CV1 Head-Mounted Display (HMD)
and Oculus Touch Controllers for VR authoring tasks. For AR authoring, an Asus Zenfone AR
with Google Tango was employed.
The authoring activity involved two tasks: Free Design (T1) and Reproduce Design (T2). In
the Free Design task participants were tasked with augmenting a room by placing nine virtual
objects from a gallery of objects. The objects included home appliances, furniture, and panels
such as a chair, a fire alarm button, a television, and an emergency exit panel. The objects were
deliberately chosen to challenge participants to manipulate objects of varying sizes, ranging
from too small to too large. Participants could determine the final position and size of each
object within the room. In the Reproduce Design task the participant was instructed to place a
virtual chair in the room. The goal was to align the chair’s location, size, and orientation as
closely as possible to a reference picture of the chair that was previously shown.
The augmentation location was a university meeting room of approximately 45 square
meters. The virtual replica of the room for VR authoring tasks was generated via photogramme-
try, using 104 images captured with a reflex camera. The Agisoft Metashape software processed
these images to create a 3D model of the room.
The workload assessment in the study employed the NASA Task Load Index (NASA TLX)
questionnaire. Additionally, participants were given a questionnaire to evaluate their experience.
They rated on a Likert scale their satisfaction when performing basic editing operations on virtual objects: selecting, moving, rotating, and resizing. They also assessed the difficulty,
accuracy, and ease of learning. Furthermore, an open-ended question allowed participants to
provide justifications and suggestions. The completion time for the second authoring task (T2)
was measured, while the time for the first task (T1) was not considered, as participants had the
freedom to create their designs using existing virtual objects.
Table 1
Means and standard deviations of the NASA TLX (left) and experience (right) questionnaires

NASA TLX                  VR          AR
Mental Demand (MD)        33.5±17.5   27.5±14.8
Physical Demand (PD)      22.5±17.2   31.0±17.6
Temporal Demand (TD)      31.5±13.8   37.5±14.2
Own Performance (OP)      22.5±19.9   20.0±13.1
Effort (EF)               32.0±18.6   30.0±16.3
Frustration (FR)          26.5±22.9   31.0±20.7
Overall Workload (RTLX)   28.0±18.3   19.5±16.5

EXPERIENCE                VR          AR
Q1 - Select               6.2±1.03    5.7±1.34
Q2 - Move                 6.0±1.15    5.0±1.33
Q3 - Rotate               6.0±0.67    5.4±1.84
Q4 - Scale                5.6±1.58    5.6±1.35
Q5 - Difficulty           2.3±1.25    2.8±0.63
Q6 - Accuracy             4.7±2.05    5.0±1.76
Q7 - Ease to learn        6.3±0.95    6.0±0.82
4.1. Results and Discussion
The preliminary results suggest that immersive VR authoring can support the creation of AR experiences as effectively and satisfactorily as the AR approach. The level of physical and mental effort required (Table 1) was similar for the two approaches: the differences between the mean scores were very small for all the factors, the highest being for Physical Demand (8.5 points out of 100). The responses to the questionnaire on the experience show that the level of satisfaction was between 5 and 6.3 for the four basic interaction tasks (Q1 to Q4), for both authoring approaches. The ratings for Difficulty to Use (Q5), Accuracy (Q6), and Ease of Learning to Use the tool (Q7) were also very similar, with the highest difference being 0.5 points for Difficulty to Use. Finally, the time taken to complete the second task was shorter for VR authoring (54.5±30.2 seconds vs. 75.1±39.6 seconds), showing a statistically significant difference between the two approaches (p=0.03).
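The paper does not state which statistical test produced the reported p-value. For a within-subjects design like this one, a paired comparison of per-participant completion times would compute a t statistic as sketched below; the sample data is made up purely for illustration and is not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(sample_a, sample_b):
    """Paired t statistic for two within-subject measurements.
    The p-value is then read from a t distribution with n-1
    degrees of freedom (e.g. via scipy.stats.ttest_rel)."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Made-up completion times (seconds) for three participants, VR vs. AR:
t = paired_t([50.0, 60.0, 40.0], [70.0, 80.0, 65.0])  # t = -13.0
```

A large-magnitude negative t here would indicate that the first condition (VR) was consistently faster across participants.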
Even though the number of participants is not high enough to provide sound conclusions,
these results suggest that VR technology might be a viable and complementary alternative to AR
authoring tools. The results related to mental and physical effort and overall user satisfaction
are particularly noteworthy because most users had limited or no prior experience with VR
technology, while they extensively use mobile phones in their daily activities. It is worth
mentioning here that after a short training session, participants could use the VR technology
as satisfactorily as the mobile AR. The smaller time taken for designing the AR scene in VR
mode might be influenced by different issues. Firstly, the joystick on the VR controller enables
faster movement within the environment compared to physically walking in AR authoring
mode. Additionally, the wider field of view provided by devices like the Oculus Rift reduces the
need for users to check multiple viewpoints. Moreover, the VR pointer allows manipulation of
objects from a greater distance compared to mobile AR authoring, where users may need to
move closer due to the limited range of the mobile depth camera. However, AR still provides a more realistic perspective of the final experience, so part of the prototyping would still need to be done in AR to adjust the scene to the real conditions.
It is important to note here that we do not imply that VR authoring is a better approach. Indeed,
one major drawback when compared to AR authoring is the requirement of creating a virtual
model of the environment to augment, which is unnecessary in AR authoring. Additionally,
VR and AR authoring are not mutually exclusive. Both approaches can be used together, for
instance, the bulk of the AR scene can be created in VR mode to save time when placing and
manipulating several objects, followed by refining the final experience in AR mode. As long as the physical space can be simulated realistically, this mixed approach could make AR prototyping more sustainable, as a great part of the work can be done away from the physical space, supporting rapid prototyping cycles and postponing the final adjustments in the real space until the end of the
process. Moreover, new immersive devices like the Meta Quest 3 might simplify this approach
by enabling interaction in mixed environments, and facilitating the creation of the virtual replica
mesh through their scanning features.
5. Conclusions and Future Work
The main contribution of this study is a comparison between two approaches for creating AR
experiences: VR and mobile-based AR authoring. The initial findings indicate that VR authoring
could be a viable and complementary alternative option to mobile AR authoring, as users did
not perceive it as more demanding in terms of effort and reported similar levels of satisfaction.
When prototyping complex AR experiences in non-accessible physical sites, recreating the
physical space in VR might be a sustainable and valid way for rapid prototyping. Our current
work aims to define a hybrid method for prototyping AR experiences. This method will leverage
the best features of both authoring approaches at different stages of the authoring process.
Additionally, it will support the combination of these techniques to facilitate the collaborative
creation of AR scenes, both synchronously and asynchronously.
Acknowledgments
This work is supported by the Spanish State Research Agency (AEI) under grant Sense2Make-
Sense (PID2019-109388GB-I00).
References
[1] M. Cavallo, A. G. Forbes, CAVE-AR: A VR authoring system to interactively design, simulate, and debug multi-user AR experiences, in: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), IEEE, 2019, pp. 872–873.
[2] B. Soedji, J. Lacoche, E. Villain, Creating AR applications for the IoT: a new pipeline, in: 26th ACM Symposium on Virtual Reality Software and Technology, VRST ’20, ACM, 2020, pp. 1–2. doi:10.1145/3385956.3422088.
[3] J. Lacoche, E. Villain, Prototyping context-aware augmented reality applications for smart environments inside virtual reality, in: GRAPP 2022, 2022.
[4] ScalAR: Authoring semantically adaptive augmented reality experiences in virtual reality, in: CHI Conference on Human Factors in Computing Systems, CHI ’22, ACM, 2022, pp. 1–18.
[5] A. Prouzeau, Y. Wang, B. Ens, W. Willett, T. Dwyer, Corsican twin: Authoring in situ augmented reality visualisations in virtual reality, in: Proceedings of the International Conference on Advanced Visual Interfaces, AVI ’20, ACM, 2020, pp. 1–9.