Extended Reality: Exploring End User Development
Capabilities
Valentino Artizzu1
1
    Department of Mathematics and Computer Science, University of Cagliari, Cagliari, Italy


                                         Abstract
                                         This paper outlines the initial efforts of the Author, whose research focuses on developing
                                         solutions that assist end-users in creating Extended Reality experiences even if they lack
                                         programming or 3D modelling expertise. The current research of the Author involves creating
                                         a system for vocational training creators to produce procedures that function as either
                                         training or guidance tools on various Extended Reality devices, as well as an execution
                                         engine that can adapt each procedure to the environment context of the user.

                                         Keywords
                                         Extended Reality, Mixed Reality, Augmented Reality, Virtual Reality, End User Development, Natural
                                         Language, Rule System, Event-Condition-Action Rules, Vocational Training




1. Introduction
Consumer Extended Reality (XR) devices, such as Virtual, Augmented, and Mixed Reality, have
gained popularity in recent years, offering immersive experiences mainly for gaming but also
in other fields such as education and work tools. However, as with other technologies in the
past, the more users adopt XR, the more they will demand control over content creation. At
present, it is still challenging to enable end-users to create XR content, as it requires a team
of experts in 3D modelling, code development, design, etc. Additionally, current development
cycles are not suitable for end-users with evolving needs over time, as involving a professional
developer is not always feasible.
   The goal of the PhD research discussed in this paper is to find solutions that support end-users
in creating Extended Reality experiences without requiring programming experience or 3D
modelling skills. The paper will discuss the current state of the research, starting with the
current literature.


2. Related Work
In this section, the work relevant to the PhD research of the Author is reviewed, highlighting
the limitations of the current literature.


IS-EUD 2023: 9th International Symposium on End-User Development, 6-8 June 2023, Cagliari, Italy
valentino.artizzu@unica.it (V. Artizzu)
ORCID: 0000-0003-0263-2434 (V. Artizzu)
                                       © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org
   The PhD research of the Author falls under the scope of End User Development (EUD) [1]
approaches, and the literature suggests that most available work for XR experiences is limited
to defining static scenes or multimedia content overlays.
   Fungus [2] is an open-source Unity extension for developing visual narratives that uses
flowcharts to construct visual novels; however, flowcharts require a large amount of screen
space, which reduces end-user comprehension [3].
   Fanni et al. [4] propose PAC-PAC, a tool for creating point-and-click games through a
web-based authoring interface, where behaviour is defined through natural-language
Event-Condition-Action (ECA) rules. This work shares its roots with the previous research of
the Author, in which the rule-system guidelines were applied to a VR-focused derivative,
ECARules4All [5]. That project extended the foundations of PAC-PAC to accommodate more
complex interactions in full VR environments and defined three roles: the Template Builder,
an expert developer who creates almost-complete environments (the templates); the End User
Developer, a user with average computer skills who customises a template into the final
product; and the Final User, who uses that product.
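The ECA approach described above can be illustrated with a minimal sketch. All names here are hypothetical, chosen only to show the idea of pairing an event with a condition and a list of actions; they are not the actual ECARules4All API.

```python
# Minimal Event-Condition-Action (ECA) rule sketch.
# Names are illustrative, not the actual ECARules4All syntax.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    event: str                                    # e.g. "lever_pressed"
    condition: Callable[[dict], bool] = lambda s: True
    actions: list = field(default_factory=list)   # callables run on the scene state

class RuleEngine:
    def __init__(self):
        self.rules: list[Rule] = []

    def add(self, rule: Rule):
        self.rules.append(rule)

    def raise_event(self, event: str, state: dict):
        """Fire every rule whose event matches and whose condition holds."""
        for rule in self.rules:
            if rule.event == event and rule.condition(state):
                for action in rule.actions:
                    action(state)

# "When the player presses the lever, if the door is unlocked, open it."
engine = RuleEngine()
engine.add(Rule(
    event="lever_pressed",
    condition=lambda s: not s["door_locked"],
    actions=[lambda s: s.update(door_open=True)],
))
state = {"door_locked": False, "door_open": False}
engine.raise_event("lever_pressed", state)
print(state["door_open"])  # True
```

In a natural-language authoring interface, the End User Developer would write the rule as a sentence and the tool would translate it into a structure of this kind.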
   Ariano et al. [6] present an approach to making the creation of smart home automations
more engaging and accessible for non-professional developers. The approach uses augmented
reality technology to provide users with more relevant, context-sensitive representations of
connected sensors and objects. Although they found some interesting insights in their study,
it was limited to smart home automations; the PhD research of the Author aims to extend the
idea to a more general context, where any field can benefit from the EUD paradigm.
   The vocational training field is another area related to the work of this paper. Many
contributions can be found, such as work related to weld training [7] and construction
management [8], showing that learning in an immersive environment can be beneficial to the
end-user. However, none of them allows the end-user to customize the experience once the
developer has built it.
   Regarding vocational training for AR, Chiang et al. [9] conducted a systematic review and
found that AR training has positive effects on vocational training outcomes.
   XOOM [10] is a tool designed for non-ICT people that lets them create web-based VR
applications using 360° videos and superimposing content onto the virtual scenes. The proposed
work aims to expand one of the possibilities the authors envisioned for XOOM, namely
vocational training, using complete VR experiences, with a focus on generalizing the training
definition instead of creating universal rules for scene interaction.


3. Current State of Research
Upon examining the current state of the art in Extended Reality (XR) and vocational training,
it is evident that only a limited number of studies attempt to enable end-users to define XR
experiences without relying on professional expertise. Additionally, there is no established
method for creating or updating XR-based vocational training experiences by the end-users
themselves, as the process tends to be monolithic. Consequently, the proposed research aims to
bridge this gap.
   To involve end users in the development process, the study proposes the creation of a system
that allows them to define vocational training procedures using a specialised authoring tool.
The Author envisions building the interface of the tool with a rule-authoring approach [11],
where for every step the End User Developer can define the behaviour of the objects that must
interact with the user (or with each other) in order to
proceed to the following step. Once a procedure has been edited, the authoring tool will
produce two serialised files: a procedure file and a context file. The former will contain the
list of steps defined by the End User Developer, together with the main modality and a text
description for each step; the latter will contain, for each step, zero or more alternative
modalities (with their respective descriptions) that will be activated if a certain user-defined
or built-in context event is raised during the execution of the procedure. These files can be
transferred
to any device compatible with the implementation, covering the entire XR spectrum (e.g., as a
VR training experience or an AR guidance tool). To accomplish this, the project also involves
developing an execution engine that interprets procedure steps and translates them into XR
interactions. The novelty of the system lies in its ability to adapt to the environment of the user
and modify the interaction modality (e.g., touch, controller click, voice, gaze-and-commit, etc.)
based on the current context.
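As a sketch of what the two serialised files might contain, the procedure file lists the steps with their main modality, while the context file maps each step to alternative modalities keyed by context events. The field names and the use of JSON below are assumptions for illustration, not the actual format produced by the authoring tool.

```python
import json

# Hypothetical contents of the two serialised files; field names are
# illustrative, not the actual format produced by the authoring tool.
procedure = {
    "name": "Welding torch setup",
    "steps": [
        {"id": 1, "description": "Pick up the torch", "modality": "controller-click"},
        {"id": 2, "description": "Confirm the gas valve is open", "modality": "voice"},
    ],
}
context = {
    "steps": {
        # Step 2 falls back to gaze-and-commit if the environment is too
        # loud for voice recognition.
        "2": [{"event": "loud-environment",
               "modality": "gaze-and-commit",
               "description": "Look at the valve and pinch to confirm"}],
    },
}

# Both structures serialise to plain text, so they can be transferred to
# any XR device that embeds the execution engine.
procedure_file = json.dumps(procedure, indent=2)
context_file = json.dumps(context, indent=2)
print(json.loads(procedure_file)["steps"][1]["modality"])  # voice
```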
   The components of the project include (as depicted in Figure 2):

    • An authoring tool for creating procedure and context files that can be used or transferred
      to other devices;
    • A Procedure Engine that reads and executes the steps of the procedure file based on user
      interactions (e.g., progressing to the next step upon button click);
    • A Context Engine that monitors potential issues detected by dedicated sensors (e.g., loud
      environment, unrecognizable user voice, user preferences, etc.) and adjusts the interaction
      modality in real-time, either during task execution or at task change if the original next
      step is incompatible with the current context. In particular, this engine achieves its
      adaptability by linking to the sensor data of the device executing the system and
      triggering the context changes listed in the context file when a sensor event from the
      device is detected.
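A possible shape for the core of the Context Engine, assuming the context file structure described earlier, is the following sketch (all names are hypothetical): on a sensor event, it looks up an alternative modality for the current step and switches to it.

```python
# Sketch of a Context Engine: when a sensor event arrives, look up an
# alternative modality for the current step in the context file and
# switch to it. Names are hypothetical, not the actual implementation.
class ContextEngine:
    def __init__(self, context_file: dict):
        self.alternatives = context_file["steps"]
        self.current_step = None
        self.active_modality = None

    def set_step(self, step: dict):
        self.current_step = step
        self.active_modality = step["modality"]   # main modality by default

    def on_sensor_event(self, event: str):
        """Triggered by the device's sensors (e.g. a loud environment)."""
        for alt in self.alternatives.get(str(self.current_step["id"]), []):
            if alt["event"] == event:
                self.active_modality = alt["modality"]
                return self.active_modality
        return self.active_modality               # no alternative: keep current

engine = ContextEngine({"steps": {"2": [
    {"event": "loud-environment", "modality": "gaze-and-commit"}]}})
engine.set_step({"id": 2, "description": "Confirm the valve", "modality": "voice"})
print(engine.on_sensor_event("loud-environment"))  # gaze-and-commit
```

On a real device, `on_sensor_event` would be driven by the sensor-adaptation APIs rather than called directly.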

To ensure compatibility with any device, the Procedure Engine and Context Engine will be
enclosed in an importable package that developers can incorporate into their projects. This
package will also contain the authoring tool, although developers can choose not to import it
as it is not essential for the base features to function correctly. The package will come with a
sample interface for swift implementation of the various modes of the system and a set of APIs
for adapting the sensor data of the target device into a format usable by the Context Engine.
Additionally, the package will include a mechanism for providing feedforward [12] to end-users,
allowing them to override the decision of the system if they prefer.
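The sensor-adaptation APIs mentioned above might, as a sketch, take the form of a small adapter interface that maps device-specific readings onto the context events the Context Engine understands. Everything below (class names, the threshold, the event string) is an assumption for illustration.

```python
# Hypothetical adapter layer: each target device implements one adapter
# that converts raw sensor readings into the engine's context events.
from abc import ABC, abstractmethod

class SensorAdapter(ABC):
    @abstractmethod
    def poll(self) -> list[str]:
        """Return the context events currently raised by this device."""

class MicrophoneAdapter(SensorAdapter):
    """Maps a raw noise level (dB) onto the 'loud-environment' event."""
    def __init__(self, noise_threshold_db: float = 70.0):
        self.noise_threshold_db = noise_threshold_db
        self.level_db = 0.0          # would be fed by the device driver

    def poll(self) -> list[str]:
        if self.level_db > self.noise_threshold_db:
            return ["loud-environment"]
        return []

mic = MicrophoneAdapter()
mic.level_db = 85.0                  # simulated loud reading
print(mic.poll())  # ['loud-environment']
```

The design keeps the Context Engine device-agnostic: porting to a new headset only requires writing adapters, not touching the engine.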
   The research methodology is structured as follows:

    • Initial problem identification;
    • State-of-the-art review on Extended Reality and Vocational Training;
    • Design, prototyping, and implementation of user-friendly tools for supporting end-users
      in the authoring process and trainees in the learning process;
Figure 1: The current prototype interface. From top left to bottom right: The base interface, in its
expanded version; the feedforward prompt; the modality choice list; the interface after a modality
override.


    • Validation of results through user tests and/or comparisons with the current state of the
      art.

The project is currently in the prototyping phase, with a preliminary functional version of
the training system implemented using Mixed Reality Toolkit 3 [13]. It currently features an
interface that displays the current step description and modality to use; the previous step, next
step, and alternatives for the current step can be accessed by pressing the appropriate button at
the bottom right side of the main panel (the "current task" panel, which remains constantly
visible throughout the interface). The context change can be activated by pressing predefined
keys on the keyboard, which simulate sensor signals. These events can also be viewed by
pressing the top right button of the main panel, which will show which event is currently
occurring through icons. During task execution, from a graphical point of view, the next step
will become the current step, and the task performed by the user will become the previous step.
By pressing the button on the top left of the main panel, the user can enable the feedforward
option, which notifies the user of the description of the next task and asks if they want to force
a modality, in which case the context change will be ignored until the task is completed. At
the end of the procedure, the user will be notified of the completion of the steps with a specific
message.
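The step progression and feedforward override just described can be summarised as a small state machine. This is a sketch with hypothetical names, not the actual MRTK3 implementation: completing a task shifts next to current and current to previous, and a forced modality pins the interaction until the task is completed.

```python
# Sketch of the prototype's step progression and feedforward override.
# Names are hypothetical, not the actual MRTK3 implementation.
class ProcedureRunner:
    def __init__(self, steps: list[dict]):
        self.steps = steps
        self.index = 0
        self.forced_modality = None   # set by the feedforward prompt

    @property
    def current(self):
        return self.steps[self.index]

    def force_modality(self, modality: str):
        """Feedforward override: pin a modality for the current task."""
        self.current["modality"] = modality
        self.forced_modality = modality

    def on_context_change(self, alt_modality: str):
        # Context changes are ignored while an override is active.
        if self.forced_modality is None:
            self.current["modality"] = alt_modality

    def complete_task(self):
        self.forced_modality = None   # the override lasts one task only
        if self.index < len(self.steps) - 1:
            self.index += 1
            return self.current["description"]
        return "Procedure completed"

runner = ProcedureRunner([
    {"description": "Pick up the torch", "modality": "controller-click"},
    {"description": "Open the valve", "modality": "voice"},
])
runner.force_modality("touch")
runner.on_context_change("gaze-and-commit")  # ignored: override is active
print(runner.current["modality"])            # touch
print(runner.complete_task())                # Open the valve
```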


4. Conclusions
Overall, the paper presents a solution for enhancing XR experiences through End User Develop-
ment principles, specifically in the context of vocational training. The proposed methodology
involves the use of an authoring tool that allows users to define training procedures and migrate
them throughout the XR spectrum. The system is also capable of adapting the procedures based
Figure 2: The proposed system structure and components


on the environment context of the user, using sensors and user preferences.
   It is worth noting that the research is currently being developed in collaboration with leading
European research groups and will, in the future, also involve the expertise of an Italian ICT
company focused on the valorization of cultural and environmental heritage.


5. University Doctoral Program and Context
The PhD project is led by Valentino Artizzu, a second-year PhD student in Computer Science
in the Department of Mathematics and Computer Science of the University of Cagliari (Italy),
starting from January 2022. The expected end of the program is December 2024. He obtained a
Master's Degree in Computer Science from the University of Cagliari in 2021, with a thesis on
the implementation of the rule engine that later became one of the main components of
ECARules4All [5], following a scholarship and the start of the PhD program. His main research
interests are XR technologies, Videogame Development and Design, and Internet of Things. He
works in the CG3HCI (Computer Graphics & Human Computer Interaction) research group,
founded by prof. Riccardo Scateni, who leads the Computer Graphics part of the group, and
later joined by prof. Lucio Davide Spano, who leads the Human Computer Interaction one.

5.0.1. Acknowledgments
This work is supported by the Italian Ministry of University and Research (MIUR) under the
PON program: The National Operational Program “Research and Innovation” 2014–2020 (PON
R&I).
References
 [1] H. Lieberman, F. Paternò, V. Wulf, End user development, volume 9, Springer, 2006.
 [2] Fungus games, https://fungusgames.com.
 [3] T. R. G. Green, M. Petre, Usability analysis of visual programming environments: a
     ‘cognitive dimensions’ framework, Journal of Visual Languages & Computing 7 (1996)
     131–174.
 [4] F. A. Fanni, M. Senis, A. Tola, F. Murru, M. Romoli, L. D. Spano, I. Blečić, G. A. Trunfio,
     PAC-PAC: end user development of immersive point and click games, in: End-User Development:
     7th International Symposium, IS-EUD 2019, Hatfield, UK, July 10–12, 2019, Proceedings 7,
     Springer, 2019, pp. 225–229.
 [5] V. Artizzu, G. Cherchi, D. Fara, V. Frau, R. Macis, L. Pitzalis, A. Tola, I. Blečić, L. D. Spano,
     Defining configurable virtual reality templates for end users, Proceedings of the ACM on
     Human-Computer Interaction 6 (2022) 1–35.
 [6] R. Ariano, M. Manca, F. Paternò, C. Santoro, Smartphone-based augmented reality for
     end-user creation of home automations, Behaviour & Information Technology 42 (2023)
     124–140. URL: https://doi.org/10.1080/0144929X.2021.2017482. doi:10.1080/0144929X.2021.2017482.
 [7] R. Stone, E. McLaurin, P. Zhong, K. Watts, Full virtual reality vs. integrated virtual reality
     training in welding (2013).
 [8] R. Sacks, A. Perlman, R. Barak, Construction safety training using immersive virtual
     reality, Construction Management and Economics 31 (2013) 1005–1017.
 [9] F.-K. Chiang, X. Shang, L. Qiao, Augmented reality in vocational training: A systematic
     review of research and applications, Computers in Human Behavior 129 (2022) 107125.
[10] F. Garzotto, M. Gelsomini, V. Matarazzo, N. Messina, D. Occhiuto, Xoom: An end-user
     development tool for web-based wearable immersive virtual tours, in: Web Engineering:
     17th International Conference, ICWE 2017, Rome, Italy, June 5-8, 2017, Proceedings 17,
     Springer, 2017, pp. 507–519.
[11] End-user development for personalizing applications, things, and robots, International
     Journal of Human-Computer Studies 131 (2019) 120–130. URL: https://www.sciencedirect.com/
     science/article/pii/S1071581919300722. doi:10.1016/j.ijhcs.2019.06.002. 50 years of the
     International Journal of Human-Computer Studies: reflections on the past, present and
     future of human-centred technologies.
[12] T. Djajadiningrat, K. Overbeeke, S. Wensveen, But how, donald, tell us how? on the
     creation of meaning in interaction design through feedforward and inherent feedback, in:
     Proceedings of the 4th conference on Designing interactive systems: processes, practices,
     methods, and techniques, 2002, pp. 285–291.
[13] Mixed Reality Toolkit 3 Developer Documentation - MRTK3, 2023. URL: https://learn.
     microsoft.com/en-us/windows/mixed-reality/mrtk-unity/mrtk3-overview/.