Towards Semantic Web Integration in Authoring Tools
for XR Educational Content Development
                                Alex Gabriel1
                                1
                                    Université de Lorraine, ERPI, F-54000 Nancy, France


                                               Abstract
                                               Interest in leveraging Augmented Reality (AR) and Virtual Reality (VR) for educational enhancement is
                                               growing rapidly, but widespread adoption faces significant challenges. Instructors, often not experts in
                                               immersive experience development and constrained by time, struggle to create impactful XR instructional
                                               content. This challenge underscores the need for accessible authoring tools to empower instructors to
efficiently create engaging XR content, facilitating broader adoption of VR and AR in education. Following
the Design Science Research paradigm, the development of a pedagogical XR content creation tool helped identify
practical challenges, revealing limitations in human-machine and machine-machine interactions. This
                                               article proposes research directions in human-machine interaction, knowledge engineering, and artificial
                                               intelligence to effectively address these challenges.

                                               Keywords
                                               content creation, extended reality (XR), semantic web, interoperability, authoring tool




                                1. Introduction
                                Extended reality (XR) technologies have emerged as a potent tool for training within the
                                manufacturing sector [1]. Immersive XR training demonstrates potential in enhancing worker
                                performance and fostering increased engagement [1]. XR comprises two distinct subcategories:
                                Augmented Reality (AR) and Virtual Reality (VR), each exhibiting its own continuum [2]. These
                                technologies have attracted considerable interest, particularly in the domain of training and
                                education. AR has attained a level of maturity conducive to meta-analyses regarding learning
                                outcomes [3, 4], while VR has undergone extensive exploration, leading to numerous literature
                                reviews concerning its educational applications [5, 6, 7]. Significantly, these technologies,
                                notably AR, are acknowledged for their capability in facilitating experiential and active learning
[8]. However, the widespread adoption of extended reality (XR) technologies faces a
significant obstacle: their technical complexity [9]. Addressing this challenge hinges
on the development of suitable authoring toolkits, which are instrumental in optimizing XR's
application in education. Based on the experience of developing an authoring tool for XR
instructional content creation that eliminates the need for programming skills, several
scientific obstacles have been identified. In the context of creating XR educational content and
using it for training purposes, it became necessary to work on the user experience for both
content creators and learners. However, reaching a satisfying user experience would require some

                                RealXR: Prototyping and Developing Real-World Applications for Extended Reality, June 4, 2024, Arenzano (Genoa), Italy
Email: alex.gabriel@univ-lorraine.fr (A. Gabriel)
ORCID: 0000-0002-3676-6417 (A. Gabriel)
                                             © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).




technical improvements. The rest of this paper presents how the semantic web, Artificial
Intelligence, and model-driven engineering could contribute to the development of tools, keeping
in mind the benefits for end-users such as teachers and learners. These reflections are based on
the development of the HELP XR authoring tool and its experimental use in a training context
with engineering students learning how to operate machinery.


2. Background
2.1. Authoring tools
Despite the acknowledged benefits of XR for training, these technologies require technical skills to be
implemented in a pedagogical context. A qualitative study with Saudi educators revealed their
awareness but lack of hands-on technical experience in educational XR [10]. It also indicated
instructors’ interest in minimal coding solutions for XR creation, yet noted that freely available
XR apps may offer limited educational value [10].
   This highlights the need for authoring tools; however, it remains difficult to navigate
the existing solutions. A primary issue is that many authoring tools support only one specific
technology: either AR or VR, but rarely both [11]. A second issue is the confusion surrounding
classifications of content creation, its use, and the level of expertise required [12], or even
the lack of reflection on content creation as a factor in the acceptance of XR technology [13].
   However, there is already a wealth of literature reviews on AR [8, 14], VR [15, 9] and XR [16]
authoring tools. This literature recurrently stresses that knowing how to program remains a
prerequisite for adoption [16, 10], even though scripting and GUI-based programming are also
expected [16, 8]. Beyond the ability to program behaviors and interactions, the positioning of
content in space and the sequencing of activities is another research topic, with immersive
approaches [17], non-immersive ones, and possibly a combination of both [18]. Despite an abundance
of literature on authoring tools, few have been the subject of user studies assessing their relevance [8].

2.2. Low-code programming
To address the problem of simplifying application creation for non-experts, low-code and no-
code platforms have become a topic in the scientific literature [19]. The development of this
programmatic approach aims to address challenges such as the shortage of skilled
developers and the automation of workflows to improve delivery time and quality while staying within
budget [19]. Low-Code Development Platforms (LCDPs) drastically reduce
manual coding, enabling faster deployment of applications through visual tools and efficient
data preparation to create multi-tier workflows [19]. They include declarative languages,
dynamic graphical user interfaces, and visual diagrams. One of the most well-known examples
of low-code visual programming is Scratch [20].
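To make the low-code idea concrete, the following minimal sketch (not tied to Scratch or any specific platform; block names and parameters are invented for illustration) models visual blocks as data that a small interpreter executes, so that a user composes behavior without writing code:

```python
# Minimal sketch of the low-code principle: behavior is assembled from
# predeclared blocks (data) rather than hand-written code.
# Block names and parameters are illustrative, not from any real platform.

BLOCKS = {
    "show_text": lambda env, text: env["log"].append(f"TEXT: {text}"),
    "play_video": lambda env, url: env["log"].append(f"VIDEO: {url}"),
    "wait_for": lambda env, event: env["log"].append(f"WAIT: {event}"),
}

def run_program(program):
    """Execute a 'visual' program given as a list of (block_name, kwargs) pairs."""
    env = {"log": []}
    for name, kwargs in program:
        BLOCKS[name](env, **kwargs)
    return env["log"]

# A program a non-programmer could assemble by dragging blocks:
steps = [
    ("show_text", {"text": "Press the green start button"}),
    ("wait_for", {"event": "button_pressed"}),
    ("play_video", {"url": "safety_intro.mp4"}),
]
print(run_program(steps))
```

The design choice here is the one LCDPs rely on: since the program is plain data, it can equally be rendered as a diagram, validated, or stored in a database.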
   Although the low-code literature is growing, research seems to focus mainly on technical
aspects rather than social ones [19]. It is precisely these social aspects that would shed light
on usability problems during initial usage and testing [19]. The need for further research into
the usability of low-code tools was also highlighted by another literature review
based on 207 articles [21]. The concept of "low-code" is often accompanied by the notions of
model-driven development (MDD) and model-driven engineering (MDE) [21].

2.3. Programming with Large Language Models
As for programming support, Large Language Models (LLMs) are currently being extensively
explored. The field of LLMs has surged in popularity since the release of ChatGPT. In recent
years, a plethora of models have emerged, often exhibiting exceptional overall performance
[22]. Consequently, a wide array of applications has appeared, including the use of these models
as assistants [23]. They can be integrated directly into Integrated Development
Environments (IDEs) through tools like GitHub Copilot, OpenAI Codex, or open-source alternatives
such as Ollama integrations. One critical factor influencing code quality is the quality of the prompt
used to generate the code [23]. To address this issue, domain-specific languages are emerging,
aiming to structure the process and ensure prompt quality [24]. One might question the need
for low-code and model-driven approaches if natural language is sufficient to produce code.
However, current research is focusing on simplifying the process of writing prompts using
block programming, thereby improving the control and efficiency of LLM use [25].
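The block-based prompting idea can be sketched as follows: form-like fields are assembled into a constrained prompt, so the structure (and thus the prompt quality) is controlled by the tool rather than the user. This is a hypothetical illustration in the spirit of low-code LLM approaches [25]; the function and field names are invented, and no actual LLM call is made:

```python
# Hypothetical sketch: a structured prompt is built from fixed fields,
# so a non-expert never writes free-form prompts. Field names are
# illustrative, not from any real system.

def build_scenario_prompt(machine, steps, audience="engineering students"):
    """Assemble a constrained prompt for generating an XR training scenario."""
    lines = [
        "You are an assistant that writes step-by-step XR training scenarios.",
        f"Machine: {machine}",
        f"Audience: {audience}",
        "Expand each of the following numbered steps into a short",
        "instruction and the 3D anchor it refers to:",
    ]
    lines += [f"{i}. {s}" for i, s in enumerate(steps, 1)]
    return "\n".join(lines)

prompt = build_scenario_prompt(
    machine="CNC lathe",
    steps=["Power on the machine", "Load the workpiece", "Start the program"],
)
print(prompt)
```

In a real assistant, the returned string would be sent to the model; the point of the sketch is that the tool, not the user, guarantees the prompt's structure.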


3. Methodology
On the one hand, there is ongoing work on authoring tools for creating XR content, which often
entail limitations due to the requirement for coding skills. On the other hand, efforts are being
made to simplify and streamline the software development process through low-code/no-code
approaches and, more broadly, model-driven development, which can potentially benefit from
LLM-based assistants. These considerations prompt us to question the future prospects of authoring
tools for XR content creation. Drawing upon the Design Science Research Paradigm and action
research, this reflection is based on experiences from the development and experimentation
involving learners and instructors.

3.1. Experimentation
The initial context for this research was to train students in the use of production machines
through AR and VR simulations as part of a design course. This training involved providing
step-by-step descriptions of machine procedures, along with visual indications of the actions to
be carried out and their locations. At the time, there was a lack of available tools for creating
content that could be used in both AR and VR environments.
   Inspired by model-driven development approaches, the creation of this content using the
Hybrid Extended Learning Platform (HELP XR) was designed to be achieved through an activity
diagram specifying different types of information, photos, videos, and actions required at each
stage. This information is linked to either the real machine or its 3D representation, depending
on the technology utilized.
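As a rough illustration of this content model (not HELP XR's actual data schema; all field names are invented), one step of such an activity diagram could be represented as a small record that is serialized once and consumed by both the AR and VR clients:

```python
from dataclasses import dataclass, field, asdict
from typing import List, Optional
import json

# Illustrative sketch only: one step of a training scenario, anchored
# either to the real machine (AR) or to its 3D representation (VR).
# Field names are invented, not HELP XR's actual schema.

@dataclass
class TrainingStep:
    step_id: int
    instruction: str                               # text shown to the learner
    anchor: str                                    # named location on machine/model
    media: List[str] = field(default_factory=list) # photos, videos
    next_step: Optional[int] = None                # sequential scenario

scenario = [
    TrainingStep(1, "Open the safety door", anchor="door_handle", next_step=2),
    TrainingStep(2, "Insert the workpiece", anchor="chuck",
                 media=["insert_demo.mp4"]),
]

# A single technology-neutral description can be served to AR, VR and
# WebXR clients, which resolve the anchor in their own environment.
payload = json.dumps([asdict(s) for s in scenario], indent=2)
print(payload)
```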
   This led to two experiments: the first focused on assessing the impact of the AR client on
learners, while the second aimed to evaluate the acceptability of the authoring tool among
instructors, specifically regarding their attitudes towards technology. The former experiment
involved 89 engineering students divided into two groups: one using AR-based learning and the
other receiving traditional instructor-led training. The latter experiment engaged 14 instructors
to assess the authoring tool.


4. Results
4.1. Instructor and learner experiment
The experimentation with learners revealed that students who learned with AR made as many
errors as students who learned with an instructor. However, it was noted that AR
learners exhibited a potential automation bias, as their task reproduction was slower compared
to learners with instructors.
   Regarding the experimentation with instructors, the evaluation of their interaction with the
authoring tools using UTAUT2 [26] and GCAS [27] resulted in an overall positive evaluation,
particularly in terms of "effort expectancy" and "facilitating conditions," regardless of their
attitudes towards technology. However, the "use behavior" factor is relatively low compared to
other factors, and the intention to use the tool is notably lower among individuals less inclined
towards digital technology. This means the system is relatively easy to use but not yet mature
enough to trigger a higher level of behavior change.

4.2. Development
Before delving into the technical aspects of the Hybrid Extended Learning Platform (HELP XR),
it is worth recalling the project's ambition: an online platform that simplifies the creation of
XR training content by instructors themselves. The initial use case focused on creating
machine tool training content accessible in AR, VR, and also in a non-headset form using
WebXR. The training content within the platform is interactive, allowing buttons to trigger
events and doors to be opened. Additionally, training can be conducted collaboratively with
multiple individuals in the same virtual environment. The HELP XR system consists of several
components (Figure 1). These components include an API, a web-based authoring tool, a WebXR
client, and specialized clients for different XR devices. At its core are the authoring tool web app
and the API, which enable content creation and data access across XR devices. The authoring
tool web app serves as the interface for creating training materials (through blocks in an activity
diagram as shown by Figure 2 left), uploading 3D models and multimedia files, and defining
artifact behavior as illustrated by Figure 2 right. The back-end includes an API and a database
for storing and processing training data. This API facilitates the import and reuse of 3D models
and multimedia files, and provides access to training information for XR devices.
   Two types of XR devices were used: Microsoft HoloLens 2 and Meta Quest 2, each with its
own client app developed in Unity for accessing training in AR or VR, respectively. Additionally,
the WebXR app addresses accessibility concerns by allowing access to content similar to a
computer video game, as well as immersive content when accessed with VR devices.
Figure 1: Schematic representation of the components of HELP XR




Figure 2: Screenshots of the HELP XR authoring tool for training scenario definition (left) and 3D model
behavior (right)


5. Discussion
The development of the tool and its in-situ evaluation with learners and instructors led to the
identification of both technical and conceptual challenges. Although the system allows for the
creation of XR content that is accessible through AR, VR, and web-based interfaces, a primary
challenge is the generalization of this interoperability. This development was inspired by the
ARLEM standard [28]; however, its implementation has been subject to interpretation and is
primarily designed for augmented reality. Drawing on the model of the Semantic Web, creating
a vocabulary (Linked Open Vocabulary) similar to schema.org could enhance interoperability
between systems. Furthermore, given the increasing prevalence of 3D content on the web, such
a vocabulary could facilitate the extraction of knowledge embedded within these environments
from a Linked Open Data perspective. Consequently, the process of transcribing the ARLEM
standard into an ontology has been initiated. Nonetheless, several technical aspects require
validation, and the standard must be harmonized with existing vocabularies [29, 30].
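To give an idea of what such a vocabulary could look like, the following sketch serializes a few classes and properties of a hypothetical ARLEM-inspired vocabulary as Turtle, using only standard-library string formatting. The namespace and term names are invented for illustration; they are not the official ARLEM or schema.org terms:

```python
# Illustrative sketch of an ARLEM-inspired Linked Open Vocabulary,
# serialized as Turtle. The "xrlem" namespace and its terms are
# hypothetical, not the official ARLEM or schema.org vocabulary.

PREFIXES = {
    "xrlem": "http://example.org/xrlem#",  # hypothetical namespace
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
}

TRIPLES = [
    ("xrlem:Activity", "a", "rdfs:Class"),   # a training scenario
    ("xrlem:Action", "a", "rdfs:Class"),     # one step of the scenario
    ("xrlem:Tangible", "a", "rdfs:Class"),   # physical or 3D anchor
    ("xrlem:hasAction", "rdfs:domain", "xrlem:Activity"),
    ("xrlem:hasAction", "rdfs:range", "xrlem:Action"),
    ("xrlem:attachedTo", "rdfs:domain", "xrlem:Action"),
    ("xrlem:attachedTo", "rdfs:range", "xrlem:Tangible"),
]

def to_turtle(prefixes, triples):
    """Serialize prefix declarations and triples as a Turtle document."""
    lines = [f"@prefix {p}: <{iri}> ." for p, iri in prefixes.items()]
    lines.append("")
    lines += [f"{s} {p} {o} ." for s, p, o in triples]
    return "\n".join(lines)

print(to_turtle(PREFIXES, TRIPLES))
```

Published as Turtle or JSON-LD, such a vocabulary would let different authoring systems exchange scenario descriptions and let 3D environments be queried from a Linked Open Data perspective.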
   The second issue concerns the formalization of training to create the environment that is
relevant to the instructional objectives. The original hypothesis was to use formalism inspired
by activity diagrams for the main scenarios and block programming for artifact behaviors.
The instructors’ evaluation showed that it was a relatively good decision but not sufficient to
achieve the desired level of "use behavior" according to the UTAUT2 evaluation. It could be
hypothesized that the activity diagram is not relevant for everyone. Furthermore, the current
tool is mainly limited to sequential closed-ended scenarios and is not adapted to pedagogical
objectives that require open-ended scenarios and extensive interaction with the environment.
With the increasing popularity and performance of Large Language Models (LLMs), another
human-machine interaction alternative to create pedagogical scenarios would be the LLM-based
assistant. This constitutes a research opportunity to evaluate the benefits of designing assistants
in this specific context of XR content creation, both from a technical and user experience
perspective.
   The third challenge relates to the creation of 3D content. While simplifying the development
of XR training scenarios with customized interactions, this process requires users to import
3D models tailored to the specific situation—a skill not typically expected of instructors and
teachers. One potential approach is to revisit the possibilities of using AI to generate 3D models
suited for authoring systems. Neural Radiance Fields (NeRF) or Neural Graphics Primitives
(NGP) [31] are solutions currently being developed to accelerate the process of 3D modeling
for videogames, for instance. However, in the context of complex interactive objects that could
be required for XR environments (such as machines with buttons), these solutions are not
sufficiently performant and produce only a single mesh. A research perspective would be the
hybridization of these approaches with a formal description of the expected 3D model in order
to create objects with components and potential ready-to-use interactions.


6. Conclusion
Building upon the development of the HELP XR tool and user experiments, this article suggests
three research directions for advancing authoring tools in XR. The first direction involves
exploring knowledge engineering to develop vocabularies specifically designed for XR environ-
ment descriptions, aligning with emerging standards and semantic web practices. The second
direction entails investigating the use of Large Language Models (LLMs) to support content
creators in crafting XR pedagogical scenarios. Lastly, the third avenue seeks to explore the
hybridization of generative AI for 3D models in order to streamline the creation of 3D models
suitable for XR interactions driven by training scenarios.


References
 [1] S. Doolani, C. Wessels, V. Kanal, C. Sevastopoulos, A. Jaiswal, H. Nambiappan, F. Make-
     don, A Review of Extended Reality (XR) Technologies for Manufacturing Training,
     Technologies 8 (2020) 77. URL: https://www.mdpi.com/2227-7080/8/4/77. doi:10.3390/
     technologies8040077, number: 4 Publisher: Multidisciplinary Digital Publishing Insti-
     tute.
 [2] P. A. Rauschnabel, R. Felix, C. Hinsch, H. Shahab, F. Alt, What is XR? Towards a Framework
     for Augmented and Virtual Reality, Computers in Human Behavior 133 (2022) 107289.
     URL: https://linkinghub.elsevier.com/retrieve/pii/S074756322200111X. doi:10.1016/j.
     chb.2022.107289.
 [3] J. Garzón, J. Acevedo, Meta-analysis of the impact of Augmented Reality on students’
     learning gains, Educational Research Review 27 (2019) 244–260. URL: https://linkinghub.
     elsevier.com/retrieve/pii/S1747938X18301805. doi:10.1016/j.edurev.2019.04.001.
 [4] Z. A. Yilmaz, V. Batdi, Meta-Analysis of the Use of Augmented Reality Applications in
     Science Teaching, Journal of Science Learning 4 (2021) 267–274. URL: https://ejournal.upi.
     edu/index.php/jslearning/article/view/92. doi:10.17509/jsl.v4i3.30570.
 [5] L. Freina, M. Ott, A Literature Review on Immersive Virtual Reality in Education: State Of
     The Art and Perspectives., The international scientific conference elearning and software
     for education 1 (2015) 10–1007.
 [6] S. Kavanagh, A. Luxton-Reilly, B. Wuensche, B. Plimmer, A systematic review of Virtual
     Reality in education, Themes in Science & Technology Education 2 (2017) 85–119.
 [7] M. A. Rojas-Sánchez, P. R. Palos-Sánchez, J. A. Folgado-Fernández, Systematic liter-
     ature review and bibliometric analysis on virtual reality and education, Education
     and Information Technologies 28 (2023) 155–192. URL: https://link.springer.com/10.1007/
     s10639-022-11167-5. doi:10.1007/s10639-022-11167-5.
 [8] A. Dengel, M. Z. Iqbal, S. Grafe, E. Mangina, A Review on Augmented Reality Au-
     thoring Toolkits for Education, Frontiers in Virtual Reality 3 (2022) 798032. URL:
     https://www.frontiersin.org/articles/10.3389/frvir.2022.798032/full. doi:10.3389/frvir.
     2022.798032.
 [9] S. Vert, D. Andone, Virtual Reality Authoring Tools for Educators, ITM Web of Conferences
     29 (2019) 03008. URL: https://www.itm-conferences.org/10.1051/itmconf/20192903008.
     doi:10.1051/itmconf/20192903008.
[10] M. Meccawy, Teachers’ prospective attitudes towards the adoption of extended re-
     ality technologies in the classroom: interests and concerns, Smart Learning Envi-
     ronments 10 (2023) 36. URL: https://doi.org/10.1186/s40561-023-00256-8. doi:10.1186/
     s40561-023-00256-8.
[11] I. L. Chamusca, C. V. Ferreira, T. B. Murari, A. L. Apolinario, I. Winkler, Towards Sustain-
     able Virtual Reality: Gathering Design Guidelines for Intuitive Authoring Tools, Sustain-
     ability 15 (2023) 2924. URL: https://www.mdpi.com/2071-1050/15/4/2924. doi:10.3390/
     su15042924, number: 4 Publisher: Multidisciplinary Digital Publishing Institute.
[12] M. Nebeling, M. Speicher, The Trouble with Augmented Reality/Virtual Reality Authoring
     Tools, in: 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct
     (ISMAR-Adjunct), IEEE, Munich, Germany, 2018, pp. 333–337. URL: https://ieeexplore.ieee.
     org/document/8699236/. doi:10.1109/ISMAR-Adjunct.2018.00098.
[13] S. H.-W. Chuah, Why and Who Will Adopt Extended Reality Technology? Literature
     Review, Synthesis, and Future Research Agenda, 2018. URL: https://papers.ssrn.com/
     abstract=3300469. doi:10.2139/ssrn.3300469.
[14] M. Ez-zaouia, I. Marfisi-Schottman, M. Oueslati, C. Mercier, A. Karoui, S. George, A
     Design Space of Educational Authoring Tools for Augmented Reality, in: K. Kiili, K. Antti,
     F. De Rosa, M. Dindar, M. Kickmeier-Rust, F. Bellotti (Eds.), Games and Learning Alliance,
     volume 13647, Springer International Publishing, Cham, 2022, pp. 258–268. URL: https://
     link.springer.com/10.1007/978-3-031-22124-8_25. doi:10.1007/978-3-031-22124-8_
     25, series Title: Lecture Notes in Computer Science.
[15] H. Coelho, P. Monteiro, G. Gonçalves, M. Melo, M. Bessa, Authoring tools for virtual
     reality experiences: a systematic review, Multimedia Tools and Applications 81 (2022)
     28037–28060. URL: https://link.springer.com/10.1007/s11042-022-12829-9. doi:10.1007/
     s11042-022-12829-9.
[16] M. Nebeling, XR tools and where they are taking us: characterizing the evolving research
     on augmented, virtual, and mixed reality prototyping and development tools, XRDS:
     Crossroads, The ACM Magazine for Students 29 (2022) 32–38. URL: https://dl.acm.org/doi/
     10.1145/3558192. doi:10.1145/3558192.
[17] V. Pires De Oliveira, R. De Jesus Macedo, F. Vinicius De Freitas, A. Machado, T. Murari,
     I. Winkler, Virtual reality authoring tools acceptance and use: An exploratory study
     with the UTAUT2 model, in: Symposium on Virtual and Augmented Reality, ACM, Rio
     Grande Brazil, 2023, pp. 299–303. URL: https://dl.acm.org/doi/10.1145/3625008.3625053.
     doi:10.1145/3625008.3625053.
[18] R. Horst, R. Naraghi-Taghi-Off, L. Rau, R. Doerner, Authoring With Virtual Reality
     Nuggets—Lessons Learned, Frontiers in Virtual Reality 3 (2022). URL: https://www.
     frontiersin.org/articles/10.3389/frvir.2022.840729.
[19] N. Prinz, C. Rentrop, M. Huber, Low-Code Development Platforms – A Literature Review,
     in: AMCIS 2021 Proceedings, 2021.
[20] M. Resnick, J. Maloney, A. Monroy-Hernández, N. Rusk, E. Eastmond, K. Brennan, A. Mill-
     ner, E. Rosenbaum, J. Silver, B. Silverman, Y. Kafai, Scratch: programming for all, Communi-
     cations of the ACM 52 (2009) 60–67. URL: https://dl.acm.org/doi/10.1145/1592761.1592779.
     doi:10.1145/1592761.1592779.
[21] D. Pinho, A. Aguiar, V. Amaral, What about the usability in low-code platforms? A
     systematic literature review, Journal of Computer Languages 74 (2023) 101185. URL:
     https://linkinghub.elsevier.com/retrieve/pii/S259011842200082X. doi:10.1016/j.cola.
     2022.101185.
[22] S. Minaee, T. Mikolov, N. Nikzad, M. Chenaghlu, R. Socher, X. Amatriain, J. Gao, Large
     Language Models: A Survey, 2024. URL: http://arxiv.org/abs/2402.06196, arXiv:2402.06196
     [cs].
[23] M. Kazemitabaar, R. Ye, X. Wang, A. Z. Henley, P. Denny, M. Craig, T. Grossman,
     CodeAid: Evaluating a Classroom Deployment of an LLM-based Programming Assis-
     tant that Balances Student and Educator Needs, 2024. URL: http://arxiv.org/abs/2401.11314.
     doi:10.1145/3613904.3642773, arXiv:2401.11314 [cs].
[24] K. Okuda, S. Amarasinghe, AskIt: Unified Programming Interface for Programming with
     Large Language Models, 2023. URL: http://arxiv.org/abs/2308.15645, arXiv:2308.15645 [cs].
[25] Y. Cai, S. Mao, W. Wu, Z. Wang, Y. Liang, T. Ge, C. Wu, W. You, T. Song, Y. Xia, J. Tien,
     N. Duan, Low-code LLM: Visual Programming over LLMs, 2023. URL: http://arxiv.org/abs/
     2304.08103, arXiv:2304.08103 [cs].
[26] V. Venkatesh, J. Y. L. Thong, X. Xu, Consumer Acceptance and Use of Information
     Technology: Extending the Unified Theory of Acceptance and Use of Technology, MIS
     Quarterly 36 (2012) 157–178. URL: https://www.jstor.org/stable/41410412. doi:10.2307/
     41410412, publisher: Management Information Systems Research Center, University of
     Minnesota.
[27] P. Roussos, The Greek computer attitudes scale: construction and assessment of psycho-
     metric properties, Computers in Human Behavior 23 (2007) 578–590. doi:10.1016/j.
     chb.2004.10.027.
[28] F. Wild, C. Perey, B. Hensen, R. Klamma, IEEE Standard for Augmented Reality Learning
     Experience Models, in: 2020 IEEE International Conference on Teaching, Assessment,
     and Learning for Engineering (TALE), IEEE, Takamatsu, Japan, 2020, pp. 1–3. URL: https:
     //ieeexplore.ieee.org/document/9368405/. doi:10.1109/TALE48869.2020.9368405.
[29] B. Abu-Salih, MetaOntology: Toward developing an ontology for the metaverse, Frontiers
     in Big Data 5 (2022) 998648. URL: https://www.frontiersin.org/articles/10.3389/fdata.2022.
     998648/full. doi:10.3389/fdata.2022.998648.
[30] K. Li, B. P. L. Lau, X. Yuan, W. Ni, M. Guizani, C. Yuen, Toward Ubiquitous Semantic
     Metaverse: Challenges, Approaches, and Opportunities, IEEE Internet of Things Journal 10
     (2023) 21855–21872. URL: https://ieeexplore.ieee.org/abstract/document/10208153. doi:10.
     1109/JIOT.2023.3302159, conference Name: IEEE Internet of Things Journal.
[31] T. Takikawa, T. Müller, M. Nimier-David, A. Evans, S. Fidler, A. Jacobson, A. Keller,
     Compact Neural Graphics Primitives with Learned Hash Probing, in: SIGGRAPH Asia 2023
     Conference Papers, SA ’23, Association for Computing Machinery, New York, NY, USA,
     2023, pp. 1–10. URL: https://doi.org/10.1145/3610548.3618167. doi:10.1145/3610548.
     3618167.