Initial Concepts for Augmented and Virtual Reality-based Enterprise Modeling*

Fabian Muff[0000-0002-7283-6603] and Hans-Georg Fill[0000-0001-5076-5341]

University of Fribourg, Research Group Digitalization and Information Systems
fabian.muff|hans-georg.fill@unifr.ch
http://www.unifr.ch/inf/digits

Abstract. One current challenge in enterprise modeling is to establish it as a common practice in everyday work instead of its traditional role as an expert discipline. In this paper we present first steps in this direction through augmented and virtual reality-based conceptual modeling. For this purpose we developed a novel meta-metamodeling framework for augmented and virtual reality-based conceptual modeling and implemented it in a prototypical tool. This permits us to derive further requirements for the representation and processing of enterprise models in such environments.

Keywords: Conceptual Modeling · Augmented Reality · Virtual Reality

1 Introduction

One vision that has recently been formulated for enterprise modeling states that, within a few years from now, modeling shall be embedded in our daily work practices [8]. This means that people will engage in modeling without noticing it, and that it will become a common practice, just like the use of office applications today. To achieve this vision, multiple challenges must be addressed in research, including adequate model formats, the context of stakeholders, and the scope of models.

In the following, first research results in augmented and virtual reality (AR/VR)-based conceptual modeling towards realizing this vision are presented. We focus mainly on the presentation and representation of models, as well as on the scope of models. This includes the analysis of everyday work practices, the identification of adequate situations for model creation and use, as well as the selection of appropriate content in particular contexts. Thereby we build upon previous work in which we derived the constituents of AR-based applications [6].
As a sample scenario, imagine a domain expert working on a task using a machine in some business process. Suppose that the person would like to know the possible next steps in the process. In a traditional setting, this person would have to revert to a classical modeling tool and be familiar with the modeling notation used. Consider now that the person wears a head-mounted display (HMD) that automatically displays the relevant information about the process and embeds the visualization into the real world at the specific location in the form of augmented reality. This would mean that the model is embedded into the current work practice.

When analyzing this scenario, there are many aspects that must be considered for combining modeling and AR. In particular, we can revert to a previously described conceptual framework for AR and denote the different steps of the process as content and the working environment of the domain expert as context [6]. Since the different tasks should be visualized automatically, this can be considered as the interaction. As existing metamodeling approaches in the area of enterprise modeling do not yet contain AR-specific concepts, we developed a novel meta-metamodeling framework for AR/VR for realizing such scenarios. It will serve for deriving more concrete requirements in the following.

The remainder of the paper is structured as follows: In Section 2 we briefly discuss related work. In Section 3 we introduce the framework we developed for integrating AR/VR concepts in metamodeling. In Section 4 we present the additionally derived requirements for such an approach. The paper ends with a conclusion and an outlook on future work.

* Copyright © 2021 for this paper by its author. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

2 Related Work

The representation of models in three-dimensional space has been investigated by several authors.
As summarized in [3], previous approaches focused, for example, on the 3D representation of business process models, their interactive generation, or the layout of three-dimensional models. Due to technological advancement, decreasing prices of high-end devices for augmented and virtual reality applications, and the availability of high-level software libraries, more recent approaches have explored how to use AR/VR technologies in this context.

Abdul et al. presented an approach for visualizing BPMN collaboration models extracted from a standard file format in VR [1]. The user can insert suitable three-dimensional representations for the different elements. Subsequently, the process can be simulated and validated in VR. However, this approach is specific to a given purpose and cannot be adapted to other use cases.

Ruiz-Rube et al. presented a tool that focuses on a metamodeling approach for the creation of AR editors for domain-specific languages (DSL) [7]. Their main contribution lies in creating AR model editors for mobile devices. The metamodel is based on ECORE and extended with AR concepts. However, it lacks several concepts typically used in enterprise modeling, such as decomposition, ports, or attribute specifications for nodes, edges, and model instances.

Metzger et al. designed and implemented a system for interacting with virtual process models by using smart glasses [5]. Their approach permits creating and modifying process models in virtual reality; other modeling languages, however, are not directly supported.

In summary, there are several previous approaches that target the use of augmented and virtual reality in conceptual modeling. However, to the best of our knowledge, no publications so far address this topic on the meta-metalevel in a generic way.
3 A Meta-Metamodeling Framework for AR and VR

For developing a meta-metamodeling framework for AR and VR, we followed an exploratory and experimental research approach. We first investigated existing meta-metamodels as described, for example, in [4]. This permitted us to identify the relevant concepts typically used in traditional 2D metamodeling. Subsequently, we derived the concepts necessary for AR and VR representations in 3D space. This was largely influenced by the technical requirements for realizing AR and VR applications using a state-of-the-art technology stack that runs on arbitrary AR and VR devices in a web-based environment. This resulted in the meta-metamodel shown in Figure 1.

Fig. 1: UML Diagram of the Meta-Metamodel enabling AR and VR

The innovative aspect of this meta-metamodel is that it can be used simultaneously for 2D and 3D modeling. Unlike previous meta-metamodeling approaches, however, it is natively based on 3D space. It must be noted that the meta-metamodel shown in Figure 1 contains only an excerpt of the actual constructs due to space limitations. It is composed of a meta layer and an instance layer. This shows the relation between the definition of a modeling language and the instantiation of the specific objects when defining a model.

Fig. 2: Visualization of the modeling tool in a 2D browser
Fig. 3: Third-person view of a user wearing an AR HMD and looking at AR content
Fig. 4: Example of a BPMN diagram drawn in the modeling tool and shown in AR
Fig. 5: Example of an ER diagram drawn in the modeling tool and shown in AR

The main classes in the meta-metamodel inherit the general properties from the superclass metaobject. The core classes inheriting from metaobject are class, role, scene type, attribute and attribute type. The core part comprises classes and relationclasses that are contained in one or multiple scene types. A scene type represents the closed 3D space of a model. Classes, relationclasses and scene types have attributes that are further detailed with exactly one attribute type. Classes, relationclasses and scene types can be set in relation to each other by relationclasses. Each relationclass has exactly two roles assigned: a from role and a to role. Further, each role has at least one reference to a class, relationclass or scene type that defines what this role can connect to. Classes and scene types can additionally have ports, to which roles can also be assigned. All constructs inheriting from metaobject have a visual representation.
This representation is defined with a domain-specific language called VizRep, which defines the 3D representation and behavior of an object. This information is stored in the geometry attribute of metaobject. Further, each visual object has 2D coordinates for positioning in a 2D modeling environment, as well as relative 3D coordinates (relativeCoordinates3D) for positioning objects in AR and VR environments relative to the user position. These positions may differ from the coordinates used for the 2D screen representation. In addition, each metaobject may have absolute 3D coordinates (absoluteCoordinates3D) for positioning objects using real-world coordinates such as GPS coordinates or indoor positioning information.

The lower half of Figure 1 depicts the instance layer of the meta-metamodel. It shows the constructs for holding the information of the instances of the metamodel when instantiating a model, for example class, attribute and scene instances.

For evaluating the technical feasibility of the meta-metamodel, a prototypical implementation has been created using JavaScript and WebXR1 via the ThreeJS2 library. The resulting modeling tool works entirely in a 3D environment, rather than on a 2D canvas like most other modeling tools. This has the advantage that the models can be used in a traditional 2D environment by holding the depth coordinate constant, or without changes in a 3D mode for AR and VR. An example of the browser-based modeling tool is shown in Figure 2. Further, we conducted tests by specifying subsets of BPMN3 and ERD [2] with the new tool. Examples of these tests using an AR HMD in the form of a Microsoft HoloLens 2 can be seen in Figures 3, 4 and 5.
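The dual use of the same model in 2D and 3D can be illustrated with a small, hypothetical sketch: one element carries all three coordinate sets, and the rendering mode decides which one is used, with the depth coordinate held constant in 2D mode. The function and mode names are our own assumptions, not part of the prototype.

```javascript
// Hypothetical sketch: one model element with 2D coordinates, user-relative
// 3D coordinates, and absolute real-world coordinates at the same time.
const element = {
  name: "ApproveOrder",
  coordinates2D: { x: 120, y: 80 },
  relativeCoordinates3D: { x: 1.2, y: 0.8, z: -2.0 }, // meters from the user
  absoluteCoordinates3D: { lat: 46.8, lon: 7.15, alt: 630 }, // e.g. GPS anchor
};

// Pick the position for the current rendering mode; in 2D mode the
// depth coordinate is simply held constant, as described for the prototype.
function positionFor(obj, mode) {
  switch (mode) {
    case "2d":
      return { x: obj.coordinates2D.x, y: obj.coordinates2D.y, z: 0 };
    case "ar-relative":
      return obj.relativeCoordinates3D;
    case "ar-absolute":
      return obj.absoluteCoordinates3D;
    default:
      throw new Error(`unknown rendering mode: ${mode}`);
  }
}

console.log(positionFor(element, "2d")); // depth held constant at z = 0
```

In the actual tool, the selected position would then be applied to the corresponding ThreeJS object in the WebXR scene.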
First, the technology stack underlying such modeling tools must support AR and VR, including the corresponding hardware devices. Further, the graphical representation and positioning of objects must be accomplished in 3D space. For the representation of 3D geometries, the already mentioned VizRep language can be used. This is a new and generic JavaScript-based function for defining the visual representation of the different components, the corresponding labels, the attributes used for the labels, etc. It thus enables the definition of 3D objects together with their labels. This has direct consequences for the interaction with models, where novel types of user-machine interaction need to be used, as discussed in detail, for example, in [7].

Concerning the positioning of objects, an AR/VR-enabled modeling environment permits placing objects in virtual 3D space as well as attaching them to real-world coordinates, e.g. attaching a task in a process or an entity type in an ER diagram to a physical machine. Thus, this information needs to be provided in addition to the traditional 2D coordinates. This leads to new types of enterprise modeling scenarios, e.g. using enterprise models as guidance in the style of a map in the real world.

Further, one of the strengths of AR devices is to analyze the environment and recognize the situation of the user through different sensors. For integrating this in enterprise modeling, the properties of the context need to be inferred so that the models can be adapted to a specific situation. Again, this information about the context has to be made available to the objects in an enterprise model. As existing modeling languages do not consider such aspects, they will have to be adapted for this purpose, e.g. through a context attribute for a BPMN task.

1 https://www.w3.org/TR/webxr/
2 https://threejs.org/docs/
3 https://www.omg.org/spec/BPMN/
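The idea of a context attribute can be sketched as follows. This is a hypothetical illustration, under the assumption that each task declares the situation in which it is relevant and that the AR device's sensors infer the user's current situation; the attribute values and function names are our own and are not part of the BPMN standard.

```javascript
// Hypothetical sketch of context-adapted models: each BPMN-like task
// carries a "context" attribute, and the context inferred by the AR
// device's sensors selects which tasks are displayed to the user.
const tasks = [
  { name: "Set up machine",  context: { location: "machine-7", role: "operator" } },
  { name: "Approve report",  context: { location: "office",    role: "manager" } },
  { name: "Clean work area", context: { location: "machine-7", role: "operator" } },
];

// Keep only the tasks whose declared context matches every property
// of the situation inferred from the device's sensors.
function visibleTasks(allTasks, inferredContext) {
  return allTasks.filter((task) =>
    Object.entries(task.context).every(([key, value]) => inferredContext[key] === value)
  );
}

// E.g. the headset recognizes that an operator stands at machine 7:
const inferred = { location: "machine-7", role: "operator" };
console.log(visibleTasks(tasks, inferred).map((t) => t.name));
// -> [ 'Set up machine', 'Clean work area' ]
```

In the scenario from the introduction, such a filter is what would let the HMD show only the process steps relevant at the user's current machine.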
5 Conclusion and Future Work

In this paper, a first design of an AR- and VR-enabled meta-metamodeling framework as well as a prototype were presented. With first tests we could verify that the use of the framework and the implementation of some basic modeling languages for AR and VR are feasible. In further work we will extend the framework and the implementation. In particular, we will address interaction techniques and the positioning of models and their elements using real-world coordinates, as well as the integration of situational context in AR.

References

1. Abdul, B.M., Corradini, F., Re, B., Rossi, L., Tiezzi, F.: UBBA: Unity Based BPMN Animator. In: Cappiello, C., Ruiz, M. (eds.) Information Systems Engineering in Responsible Information Systems, pp. 1–9. Springer (2019)
2. Chen, P.P.: The entity-relationship model – toward a unified view of data. ACM Trans. Database Syst. 1(1), 9–36 (1976)
3. Fill, H.G.: Visualisation for Semantic Information Systems. Springer/Gabler (2009)
4. Kern, H., Hummel, A., Kühne, S.: Towards a comparative analysis of meta-metamodels. In: SPLASH '11, pp. 7–12. ACM (2011)
5. Metzger, D., Niemöller, C., Jannaber, S., Berkemeier, L., Brenning, L., Thomas, O.: The next generation? Design and implementation of a smart glasses-based modelling system. Enterp. Model. Inf. Syst. Archit. Int. J. Concept. Model. 13, 18:1–25 (2018)
6. Muff, F., Fill, H.G.: Towards embedding legal visualizations in work practices by using augmented reality. Jusletter IT, 27 May 2021 (2021)
7. Ruiz-Rube, I., Baena-Pérez, R., Mota, J.M., Sánchez, I.A.: Model-driven development of augmented reality-based editors for domain specific languages. IxD&A 45, 246–263 (2020)
8. Sandkuhl, K., Fill, H.G., Hoppenbrouwers, S., Krogstie, J., Matthes, F., Opdahl, A., Schwabe, G., Uludag, O., Winter, R.: From expert discipline to common practice: A vision and research agenda for extending the reach of enterprise modeling. BISE 60(1), 69–80 (2018)