<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Initial Concepts for Augmented and Virtual Reality-based Enterprise Modeling</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fabian Muff</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hans-Georg Fill</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Fribourg, Research Group Digitalization and Information Systems</institution>
        </aff>
      </contrib-group>
      <fpage>49</fpage>
      <lpage>54</lpage>
      <abstract>
        <p>One current challenge in enterprise modeling is to establish it as a common practice in everyday work instead of its traditional role as an expert discipline. In this paper we present first steps in this direction through augmented and virtual reality-based conceptual modeling. For this purpose we developed a novel meta-metamodeling framework for augmented and virtual reality-based conceptual modeling and implemented it in a prototypical tool. This permits us to derive further requirements for the representation and processing of enterprise models in such environments.</p>
      </abstract>
      <kwd-group>
        <kwd>Conceptual Modeling</kwd>
        <kwd>Augmented Reality</kwd>
        <kwd>Virtual Reality</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        One vision that has recently been formulated for enterprise modeling states
that, within a few years from now, modeling shall be embedded in our daily work
practices [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. This means that people engage in modeling without noticing it and that
it becomes a common practice, just like the use of office applications today. To
achieve this vision, multiple challenges must be addressed in research, including
adequate model formats, the context of stakeholders, and the scope of models.
      </p>
      <p>
        In the following, first research results in augmented and virtual reality (AR/
VR)-based conceptual modeling towards realizing this vision are presented. We
focus mainly on the presentation and representation of models, as well as on
the scope of models. This includes the analysis of everyday work practices, the
identification of adequate situations for model creation and use, as well as the
selection of appropriate content in particular contexts. Thereby we build upon
previous work where we derived constituents of AR-based applications [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>As a sample scenario, let us imagine a domain expert working on a task using
a machine in some business process. Suppose that the person would like to know
about the possible next steps in the process. In a traditional setting, this person
would have to revert to a classical modeling tool and be familiar with the used
modeling notation. Consider now that the person wears a head-mounted display
(HMD) that automatically displays the relevant information about the process
and embeds the visualization into the real world at the specific location in the
form of augmented reality. This would mean that the model is embedded into
the current work practice.</p>
      <p>Copyright © 2021 for this paper by its author. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        When analyzing this scenario, there are many aspects that must be
considered for combining modeling and AR. In particular, we can revert to a previously
described conceptual framework for AR and denote the different steps of the
process as content and the working environment of the domain expert as
context [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Since the different tasks should be visualized automatically, this can be
considered as the interaction. As existing metamodeling approaches in the area
of enterprise modeling so far do not contain AR-specific concepts, we developed
a novel meta-metamodeling framework for AR/VR for realizing such scenarios.
This will serve for deriving more concrete requirements in the following.
      </p>
      <p>The remainder of the paper is structured as follows: In Section 2 we briefly
discuss related work. In Section 3, we introduce the framework we developed
for integrating AR/VR concepts in metamodeling. In Section 4 we present the
additionally derived requirements for such an approach. The paper ends with
a conclusion and an outlook on future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        The representation of models in three-dimensional space has been investigated by
several authors. As summarized in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], previous approaches focused for example
on the 3D representation of business process models, their interactive generation
or the layout of three-dimensional models. Due to technological advancement,
decreasing prices for high-end devices for augmented and virtual reality
applications and the availability of high-level software libraries, more recent approaches
explored how to use AR/VR technologies in this context.
      </p>
      <p>
        Abdul et al. presented an approach for visualizing BPMN collaboration
models extracted from a standard file format in VR [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The user can insert suitable
three-dimensional representations for the different elements. Subsequently, the
process can be simulated and validated in VR. However, this approach is specific
to a given purpose and cannot be adapted to other use cases.
      </p>
      <p>
        Ruiz-Rube et al. presented a tool that focuses on a metamodeling approach
for the creation of AR editors for domain-specific languages (DSL) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Their
main contribution lies in creating AR model editors for mobile devices. The
metamodel is based on ECORE and extended with AR concepts. However, it
lacks several concepts typically used in enterprise modeling such as
decomposition, ports, or attribute specifications for nodes, edges, and model instances.
      </p>
      <p>
        Metzger et al. designed and implemented a system for interacting with virtual
process models by using smart glasses [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Their approach permits creating and
modifying process models in virtual reality; other modeling languages, however,
are not directly supported.
      </p>
      <p>In summary, there are several previous approaches that target the use of
augmented and virtual reality in conceptual modeling. However, to the best of
our knowledge, no publications so far address this topic on the meta-metalevel
in a generic way.</p>
    </sec>
    <sec id="sec-3">
      <title>A Meta-Metamodeling Framework for AR and VR</title>
      <p>
        For developing a meta-metamodeling framework for AR and VR, we followed
an exploratory and experimental research approach. We first investigated
existing meta-metamodels as described for example in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This permitted us to
identify the relevant concepts typically used in traditional 2D metamodeling.
Subsequently, we derived the concepts necessary for AR and VR representations
in 3D space. This was largely influenced by the technical requirements for
realizing AR and VR applications using a state-of-the-art technology stack that
would run on arbitrary AR and VR devices using a web-based environment. This
resulted in the meta-metamodel shown in Figure 1.
      </p>
      <p>[Figure 1: Excerpt of the meta-metamodel. The meta layer comprises the superclass
metaobject (with the attributes uuid, name, description, geometry, coordinates2D,
relativeCoordinates3D and absoluteCoordinates3D) together with class, relationclass,
role, port, attribute and attribute_type; the instance layer comprises class_instance,
relationclass_instance and attribute_instance, linked by relations such as
is_instance_of_class, is_instance_of_relationclass, is_instance_of_scene, has_ports,
has_attribute, role_from, role_to and contains_classes.]</p>
      <p>The innovative aspect of this meta-metamodel is that it can be
simultaneously used for 2D and 3D modeling. Unlike previous meta-metamodeling
approaches, it is however natively based on 3D space. It must be noted that the
meta-metamodel shown in Figure 1 only contains an excerpt of the actual
constructs due to limitations of space. It is composed of a meta layer and an instance
layer. This is to show the relation between the definition of a modeling language
and the instantiation of the specific objects when defining a model. The main
classes in the meta-metamodel inherit the general properties from the superclass
metaobject. The core classes inheriting from metaobject are class, role, scene type,
attribute and attribute type.</p>
      <p>The core part comprises classes and relationclasses that are contained in one
or multiple scene types. A scene type represents the closed 3D space of a model.
Classes, relationclasses and scene types have attributes that are further detailed
with exactly one attribute type. Classes, relationclasses and scene types can be
set in relation to each other by relationclasses. Each relationclass has exactly
two roles assigned: a from role and a to role. Further, each role has at least one
reference to a class, relationclass or scene type that defines to what this role
can connect. Classes and scene types can also have ports, to which roles can
likewise be assigned.</p>
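      <p>To make the relations among these constructs concrete, the following sketch encodes a minimal version of the core meta layer in JavaScript, the language of the prototype. All function and property names here are illustrative assumptions and are not taken from the actual implementation.</p>
      <preformat>
```javascript
// Hypothetical sketch of the core meta layer: a scene type contains
// classes and relationclasses; each relationclass has a from role and
// a to role, and each role references what it may connect to.
function createRole(name, references) {
  // "references" lists the classes, relationclasses or scene types
  // that this role may connect to.
  return { name, references };
}

function createRelationclass(name, roleFrom, roleTo) {
  return { name, roleFrom, roleTo, attributes: [] };
}

function createSceneType(name) {
  return { name, classes: [], relationclasses: [], attributes: [] };
}

// Example: a minimal BPMN-like scene type with a task class and a
// sequence-flow relationclass connecting tasks to tasks.
const task = { name: 'Task', ports: [], attributes: [] };
const flow = createRelationclass(
  'SequenceFlow',
  createRole('from', [task]),
  createRole('to', [task])
);
const sceneType = createSceneType('ProcessScene');
sceneType.classes.push(task);
sceneType.relationclasses.push(flow);
```
      </preformat>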
      <p>All constructs inheriting from metaobject have a visual representation. This
representation is defined with a domain-specific language called VizRep that
defines the 3D representation and behavior of an object. This information is
stored in the geometry attribute of metaobject. Further, each visual object has 2D
coordinates for positioning in a 2D modeling environment, as well as relative
3D coordinates (relativeCoordinates3D) for positioning objects in AR and VR
environments relative to the user position. These positions may differ from the
coordinates used for the 2D screen representation. Further, each metaobject may
have absolute 3D coordinates (absoluteCoordinates3D) for positioning objects
using real-world coordinates such as GPS coordinates or indoor positioning
information.</p>
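      <p>The three coordinate sets of a metaobject can be pictured as in the following sketch. Only the attribute names (uuid, geometry, coordinates2D, relativeCoordinates3D, absoluteCoordinates3D) follow the meta-metamodel; the factory function and example values are illustrative assumptions.</p>
      <preformat>
```javascript
// Sketch of a metaobject carrying the visual representation and the
// three coordinate sets described in the text.
let nextId = 0;
function createMetaobject(name, description) {
  nextId += 1;
  return {
    uuid: String(nextId),          // stand-in for a real UUID
    name,
    description,
    geometry: null,                // VizRep definition of the 3D shape
    coordinates2D: { x: 0, y: 0 }, // position on a 2D canvas
    relativeCoordinates3D: { x: 0, y: 0, z: 0 }, // relative to the user
    absoluteCoordinates3D: null    // e.g. GPS or indoor positioning
  };
}

const machineTask = createMetaobject('Check machine', 'A BPMN-like task');
// Place the object two meters in front of the user in AR/VR ...
machineTask.relativeCoordinates3D = { x: 1.5, y: 0, z: -2.0 };
// ... or anchor it to a real-world GPS position.
machineTask.absoluteCoordinates3D = { lat: 46.806, lon: 7.153, alt: 630 };
```
      </preformat>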
      <p>In the lower half of Figure 1, the instance layer of the meta-metamodel is
depicted. It shows the constructs for holding the information of instances of the
metamodel when instantiating a model, for example class, attribute and scene
instances.</p>
      <p>
        For evaluating the technical feasibility of the meta-metamodel, a
prototypical implementation has been created using JavaScript and WebXR
(https://www.w3.org/TR/webxr/) via the ThreeJS library (https://threejs.org/docs/).
The resulting modeling tool works entirely in a 3D
environment, and not, like most other modeling tools, on a 2D canvas. This has the
advantage that the models can be used in a traditional 2D environment by
holding the depth coordinate constant, or without changes in a 3D mode for
AR and VR. An example of the browser-based modeling tool is shown in
Figure 2. Further, we conducted tests by specifying subsets of BPMN
(https://www.omg.org/spec/BPMN/) and ERD
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] with the new tool. Examples of these tests using an AR head-mounted
display (HMD) in the form of a Microsoft HoloLens 2 can be seen in Figures 3, 4
and 5.
      </p>
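      <p>The 2D fallback described above, i.e. using a 3D model on a flat canvas by holding the depth coordinate constant, can be pictured as a simple projection. The following sketch is an assumption about how such a projection could look, not the tool's actual code.</p>
      <preformat>
```javascript
// Project 3D model element positions onto a 2D canvas by fixing the
// depth (z) coordinate, as described for the 2D fallback mode.
function projectTo2D(elements, fixedZ) {
  return elements.map(function (el) {
    return {
      name: el.name,
      coordinates2D: { x: el.position.x, y: el.position.y },
      // depth is held constant so the layout stays stable in 2D
      depth: fixedZ
    };
  });
}

const elements = [
  { name: 'Start', position: { x: 0, y: 0, z: 1.2 } },
  { name: 'Task A', position: { x: 2, y: 0, z: 0.4 } }
];
const flat = projectTo2D(elements, 0);
```
      </preformat>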
    </sec>
    <sec id="sec-5">
      <title>Requirements for Enterprise Modeling in AR and VR</title>
      <p>
        With the insights gained above, we can formulate the following requirements
for enterprise modeling in AR/VR. First, the technology stack underlying such
modeling tools must support AR and VR, including the corresponding hardware
devices. Further, the graphical representation and positioning of objects must be
accomplished in 3D space. For the representation of 3D geometries, the already
mentioned VizRep language can be used. This is a new and generic JavaScript
function to define the visual representation of the different components, the
corresponding labels, the attributes used for the labels, etc. This enables the
definition of 3D objects and the specification of corresponding labels. This has direct
consequences for the interaction with models, where novel types of user-machine
interaction need to be used, as for example discussed in detail in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
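      <p>As an illustration only, a VizRep-style definition might take the shape of a JavaScript function that returns the geometry and label configuration for a construct. The actual VizRep language may look quite different; this sketch merely mirrors the description in the text, and all names are hypothetical.</p>
      <preformat>
```javascript
// Hypothetical shape of a VizRep-style definition: a function that
// returns the 3D geometry and the label configuration for a construct.
function taskVizRep(instance) {
  return {
    geometry: { type: 'box', width: 1.0, height: 0.5, depth: 0.2 },
    labels: [
      {
        attribute: 'name',        // which attribute feeds the label
        text: instance.name,
        offset: { x: 0, y: 0.4, z: 0 } // label floats above the box
      }
    ]
  };
}

const rep = taskVizRep({ name: 'Approve order' });
```
      </preformat>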
      <p>Concerning the positioning of objects, an AR/VR-enabled modeling
environment permits placing objects in virtual 3D space as well as attaching them
to real-world coordinates, e.g. attaching a task in a process or an entity type
in an ER diagram to a physical machine. Thus, this information needs to be
provided in addition to the traditional 2D coordinates. This leads to new types
of enterprise modeling scenarios, e.g. using enterprise models as guidance in
the style of a map in the real world.</p>
      <p>Further, one of the strengths of AR devices is to analyze the environment and
recognize the situation of the user through different sensors. For integrating this
into enterprise modeling, the properties of the context need to be inferred so that
the models can be adapted to a specific situation. Again, this information about
the context has to be made available for the objects in an enterprise model. As
existing modeling languages do not consider such aspects, they will have to be
adapted for this purpose, e.g. by a context attribute for a BPMN task.</p>
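      <p>Such a context attribute could be used to filter which model content is shown to the user, as in the following sketch. The attribute names (context, location) are illustrative assumptions, not part of an existing modeling language.</p>
      <preformat>
```javascript
// Sketch of adapting model content to the sensed situation: only tasks
// whose (hypothetical) context attribute matches the sensed context are
// shown; tasks without a context attribute are always visible.
function visibleInContext(tasks, sensedContext) {
  return tasks.filter(function (task) {
    if (!task.context) { return true; }
    return task.context.location === sensedContext.location;
  });
}

const tasks = [
  { name: 'Load material', context: { location: 'machine-7' } },
  { name: 'Approve order', context: { location: 'office' } },
  { name: 'Safety check' }
];
// The AR device recognizes that the user stands at machine 7.
const shown = visibleInContext(tasks, { location: 'machine-7' });
```
      </preformat>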
    </sec>
    <sec id="sec-6">
      <title>Conclusion and Future Work</title>
      <p>In this paper, a first design of an AR- and VR-enabled meta-metamodeling
framework as well as a prototype were shown. With first tests we could verify that the
use of the framework and the implementation of some basic modeling languages
for AR and VR is feasible. In further work we will extend the framework and
the implementation. In particular, we will address interaction techniques, the
positioning of models and their elements using real-world coordinates, and
the integration of situational context in AR.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Abdul</surname>
            ,
            <given-names>B.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corradini</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Re</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tiezzi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>UBBA: Unity Based BPMN Animator</article-title>
          . In: Cappiello,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Ruiz</surname>
          </string-name>
          , M. (eds.)
          <source>Information Systems Engineering in Responsible Information Systems</source>
          . pp.
          <volume>1</volume>
          –
          <issue>9</issue>
          . Springer (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>P.P.:</given-names>
          </string-name>
          <article-title>The entity-relationship model - toward a unified view of data</article-title>
          .
          <source>ACM Trans. Database Syst</source>
          .
          <volume>1</volume>
          (
          <issue>1</issue>
          ),
          <volume>9</volume>
          –
          <fpage>36</fpage>
          (
          <year>1976</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Fill</surname>
            ,
            <given-names>H.G.</given-names>
          </string-name>
          :
          <article-title>Visualisation for Semantic Information Systems</article-title>
          . Springer/Gabler (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Kern</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hummel</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , Kuhne, S.:
          <article-title>Towards a comparative analysis of metametamodels</article-title>
          .
          <source>In: SPLASH '11</source>
          . pp.
          <volume>7</volume>
          –
          <fpage>12</fpage>
          .
          <string-name>
            <surname>ACM</surname>
          </string-name>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Metzger</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , Niemoller,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Jannaber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Berkemeier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            ,
            <surname>Brenning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            ,
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <surname>O.</surname>
          </string-name>
          :
          <article-title>The next generation: design and implementation of a smart glasses-based modelling system</article-title>
          .
          <source>Enterp. Model. Inf. Syst. Archit. Int. J. Concept. Model</source>
          .
          <volume>13</volume>
          ,
          <issue>18</issue>
          :1–
          <issue>25</issue>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Muff</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fill</surname>
            ,
            <given-names>H.G.</given-names>
          </string-name>
          :
          <article-title>Towards embedding legal visualizations in work practices by using augmented reality</article-title>
          .
          <source>Jusletter IT 27 May</source>
          <year>2021</year>
          (
          <year>2021</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ruiz-Rube</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baena-Perez</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mota</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sanchez</surname>
            ,
            <given-names>I.A.</given-names>
          </string-name>
          :
          <article-title>Model-driven development of augmented reality-based editors for domain specific languages</article-title>
          .
          <source>IxD&amp;A</source>
          <volume>45</volume>
          ,
          <volume>246</volume>
          –
          <fpage>263</fpage>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Sandkuhl</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fill</surname>
            ,
            <given-names>H.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoppenbrouwers</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krogstie</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matthes</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Opdahl</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwabe</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uludag</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Winter</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>From expert discipline to common practice: A vision and research agenda for extending the reach of enterprise modeling</article-title>
          .
          <source>BISE</source>
          <volume>60</volume>
          (
          <issue>1</issue>
          ),
          <volume>69</volume>
          –
          <fpage>80</fpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>