<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Initial Concepts for Augmented and Virtual Reality-based Enterprise Modeling</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Fabian</forename><surname>Muff</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Research Group Digitalization and Information Systems</orgName>
								<orgName type="institution">University of Fribourg</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Hans-Georg</forename><surname>Fill</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Research Group Digitalization and Information Systems</orgName>
								<orgName type="institution">University of Fribourg</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Initial Concepts for Augmented and Virtual Reality-based Enterprise Modeling</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">FA0DA96854E9B5685B613FBF5398611F</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T05:29+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Conceptual Modeling</term>
					<term>Augmented Reality</term>
					<term>Virtual Reality</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>One current challenge in enterprise modeling is to establish it as a common practice in everyday work instead of its traditional role as an expert discipline. In this paper we present first steps in this direction through augmented and virtual reality-based conceptual modeling. For this purpose we developed a novel meta-metamodeling framework for augmented and virtual reality-based conceptual modeling and implemented it in a prototypical tool. This permits us to derive further requirements for the representation and processing of enterprise models in such environments.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>One vision that has recently been formulated for enterprise modeling states that within a few years, modeling shall be embedded in our daily work practices <ref type="bibr" target="#b7">[8]</ref>. This means that people will engage in modeling without noticing it, and that it will become a common practice, just like the use of office applications today. To achieve this vision, multiple research challenges must be addressed, including adequate model formats, the context of stakeholders, and the scope of models.</p><p>In the following, we present first research results in augmented and virtual reality (AR/VR)-based conceptual modeling towards realizing this vision. We focus mainly on the presentation and representation of models, as well as on the scope of models. This includes the analysis of everyday work practices, the identification of adequate situations for model creation and use, and the selection of appropriate content in particular contexts. In doing so, we build upon previous work in which we derived the constituents of AR-based applications <ref type="bibr" target="#b5">[6]</ref>.</p><p>As a sample scenario, let us imagine a domain expert working on a task using a machine in some business process. Suppose that the person would like to know about the possible next steps in the process. In a traditional setting, this person would have to resort to a classical modeling tool and be familiar with the modeling notation used. Consider now that the person wears a head-mounted display (HMD) that automatically displays the relevant information about the process and embeds the visualization into the real world at the specific location in the form of augmented reality. This would mean that the model is embedded into the current work practice.</p><p>When analyzing this scenario, there are many aspects that must be considered for combining modeling and AR. 
In particular, we can draw on a previously described conceptual framework for AR and denote the different steps of the process as content and the working environment of the domain expert as context <ref type="bibr" target="#b5">[6]</ref>. Since the different tasks should be visualized automatically, this can be considered as the interaction. As existing metamodeling approaches in the area of enterprise modeling do not yet contain AR-specific concepts, we developed a novel meta-metamodeling framework for AR/VR for realizing such scenarios. This will serve to derive more concrete requirements in the following.</p><p>The remainder of the paper is structured as follows: In Section 2, we briefly discuss related work. In Section 3, we introduce the framework we developed for integrating AR/VR concepts in metamodeling. In Section 4, we present the additionally derived requirements for such an approach. The paper ends with a conclusion and an outlook on future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>The representation of models in three-dimensional space has been investigated by several authors. As summarized in <ref type="bibr" target="#b2">[3]</ref>, previous approaches focused, for example, on the 3D representation of business process models, their interactive generation, or the layout of three-dimensional models. Due to technological advancement, decreasing prices for high-end augmented and virtual reality devices, and the availability of high-level software libraries, more recent approaches have explored how to use AR/VR technologies in this context.</p><p>Abdul et al. presented an approach for visualizing BPMN collaboration models extracted from a standard file format in VR <ref type="bibr" target="#b0">[1]</ref>. The user can insert suitable three-dimensional representations for the different elements. Subsequently, the process can be simulated and validated in VR. However, this approach is specific to a given purpose and cannot be adapted to other use cases.</p><p>Ruiz-Rube et al. presented a tool that focuses on a metamodeling approach for the creation of AR editors for domain-specific languages (DSL) <ref type="bibr" target="#b6">[7]</ref>. Their main contribution lies in creating AR model editors for mobile devices. The metamodel is based on ECORE and extended with AR concepts. However, it lacks several concepts typically used in enterprise modeling, such as decomposition, ports, or attribute specifications for nodes, edges, and model instances.</p><p>Metzger et al. designed and implemented a system for interacting with virtual process models by using smart glasses <ref type="bibr" target="#b4">[5]</ref>. 
Their approach permits creating and modifying process models in virtual reality; other modeling languages, however, are not directly supported.</p><p>In summary, several previous approaches target the use of augmented and virtual reality in conceptual modeling. However, to the best of our knowledge, no publications so far address this topic on the meta-metalevel in a generic way.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">A Meta-Metamodeling Framework for AR and VR</head><p>For developing a meta-metamodeling framework for AR and VR, we followed an exploratory and experimental research approach. We first investigated existing meta-metamodels as described, for example, in <ref type="bibr" target="#b3">[4]</ref>. This permitted us to identify the relevant concepts typically used in traditional 2D metamodeling. Subsequently, we derived the concepts necessary for AR and VR representations in 3D space. This was largely influenced by the technical requirements for realizing AR and VR applications using a state-of-the-art technology stack that would run on arbitrary AR and VR devices in a web-based environment. This resulted in the meta-metamodel shown in Figure <ref type="figure">1</ref>. The innovative aspect of this meta-metamodel is that it can be used simultaneously for 2D and 3D modeling. Unlike previous meta-metamodeling approaches, however, it is natively based on 3D space. It must be noted that Figure <ref type="figure">1</ref> only contains an excerpt of the actual constructs due to space limitations. It is composed of a meta layer and an instance layer. This shows the relation between the definition of a modeling language and the instantiation of the specific objects when defining a model. The main classes in the meta-metamodel inherit the general properties from the superclass metaobject. The core classes inheriting from metaobject are class, role, scene type, attribute and attribute type.</p><p>The core part comprises classes and relationclasses that are contained in one or multiple scene types. A scene type represents the closed 3D space of a model. Classes, relationclasses and scene types have attributes that are further detailed with exactly one attribute type. Classes, relationclasses and scene types can be set in relation to each other by relationclasses. Each relationclass has exactly two roles assigned. 
These are a from role and a to role. Further, each role has at least one reference to a class, relationclass or scene type that defines what this role can connect to. Furthermore, classes and scene types can have ports, to which roles can also be assigned.</p><p>All constructs inheriting from metaobject have a visual representation. This representation is defined with a domain-specific language called VizRep, which defines the 3D representation and behavior of an object. This information is stored in the geometry attribute in metaobject. Further, each visual object has 2D coordinates for positioning in a 2D modeling environment, as well as relative 3D coordinates (relativeCoordinates3D) for positioning objects in AR and VR environments relative to the user position. These positions may differ from the coordinates used for the 2D screen representation. In addition, each metaobject may have absolute 3D coordinates (absoluteCoordinates3D) for positioning objects using real-world coordinates such as GPS coordinates or indoor positioning information.</p><p>In the lower half of Figure <ref type="figure">1</ref>, the instance layer of the meta-metamodel is depicted. It shows the constructs for holding the information of the instances of the metamodel when instantiating a model, for example instances for classes, attributes and scenes.</p><p>For evaluating the technical feasibility of the meta-metamodel, a prototypical implementation has been created using JavaScript and WebXR<ref type="foot" target="#foot_0">1</ref> via the ThreeJS<ref type="foot" target="#foot_1">2</ref> library. The resulting modeling tool works entirely in a 3D environment, rather than on a 2D canvas like most other modeling tools. This has the advantage that models can be used in a traditional 2D environment by holding the depth coordinate constant, or without changes in a 3D mode for AR and VR. An example of the browser-based modeling tool is shown in Figure <ref type="figure">2</ref>. 
Further, we conducted tests by specifying subsets of BPMN <ref type="foot" target="#foot_2">3</ref> and ERD <ref type="bibr" target="#b1">[2]</ref> with the new tool. Examples of these tests using an AR HMD in the form of a Microsoft HoloLens 2 can be seen in Figures <ref type="figure">3, 4</ref> and 5.</p></div>
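To make the core of the meta-metamodel more concrete, the following sketch shows how its main constructs (metaobject, class, relationclass, role, scene type, and the three coordinate sets) could be expressed in JavaScript, the language of the prototype. All names and structures here are illustrative assumptions, not the actual API of the implemented framework.

```javascript
// Minimal sketch of the meta layer (hypothetical names, for illustration only).
class MetaObject {
  constructor(name, geometry) {
    this.name = name;
    this.geometry = geometry;            // VizRep-style visual representation
    this.coordinates2D = { x: 0, y: 0 }; // position on a 2D canvas
    this.relativeCoordinates3D = { x: 0, y: 0, z: 0 }; // relative to the user
    this.absoluteCoordinates3D = null;   // e.g. GPS or indoor positioning
    this.attributes = [];
  }
}

class Class extends MetaObject {}

class SceneType extends MetaObject {
  // a scene type represents the closed 3D space of a model
  constructor(name, geometry) { super(name, geometry); this.contained = []; }
}

class Role {
  // a role references the classes, relationclasses or scene types it may connect
  constructor(references) { this.references = references; }
}

class RelationClass extends MetaObject {
  constructor(name, fromRole, toRole) {
    super(name, null);
    this.fromRole = fromRole; // exactly two roles per relationclass
    this.toRole = toRole;
  }
}

// Example: a tiny BPMN-like subset defined with these constructs
const task = new Class("Task", "vizrep:cube");
const event = new Class("Event", "vizrep:sphere");
const flow = new RelationClass("SequenceFlow",
  new Role([task, event]), new Role([task, event]));

const process = new SceneType("BusinessProcess", null);
process.contained.push(task, event, flow);
```

Instantiating a model would then create instance-layer counterparts of these metaobjects, mirroring the lower half of Figure 1.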
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Requirements for Enterprise Modeling in AR and VR</head><p>With the insights gained above, we can formulate the following requirements for enterprise modeling in AR/VR. First, the technology stack underlying such modeling tools must support AR and VR, including the corresponding hardware devices. Further, the graphical representation and positioning of objects must be accomplished in 3D space. For the representation of 3D geometries, the already mentioned VizRep language can be used. This is a new and generic JavaScript function for defining the visual representation of the different components, the corresponding labels, the attributes used for the labels, etc. This enables the definition of 3D objects and the specification of the corresponding labels. This has direct consequences for the interaction with models, where novel types of user-machine interaction need to be used, as discussed in detail, for example, in <ref type="bibr" target="#b6">[7]</ref>.</p><p>Concerning the positioning of objects, an AR/VR-enabled modeling environment permits placing objects in virtual 3D space as well as attaching them to real-world coordinates, e.g. attaching a task in a process or an entity type in an ER diagram to a physical machine. Thus, this information needs to be provided in addition to the traditional 2D coordinates. This leads to new types of enterprise modeling scenarios, e.g. using enterprise models as guidance in the style of a map in the real world.</p><p>Further, one of the strengths of AR devices is to analyze the environment and recognize the situation of the user through different sensors. For integrating this in enterprise modeling, the properties of the context need to be inferred so that the models can be adapted to a specific situation. Again, this information about the context has to be made available to the objects in an enterprise model. 
As existing modeling languages do not consider such aspects, they will have to be adapted for this purpose, e.g. by adding a context attribute to a BPMN task.</p></div>
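The two requirements above, a VizRep-style representation function and a context attribute on a language element, could be sketched as follows. The signature of the actual VizRep language is not published here, so the names, the returned structure, and the context fields are assumptions made for illustration.

```javascript
// Hypothetical sketch of a VizRep-style definition: a function that returns
// the 3D geometry and label specification for a model element.
function vizRepTask(element) {
  return {
    geometry: { shape: "box", width: 0.4, height: 0.25, depth: 0.05 },
    label: { text: element.name, attribute: "name", offsetY: 0.2 },
  };
}

// A context attribute, as suggested for adapting a BPMN task to the
// situation sensed by an AR device (assumed fields: location and trigger).
const approveOrder = {
  name: "Approve order",
  context: { location: "machine-7", trigger: "user-nearby" },
};

const rep = vizRepTask(approveOrder);
// An AR runtime could now render rep.geometry at the physical location
// referenced by approveOrder.context.location.
```

The design choice here is that visualization stays a pure function of the element, so the same definition can drive a 2D canvas, an AR overlay, or a VR scene.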
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion and Future Work</head><p>In this paper, we presented a first design of an AR- and VR-enabled meta-metamodeling framework as well as a prototype. In first tests, we could verify that using the framework and implementing some basic modeling languages for AR and VR is feasible. In future work, we will extend the framework and the implementation. In particular, we will address interaction techniques and the positioning of models and their elements using real-world coordinates, as well as the integration of situational context in AR.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1:</head><label>1</label><figDesc>Fig. 1: UML Diagram of the Meta-Metamodel enabling AR and VR</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 :Fig. 4 :Fig. 5 :</head><label>245</label><figDesc>Fig. 2: Visualization of the modeling tool in a 2D browser on an AR HMD Fig. 3: 3rd person view of a user wearing an AR HMD and looking at AR content</figDesc><graphic coords="4,137.73,115.84,157.35,134.78" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.w3.org/TR/webxr/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://threejs.org/docs/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://www.omg.org/spec/BPMN/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">UBBA: Unity Based BPMN Animator</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">M</forename><surname>Abdul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Corradini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Re</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rossi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Tiezzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Information Systems Engineering in Responsible Information Systems</title>
				<editor>
			<persName><forename type="first">C</forename><surname>Cappiello</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Ruiz</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="9" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The entity-relationship model -toward a unified view of data</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">P</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Database Syst</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="9" to="36" />
			<date type="published" when="1976">1976</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Visualisation for Semantic Information Systems</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">G</forename><surname>Fill</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>Springer/Gabler</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Towards a comparative analysis of meta-metamodels</title>
		<author>
			<persName><forename type="first">H</forename><surname>Kern</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hummel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kühne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SPLASH &apos;11</title>
				<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="7" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The next generation? Design and implementation of a smart glasses-based modelling system</title>
		<author>
			<persName><forename type="first">D</forename><surname>Metzger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Niemöller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jannaber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Berkemeier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Brenning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Thomas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Enterp. Model. Inf. Syst. Archit. Int. J. Concept. Model</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="1" to="25" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Towards embedding legal visualizations in work practices by using augmented reality</title>
		<author>
			<persName><forename type="first">F</forename><surname>Muff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">G</forename><surname>Fill</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Jusletter IT</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<date type="published" when="2021-05">May 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Model-driven development of augmented reality-based editors for domain specific languages</title>
		<author>
			<persName><forename type="first">I</forename><surname>Ruiz-Rube</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Baena-Pérez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Mota</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">A</forename><surname>Sánchez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IxD&amp;A</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="246" to="263" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">From expert discipline to common practice: A vision and research agenda for extending the reach of enterprise modeling</title>
		<author>
			<persName><forename type="first">K</forename><surname>Sandkuhl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">G</forename><surname>Fill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hoppenbrouwers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Krogstie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Matthes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Opdahl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Schwabe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Uludag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Winter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">BISE</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="69" to="80" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
