Practically Applicable Enterprise Models: A Research Project Toward a User-oriented Design Method

Merijn van den Oever¹, Ben Roelens¹,² and Dominik Bork³

¹ Open Universiteit, Valkenburgerweg 177, 6419 AT Heerlen, The Netherlands
² Ghent University, Tweekerkenstraat 2, 9000 Ghent, Belgium
³ TU Wien, Favoritenstr. 9-11, 1040 Vienna, Austria

Abstract
Enterprise Modeling is far from reaching its maximum potential. An important reason is that opposing stakeholder concerns lead to context-dependent models that are not mutually related, resulting in an inconsistent enterprise model landscape. This causes unsustainable model utilization, since models are neither used across different focal areas nor over a longer period of time. The aim of our research project is to increase the value of Enterprise Modeling by creating a design method that supports users in designing practically applicable models and a model integration method that integrates locally created models into an overarching enterprise model landscape while maintaining model consistency.

Keywords: Enterprise Modeling, Conceptual Modeling, Design method, User orientation

Proceedings of the 16th International Workshop on Value Modelling and Business Ontologies (VMBO 2022), held in conjunction with the 34th International Conference on Advanced Information Systems Engineering (CAiSE 2022), June 06–10, 2022, Leuven, Belgium.
Contact: merijnvdoever@hotmail.com (M. van den Oever); ben.roelens@ou.nl (B. Roelens); dominik.bork@tuwien.ac.at (D. Bork)
ORCID: 0000-0002-0672-8612 (M. van den Oever); 0000-0002-2443-8678 (B. Roelens); 0000-0001-8259-2297 (D. Bork)

1. Problem Identification & Motivation

Even though its benefits and its contribution to organizational tasks are largely undisputed, Enterprise Modeling (EM) is far from reaching its maximum potential. EM is oriented towards “the systematic analysis and modeling of processes, organization and product structures, IT-systems and any other perspective relevant for the modeling purpose” [1, p. 70] and is invaluable for analyzing existing and creating new information systems [2]. In this respect, Sandkuhl et al. [1] have formulated a vision to solve two particular issues: (i) the co-existence of contradictory stakeholder concerns, and (ii) unsustainable model utilization.

An example of these opposing stakeholder interests is the fact that enterprise models are often developed by a limited group of organizational stakeholders to achieve a specific goal [3]. This leads to a situation-specific modeling purpose: either to change the world with a model (i.e., intervening) or by a model (i.e., bringing about a new reality), or to change the mind via model construction (i.e., understanding), model manipulation (i.e., problem solving), or communication (i.e., documenting) [4]. Based on this purpose, stakeholders have specific interests (i.e., concerns) with respect to enterprise models. Therefore, most models are suitable for one specific situation [3] and only have value for the stakeholder group involved [4]. As a consequence, enterprise models are not practically applicable for other business stakeholders, because they are difficult to understand [5] and do not provide information about the areas of interest that are important to them [6].
That is why stakeholders end up making their own models, during which they experience a multitude of difficulties related to the technical knowledge that is required, as well as the inability to design personalized models due to the stringent requirements usually associated with EM [1]. When stakeholders do manage to design their own models, this results in context-dependent models whose scopes are not necessarily mutually coherent [1]. This leads to an inconsistent enterprise model landscape [7], which is fragmented and recorded in multiple tools without connections between the constituting models [5]. Hence, it is not easy to gain access to and integrate locally created models [8]. This results in unsustainable model usage, since models are neither used across different focal areas nor over a longer period of time [9].

To embed modeling into everyday work more easily, users must be able to design models based on their specific requirements, with less technical knowledge and without being constrained by stringent requirements. In addition, to ensure long-term added value, it should be possible to integrate and combine these locally designed models in an overarching enterprise model landscape, so that modeling by experts and non-experts will eventually exist in synergy [1]. This leads to the following research question (RQ):

• RQ: How can practically applicable models be created that can easily be integrated into the enterprise model landscape of an organization?

To solve this problem, the following sub-questions should be answered:

• RQ1: How can we support users in formulating requirements for models that are adapted to their specific work context?

With this, we learn more about how we can collect user requirements that serve as input for the design of local models. The actual translation of these requirements into a local model design is studied in RQ2:

• RQ2: How can we support users in designing models based on their requirements?

Now, we know how local models can be created based on user requirements. However, these models are not yet connected to enterprise models and thus do not provide long-term added value. To overcome this challenge, we need to explore possibilities to link these local models with enterprise models and eventually foster synergistic modeling by experts and non-experts. This results in the final research question:

• RQ3: How can we combine and integrate locally created models within an overarching enterprise model landscape?

This paper is structured as follows. Sect. 2 describes the proposed solution for a user-oriented design method for practically applicable enterprise models, consisting of a requirements documentation method (Sect. 2.1), a design method for local models (Sect. 2.2), and a model integration method (Sect. 2.3). Finally, Sect. 3 gives an outlook on the timing and communication in this research project.

2. Proposed Solution

Figure 1 shows the proposed solution encompassing RQ1, RQ2, and RQ3. To enable users to design personalized models, in RQ1 we create a requirements documentation method to collect their requirements. Then, in RQ2, we focus on a design method to convert these requirements into a specification for personalized models (i.e., viewpoints) and subsequently realize a modeling technique based on it.
Thereafter, the user is able to use the personalized modeling technique, resulting in a local model that helps to solve the intended business problem. In RQ3, we concentrate on a model integration method that makes it possible to integrate these local models within an overarching enterprise model landscape that maintains a consistent architectural overview. Afterwards, users are able to retrieve and reuse the local model. If the retrieved model is sufficient to solve the business problem, the user can use the model directly. Otherwise, the modeling technique has to be adapted using the design method of RQ2.

Figure 1: Proposed solution

This research follows the Design Science Research paradigm, which aims at building and evaluating artifacts that both address real-world problems and produce outcomes that contribute to the academic body of knowledge [10]. In particular, three artifacts will be developed to address the respective RQs: (i) a requirements documentation method (Sect. 2.1), (ii) a design method for local models (Sect. 2.2), and (iii) a model integration method (Sect. 2.3).

2.1. Requirements Documentation Method

Design. To design a requirements documentation method, we need to identify (i) the most important requirements documentation methods that are able to derive conceptual models (i.e., viewpoints, see Sect. 2.2) from user requirements, as well as (ii) the relevant contextual factors that affect their task performance. Then we need to search for (iii) a suitable comparison method to evaluate which documentation method is most appropriate to solve our problem. After the actual comparison, we have to complete the design principles with (iv) insights from the related research areas: practice theory, computer-based EM tools, and gamification [1]. This will allow us to develop an initial version of the requirements tool.

Given its recency and relevance, we consider the work of Dalpiaz et al. [11] as the starting point of our literature search for (i) and (ii). We therefore start with backward snowballing on this work, where possible supplemented with forward snowballing. Simultaneously, we build a keyword library that we can use to perform a literature search to ensure that we do not miss any important techniques. For (iii), we use the Quality User Story Framework (QUSF) [12]. This framework consists of thirteen criteria organized into the three quality categories for conceptual modeling of Lindland et al. [13]: syntactic, semantic, and pragmatic quality. Since the original framework is focused on user stories, we adjust the descriptions of the criteria to analyze to what extent the Requirements Engineering methods found in (i) are able to document requirements for Domain-Specific Modeling Languages (DSMLs). If this framework turns out not to be feasible, we will search for a suitable alternative. As for (iv), we will investigate these research areas by performing backward and forward snowballing on the references in [1].
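To make the intended use of the quality criteria in step (iii) more concrete, the following minimal sketch (in Python) shows how a subset of QUSF criteria, grouped into the three quality categories of Lindland et al. [13], could be represented as a checklist and used to score a documented requirement. The grouping shown here, the example requirement, and the score_per_category helper are illustrative assumptions; the actual criterion descriptions will only be fixed after the adjustment for DSML requirements described above.

```python
from dataclasses import dataclass, field

# Illustrative checklist: a subset of QUSF criteria grouped into the three
# quality categories of Lindland et al. The descriptions would be rewritten
# for DSML requirements during the comparison step.
QUSF_CRITERIA = {
    "syntactic": ["well-formed", "atomic"],
    "semantic": ["unambiguous", "conflict-free"],
    "pragmatic": ["unique", "estimatable"],
}

@dataclass
class RequirementAssessment:
    requirement: str                            # the documented user requirement
    passed: dict = field(default_factory=dict)  # criterion -> True/False

    def score_per_category(self) -> dict:
        """Share of satisfied criteria per quality category."""
        scores = {}
        for category, criteria in QUSF_CRITERIA.items():
            judged = [self.passed.get(criterion, False) for criterion in criteria]
            scores[category] = sum(judged) / len(criteria)
        return scores

# Usage: an analyst judges each criterion manually and records the verdicts.
assessment = RequirementAssessment(
    requirement="As a mortgage advisor, I want to see the approval steps of a loan request.",
    passed={"well-formed": True, "atomic": True, "unambiguous": False,
            "conflict-free": True, "unique": True, "estimatable": True},
)
print(assessment.score_per_category())
```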
Demonstration & Evaluation. We use the case study research method, which encompasses both demonstration and evaluation. A case study consists of an in-depth inquiry into a specific and complex phenomenon (the ‘case’), which is set within its real-world context. The research design consists of five components that are elaborated below: (i) study question, (ii) propositions, (iii) unit of analysis, (iv) logic of linking data to propositions, and (v) criteria for interpreting the findings.

Study Question. The study question that we want to answer in this case study is:

• To what extent is the requirements documentation method practically applicable to support users in formulating requirements for models that are adapted to their specific work context?

Propositions. Propositions (P) give direction to what will be studied within the scope of the study question. In our case, we are interested in how end users evaluate the efficiency and effectiveness criteria:

• P1: The requirements documentation method is efficient
• P2: The requirements documentation method is effective

Unit of Analysis. Since DSMLs are used to foster understanding and communication within a stakeholder group, the unit of analysis is an event in which a group of users formulates requirements for a desired model (i.e., one adapted to their specific work context) using the requirements documentation method. Since there is only one unit of analysis, a holistic case design is applied. As Yin [14] states that replicating the same findings in similar case studies strengthens generalization, we will conduct four case studies: the first case establishes a baseline, which we replicate in a second case study of a similar case type. If that is successful, we try to replicate the same results in two studies with other case types. The research will take place in a Dutch organization in the financial sector, which mainly focuses on financing mortgages, managing savings, and offering current accounts. The case studies are conducted at the operational, tactical, and strategic organizational levels. The first and second case focus on the operational level, because we consider these stakeholders to be non-experts who usually lack the technical knowledge needed to design conceptual models and who are most constrained by the stringent requirements of regular enterprise models.

Logic of Linking Data to Propositions. With respect to the data, we make use of the dimensions of method success from the Method Evaluation Model of Moody [15]: actual efficacy, perceived efficacy, and adoption in practice. We measure the actual efficiency (D1) by estimating the cognitive effort as well as the time taken to complete the requirements documentation task. The actual effectiveness (D2) is evaluated by analyzing to what extent the result meets the quality criteria of the QUSF. The perceived ease of use (D3), perceived usefulness (D4), and intention to use (D5) are assessed with several items in a post-task survey, as described in [15] and modified based on the specific objectives of the requirements tool. In addition to the items mentioned above, participants can also provide a qualitative explanation for a given answer.

Criteria for Interpreting the Findings. With respect to interpreting the findings, we perform a quantitative data analysis on the results (i.e., D1 to D5), in which statistical tests are used to assess the significance of the propositions (a minimal sketch of such a test is given below). Furthermore, we carry out a qualitative analysis (i.e., thematic analysis) of the textual explanations given by the participants. This method allows us to identify themes and patterns by coding data with a similar meaning. Herewith, we can determine to what extent there is agreement amongst the participants.
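As an illustration of the planned significance testing, the sketch below analyzes hypothetical perceived ease of use scores (D3) against the neutral midpoint of a five-point scale. It is a minimal sketch only: the scores, the choice of a Wilcoxon signed-rank test, and the availability of SciPy are assumptions, and the actual tests will be selected once the measurement scales and sample sizes are known.

```python
# Minimal sketch of the planned quantitative analysis (assumption: perception
# items D3-D5 are collected on a five-point Likert scale; SciPy is available).
import statistics
from scipy import stats

# Hypothetical post-task scores for perceived ease of use (D3), one per participant.
d3_scores = [4, 5, 3, 4, 4, 5, 2, 4]
NEUTRAL = 3  # midpoint of a five-point scale

# One-sample Wilcoxon signed-rank test against the neutral midpoint:
# a significant positive shift would support the proposition that the
# requirements documentation method is perceived as easy to use.
differences = [score - NEUTRAL for score in d3_scores]
result = stats.wilcoxon(differences)
print(f"median D3 = {statistics.median(d3_scores)}, "
      f"W = {result.statistic:.1f}, p = {result.pvalue:.3f}")
```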
2.2. Design Method

Design. A method to design models that are adapted to the specific work context of users is the viewpoint-oriented Enterprise Architecture (EA) approach. This is a flexible approach in which users are allowed to specify personalized views, based on their specific concerns, in order to see only the relevant aspects of the system of interest [16]. As shown in Figure 1, several steps are needed to design a desired viewpoint: first, we document the stakeholder requirements, from which we derive the relevant focus area. Then, the requirements are converted into a viewpoint definition. From there, we create the actual modeling technique by designing a modeling language and a modeling procedure.

For the concerns, we use the requirements collected in RQ1. These requirements give direction to the intentional focus area of the modeling technique that will be designed in our research. The next step is to transform the requirements into a viewpoint definition. According to the IEEE 1471 standard, this definition should at least include a viewpoint name, the intended stakeholders, the intentional focus area, and the method to construct a view based on the viewpoint [17]. In addition, the viewpoint classification framework of Steen et al. [16] supplements the viewpoint definition with the purpose and content dimensions. In our project, we compare the stakeholder requirements with the 25 basic viewpoint definitions of the ArchiMate modeling language [18]. These viewpoint definitions provide useful combinations of layers and aspects for common EA perspectives. Furthermore, ArchiMate is a suitable intermediate language, because it includes several widely used organizational domains and aspects.

After the creation of a viewpoint definition, we transform the chosen ArchiMate viewpoint into a local modeling technique. To design a modeling language with concepts (i.e., elements and relations) and a visualization that best fit the stakeholder concerns, the different EM languages that are relevant for the viewpoint design need to be integrated. In this research, we use the indirect concept mapping method, in which EM languages are mapped to ArchiMate as an intermediate modeling language [16]. A suitable approach for integrating concern-specific, heterogeneous modeling languages is the Enterprise Modeling Integration (EMI) approach [19]. More specifically, two mappings need to take place: one for the concepts and one for the visualization. This is because one viewpoint can have multiple visualizations to serve different stakeholders [16]. An appropriate mapping method is the Query/View/Transformation (QVT) standard, a model-to-model (M2M) transformation approach that transforms source models (i.e., input models) into a target model (i.e., output model) [20].

To complete the modeling technique, a modeling procedure must be generated so that stakeholders can follow concrete steps to set up and use a model correctly. Inspired by the model-to-text (M2T) transformation method, we use the popular template-based approach [20]. A template includes static text, which is fixed (e.g., to express the order in which meta-model elements must be added to a model), and placeholders for dynamic text that is based on the input model (e.g., to express consistency rules for the labels used in a model).
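To make these steps more tangible, the following sketch captures a viewpoint definition with the IEEE 1471 fields and the purpose and content dimensions of Steen et al. [16], and then fills a simple template with static text and placeholders to generate a first modeling-procedure description. All names and values in the example are hypothetical, and the plain Python template stands in for the QVT- and template-based transformations that will actually be used.

```python
from dataclasses import dataclass
from string import Template

@dataclass
class ViewpointDefinition:
    # Fields follow IEEE 1471 (name, stakeholders, concerns, construction method),
    # extended with the purpose and content dimensions of Steen et al.
    name: str
    stakeholders: list
    concerns: list
    construction_method: str
    purpose: str   # e.g., "informing", "deciding", "designing"
    content: str   # e.g., "overview", "coherence", "details"

# Hypothetical example: requirements mapped onto an ArchiMate basic viewpoint
# that serves as the intermediate definition.
viewpoint = ViewpointDefinition(
    name="Mortgage approval process",
    stakeholders=["mortgage advisors", "process owner"],
    concerns=["hand-overs between departments", "supporting applications"],
    construction_method="Derive a view from the chosen ArchiMate basic viewpoint",
    purpose="informing",
    content="coherence",
)

# Template-based model-to-text sketch: static text plus placeholders that are
# filled from the viewpoint definition, yielding a first modeling procedure.
PROCEDURE_TEMPLATE = Template(
    "Modeling procedure for '$name' (purpose: $purpose, content: $content)\n"
    "1. List the concerns to be addressed: $concerns.\n"
    "2. $construction.\n"
    "3. Label every element consistently and review the result with: $stakeholders."
)

procedure = PROCEDURE_TEMPLATE.substitute(
    name=viewpoint.name,
    purpose=viewpoint.purpose,
    content=viewpoint.content,
    concerns=", ".join(viewpoint.concerns),
    construction=viewpoint.construction_method,
    stakeholders=", ".join(viewpoint.stakeholders),
)
print(procedure)
```

In the actual design method, the second step would be realized through QVT mappings between the relevant EM languages and ArchiMate, and the template would additionally encode consistency rules for the element labels.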
Demonstration & Evaluation. As we did for RQ1, we use case study research to demonstrate and subsequently evaluate the designed solution.

Study Question. The study question that we want to answer in this case study is:

• To what extent is the design method applicable to support users in designing conceptual modeling techniques to solve their local business problem?

Propositions. The propositions are related to how end users evaluate the effectiveness and efficiency of the design method:

• P1: The design method is effective
• P2: The design method is efficient

Unit of Analysis. The unit of analysis is an event in which a group of users employs the design method. In this event, the users receive a desired model based on their requirements and use it to solve a particular problem in their work context. A holistic case design is used, as there is only one unit of analysis. As we did for RQ1, we intend to conduct four case studies in the same financial organization. The first case establishes a baseline, which we replicate in a second case study with a similar design. Afterwards, we try to replicate the results in two additional studies: in the third study, we vary the business problem while the profile of the user group remains the same; in the fourth, we vary the profile of the user group and use the same business problem as in the first two case studies. In each case, the event is a group session in which the researcher acts as a ‘fly on the wall’. This allows us to observe the event without disturbing an actual true-to-life performance [21]. In the assignment, the group needs to solve a business problem, based on a fictitious interview in which a user explains the problem to be solved, by using the design method that we provide. Afterwards, the group starts the assignment, and the conceptual model and the proposed solution to the business problem are elaborated on paper.

Logic of Linking Data to Propositions. With respect to the data collection, we make use of the dimensions of method success from the Method Evaluation Model [15]: actual and perceived efficacy and adoption in practice. For the actual efficacy, we use the effectiveness and efficiency variables as described by Bernárdez et al. [22]. These are based on semantic quality (SEMQ), which measures to what extent a designed conceptual model is a faithful representation of a system. The actual effectiveness (D1) is a percentage, measured by dividing the SEMQ of the model under study by the total possible SEMQ score of the reference model. The actual efficiency (D2) is the percentage of time that participants need to solve the business problem compared to the time needed by experts (both measures are illustrated in the sketch at the end of this subsection). The perceived ease of use (D3), perceived usefulness (D4), and intention to use (D5) are assessed with several items in a post-task survey, as described by Moody [15] and modified based on the specific effectiveness and efficiency objectives of the design method. In addition to the above-mentioned items, participants are also asked to provide an explanation for a given answer, which allows us to collect qualitative data. Furthermore, we observe and document the events and actions by making video recordings as well as live notes.

Criteria for Interpreting the Findings. With respect to the interpretation of the findings, we perform a quantitative data analysis on the results (i.e., D1 to D5) and a qualitative analysis (i.e., thematic analysis) of the textual explanations given by the participants.
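The following minimal sketch illustrates the intended computation of D1 and D2 with hypothetical values; the SEMQ scores and times are placeholders for the measurements that will actually be collected.

```python
# Minimal sketch of the objective measures in Sect. 2.2 (hypothetical values).
semq_participant = 14      # semantic quality score of the model under study
semq_reference = 20        # total possible SEMQ score of the reference model
time_participant_min = 45  # time needed by the participant group (minutes)
time_expert_min = 30       # time needed by experts for the same problem (minutes)

actual_effectiveness = semq_participant / semq_reference * 100    # D1, in percent
actual_efficiency = time_participant_min / time_expert_min * 100  # D2, in percent

print(f"D1 actual effectiveness: {actual_effectiveness:.0f}%")
print(f"D2 actual efficiency: {actual_efficiency:.0f}%")
```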
2.3. Model Integration Method

Design. For the integration of locally created conceptual models into an overarching enterprise model landscape, we can partly (re)use the knowledge and experience with the EMI approach gained in RQ2. However, the EMI approach is not sufficient to solve our model consistency problem. We therefore need a more comprehensive method that not only integrates models, but also allows us to analyze and handle possible model inconsistencies. A suitable approach is the model consistency method of Lucas et al. [23], which can handle inconsistencies between any type of model. The method is based on three elements: a transformation language, rewriting logic, and a computer-aided software engineering tool that is able to provide appropriate feedback during modeling. An alternative approach based on semantic technologies and model-to-graph transformations has been proposed in [24, 25].

A suitable model transformation language, which we have already used in RQ2, is the QVT Relations language, which allows us to specify a set of relations that must hold between modeling languages [26]. For the specification of the mandatory relations, Lantow et al. [7] propose to use ontologies. Ontologies provide a way to represent knowledge about object sorts, object properties, and relations between objects within one or between several focal areas, and are therefore considered suitable for consistency checking in the EM domain [7]. Rewriting logic makes it possible to verify model consistency by specifying it as a rewrite theory, which consists of a static and a dynamic part. The static part involves equational logic; the dynamic part comprises rewriting rules to manage consistency problems. These rules check to what extent the consistency relationships hold and, if not, provide possible solutions to handle the inconsistencies (a strongly simplified illustration is given at the end of this design description).

Subsequently, the local models can be integrated into an organization’s central repository. For this, we can make use of an Architecture Repository [27], which contains the information, associated specifications, and artifacts related to an organization’s EA. The Architecture Repository comprises architectural views of an organization’s state at different moments in time and is described at three levels of granularity: strategic (i.e., a long-term summary of the entire enterprise), segment (i.e., detailed models for specific areas within an enterprise), and capability (i.e., detailed models for the support of specific capabilities). Finally, once a model is integrated into the overarching enterprise model landscape, it can be retrieved and reused by others. For this, we can again (re)use our knowledge and experience with EMI.
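The sketch below approximates, in plain Python, the kind of check a rewriting rule would perform: it verifies one hypothetical consistency relation between a local model and the enterprise model landscape and proposes a repair when the relation is violated. The element kinds, the relation, and the check_referenced_components function are illustrative assumptions; the actual method will specify such relations in the QVT Relations language and evaluate them through a rewrite theory.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    kind: str   # e.g., "BusinessProcess", "ApplicationComponent"
    name: str

# Hypothetical consistency relation, in the spirit of a rewriting rule:
# every application component referenced by a local model must also
# exist in the central enterprise model landscape.
def check_referenced_components(local_model, landscape):
    """Return (inconsistencies, proposed repairs)."""
    landscape_components = {e.name for e in landscape if e.kind == "ApplicationComponent"}
    inconsistencies, repairs = [], []
    for element in local_model:
        if element.kind == "ApplicationComponent" and element.name not in landscape_components:
            inconsistencies.append(f"'{element.name}' is unknown in the landscape")
            repairs.append(f"add ApplicationComponent '{element.name}' to the repository "
                           f"or map it to an existing component")
    return inconsistencies, repairs

# Usage with hypothetical models.
local_model = [Element("BusinessProcess", "Approve mortgage"),
               Element("ApplicationComponent", "Mortgage scoring tool")]
landscape = [Element("ApplicationComponent", "CRM system")]

issues, fixes = check_referenced_components(local_model, landscape)
for issue, fix in zip(issues, fixes):
    print(f"Inconsistency: {issue}\nProposed repair: {fix}")
```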
Demonstration & Evaluation. We can compare the performance of a benchmark method with that of our developed method in an experiment, in which participants need to semantically integrate inconsistent models in an experimental context.

Variables & Measures. In the experiment, we want to gain insight into the extent to which the resulting model integration method is effective and efficient. For that, we compare the effectiveness and efficiency of the developed model integration method with those of a benchmark method. The specific benchmark model integration method will be determined based on a literature review. The effectiveness variable relates to how well a participant can perform tasks (i.e., solve inconsistencies) using a model integration method. This can be measured objectively by the number of inconsistencies resolved by participants in an experimental task [28]. The efficiency variable can be seen as the effort required to complete the tasks, which can be measured by the time participants need to resolve the inconsistencies [28]. The subjective measures will be collected by means of a questionnaire after the experiment.

Hypotheses. Since we expect the developed model integration method to be more effective and efficient than the benchmark, we define the following alternative hypotheses:

• H1-effectiveness: Participants resolve more model inconsistencies using the developed model integration method than using the benchmark method.
• H1-efficiency: Participants need less time to resolve model inconsistencies using the developed model integration method than using the benchmark method.

Instrumentation, Experimental Tasks, and Participants. We will use a between-subjects design, in which one group is given an assignment to solve model inconsistencies using the benchmark model integration method and the other group uses the developed model integration method. To preserve external validity, we simulate a real-life setting by using a realistic assignment and offering it to participants in a way that fits their work practice: digitally, in an online environment. Further, we conduct the experiment with actual end users, who will be identified based on the targeted profiles used for RQ1 and RQ2. If we do not reach the desired sample size, we supplement the participants with MSc students. In that case, we will test the results of the different groups for confounding factors.

3. Outlook

This research project will be executed over the coming five years. Research results will be communicated through scholarly and professional publications to share the resulting knowledge. In particular, the answers to the specific RQs will be elaborated in conference and journal papers in the Information Systems field.

References

[1] K. Sandkuhl, H. Fill, S. Hoppenbrouwers, et al., From expert discipline to common practice: A vision and research agenda for extending the reach of enterprise modeling, Bus. Inf. Syst. Eng. 60 (2018) 69–80. doi:10.1007/s12599-017-0516-y.
[2] D. Karagiannis, H. Mayr, J. Mylopoulos, Domain-Specific Conceptual Modeling, Springer International Publishing, Cham, 2016.
[3] M. Lankhorst, Enterprise Architecture at Work, Springer, Berlin, Heidelberg, 2017. doi:10.1007/978-3-662-53933-0.
[4] G. Guizzardi, H. A. Proper, On understanding the value of domain modeling, in: G. Guizzardi, T. P. Sales, C. Griffo, M. Fumagalli (Eds.), Value Modelling and Business Ontologies (VMBO 2021), volume 2835, CEUR-WS.org, 2021, paper 6. URL: http://ceur-ws.org/Vol-2835/paper6.pdf.
[5] McKinsey, Ten practical ideas for organizing and managing your enterprise architecture, 2015. URL: https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/ten-practical-ideas-for-organizing-and-managing-your-enterprise-architecture.
[6] Capgemini, Enterprise architecture management, 2019. URL: https://www.capgemini.com/de-de/resources/enterprise-architecture-management-en.
[7] B. Lantow, K. Sandkuhl, M. Fellmann, Visual language and ontology based analysis: Using OWL for relation discovery and query in 4EM, in: W. Abramowicz, R. Alt, B. Franczyk (Eds.), Business Information Systems Workshops, Springer International Publishing, Cham, 2016, pp. 23–35. doi:10.1007/978-3-319-52464-1_3.
[8] BiZZdesign, The state of enterprise architecture, 2021. URL: https://go.bizzdesign.com/lp/white-paper-state-of-EA-report-p?utm_source=google&utm_medium=cpc&utm_term=bizzdesign&utm_content=504901685464&utm_campaign=Search-randed&gclid=EAIaIQobChMI7pDx3tX39QIVzJ13Ch3kNgqDEAAYASABEgIZPPD_BwE.
[9] J. Krogstie, V. Dalberg, S. M. Jensen, Process modeling value framework, in: Y. Manolopoulos, J. Filipe, P. Constantopoulos, J. Cordeiro (Eds.), Enterprise Information Systems, Springer, Berlin, Heidelberg, 2008, pp. 309–321. doi:10.1007/978-3-540-77581-2_21.
[10] A. Hevner, S. Gregor, Envisioning entrepreneurship and digital innovation through a design science research lens: A matrix approach, Information & Management (2020) 103350. doi:10.1016/j.im.2020.103350.
[11] F. Dalpiaz, P. Gieske, A. Sturm, On deriving conceptual models from user requirements: An empirical study, Information and Software Technology 131 (2021) 106484. doi:10.1016/j.infsof.2020.106484.
[12] G. Lucassen, F. Dalpiaz, J. van der Werf, et al., Improving agile requirements: the quality user story framework and tool, Requirements Eng. 21 (2016) 383–403. doi:10.1007/s00766-016-0250-x.
[13] O. Lindland, G. Sindre, A. Sølvberg, Understanding quality in conceptual modeling, IEEE Software 11 (1994) 42–49. doi:10.1109/52.268955.
[14] R. Yin, Case Study Research: Design and Methods, Sage Publications, Thousand Oaks, CA, 2013.
[15] D. Moody, The method evaluation model: a theoretical model for validating information systems design methods, in: C. Ciborra, R. Mercurio, M. De Marco, M. Martinez, A. Carignani (Eds.), New Paradigms in Organizations, Markets and Society: Proceedings of the 11th European Conference on Information Systems (ECIS 2003), Department of Information Systems, London School of Economics, 2003, pp. 1–17. URL: https://aisel.aisnet.org/ecis2003/.
[16] M. Steen, D. Akehurst, H. ter Doest, M. Lankhorst, Supporting viewpoint-oriented enterprise architecture, in: Proceedings of the Eighth IEEE International Enterprise Distributed Object Computing Conference (EDOC 2004), 2004, pp. 201–211. doi:10.1109/EDOC.2004.1342516.
[17] IEEE, IEEE Recommended Practice for Architectural Description of Software-Intensive Systems, IEEE Std 1471-2000 (2000) 1–30. doi:10.1109/IEEESTD.2000.91944.
[18] The Open Group, ArchiMate® 3.1 Specification, 2019.
[19] S. Zivkovic, H. Kühn, D. Karagiannis, Facilitate modelling using method integration: An approach using mappings and integration rules, in: ECIS, 2007, pp. 2038–2049. URL: https://aisel.aisnet.org/ecis2007/122.
[20] N. Kahani, M. Bagherzadeh, J. R. Cordy, J. Dingel, D. Varró, Survey and classification of model transformation tools, Softw. Syst. Model. 18 (2019) 2361–2397. doi:10.1007/s10270-018-0665-6.
[21] F. Shull, J. Singer, D. I. K. Sjøberg, Guide to Advanced Empirical Software Engineering, Springer, London, 2008. doi:10.1007/978-1-84800-044-5.
[22] B. Bernárdez, A. Durán, J. A. Parejo, N. Juristo, A. Ruiz-Cortés, Effects of mindfulness on conceptual modeling performance: A series of experiments, IEEE Transactions on Software Engineering 48 (2022) 432–452. doi:10.1109/TSE.2020.2991699.
[23] F. J. Lucas, F. Molina, A. Toval, A systematic review of UML model consistency management, Information and Software Technology 51 (2009) 1631–1645. doi:10.1016/j.infsof.2009.04.009.
[24] D. Karagiannis, R. A. Buchmann, D. Bork, Managing consistency in multi-view enterprise models: an approach based on semantic queries, in: 24th European Conference on Information Systems (ECIS 2016), Istanbul, Turkey, June 12–15, 2016, Research Paper 53.
[25] M. Smajevic, D. Bork, From conceptual models to knowledge graphs: A generic model transformation platform, in: ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS 2021 Companion), Fukuoka, Japan, October 10–15, 2021, IEEE, 2021, pp. 610–614. doi:10.1109/MODELS-C53483.2021.00093.
[26] Object Management Group, Meta Object Facility (MOF) 2.0 Query/View/Transformation Specification, 2017.
[27] The Open Group, TOGAF Version 9.2, 2018.
[28] A. Gemino, Y. Wand, A framework for empirical evaluation of conceptual modeling techniques, Requir. Eng. 9 (2004) 248–260. doi:10.1007/s00766-004-0204-6.