Report on the 1st International Workshop on Debugging in Model-Driven Engineering (MDEbug'17)

Simon Van Mierlo∗, Erwan Bousse†, Hans Vangheluwe∗‡§, Manuel Wimmer†, Clark Verbrugge‡, Martin Gogolla¶, Matthias Tichy‖ and Arnaud Blouin††

∗ University of Antwerp, Belgium. Email: simon.vanmierlo@uantwerpen.be, hans.vangheluwe@uantwerp.be
† TU Wien, Austria. Email: erwan.bousse@tuwien.ac.at, wimmer@big.tuwien.ac.at
‡ McGill University, Canada. Email: clump@cs.mcgill.ca
§ Flanders Make vzw, Belgium
¶ University of Bremen, Germany. Email: gogolla@informatik.uni-bremen.de
‖ University of Ulm, Germany. Email: matthias.tichy@uni-ulm.de
†† INSA Rennes, France. Email: arnaud.blouin@irisa.fr

Abstract—System developers spend a significant part of their time debugging systems (i.e., locating and fixing the cause of failures observed through verification and validation (V&V)). While V&V techniques are commonly used in model-driven engineering, locating and fixing the cause of a failure in a modelled system is most often still a manual task without tool support. Although debugging techniques are well-established for programming languages, only a few debugging techniques and tools for models have been proposed. Debugging models faces various challenges: handling a wide variety of models and modelling languages; adapting debugging techniques initially proposed for programming languages; and tailoring debugging approaches to the domain expert using the abstractions of the considered language. The aim of the first edition of the MDEbug workshop was to bring together researchers wanting to contribute to the emerging field of debugging in model-driven engineering by discussing new ideas and compiling a research agenda. This paper summarizes the workshop's discussion session and distils a list of challenges that should be addressed in future research.

I. INTRODUCTION

Model-Driven Engineering (MDE) aims at coping with the complexity of systems by separating concerns through the use of models, each representing a particular aspect of a system. In the past years, significant effort has been directed towards providing early verification and validation (V&V) techniques to determine whether or not a set of models fulfils a set of properties (e.g., [1], [2], [3], [4]). By identifying the properties that are not satisfied, such techniques are able to discover and observe the failures (or bugs) of a system. Yet, once a failure has been observed, it is then necessary to identify why the failure occurs (i.e., the defect that caused the failure), and how to modify the models to remove the cause of the failure. These two tasks constitute the core of the debugging activity [5], [6].

To illustrate this activity, Figure 1 presents an overview of a typical system design process. First, a set of properties that the system has to satisfy is defined. Then, a system is designed as a collection of models that must satisfy these properties. To check that the properties are satisfied by the models, a wide range of verification and validation (V&V) techniques are available, such as theorem proving, symbolic execution, model checking, real-time simulation, and testing. Depending on the approach, it might be found that a property is satisfied (“pass”), not satisfied¹ (“failure”), or that the result is inconclusive (“unknown”), for example when the chosen technique cannot prove the property in a reasonable time frame. At that point, however, the cause of the failure (also called the defect) must still be identified (i.e., which parts of the models are causing the observed failure). A failure may also be observed if the properties were wrongly specified, in which case the properties themselves have to be fixed. Once identified, the cause of a failure must be fixed by changing either the models or the properties.

¹ Note that it is also possible to consider potential failures when the considered V&V technique may give false positives (e.g., static code analysis).

[Figure 1. The debugging activity in the system design process.]

Locating and fixing the cause of a failure can be accomplished manually given a good understanding of the models. A wide range of debugging techniques can be used to assist developers in finding the cause of the problem. For example, interactive debugging techniques [7], [8], [9] can be used to observe and control the execution of behavioural models in an interactive fashion (e.g., using breakpoints, stepping operators, or by inspecting properties). Other techniques may provide (semi-)automated fault localization, for instance using symbolic execution [10] or model slicing [11]. More recently, omniscient debugging techniques [12], [13] allow one to explore execution states both backwards and forwards, and live modelling [14] allows one to change the model and immediately observe the effect.
II. WORKSHOP CONTEXT AND GOALS

While debugging techniques and tools are very common for programming languages, very few debugging tools and techniques are available when it comes to models. Hence, modellers often have to resort to ad-hoc methods, such as inspecting and debugging the code generated from models. Although this allows the reuse of established and well-researched program debugging techniques, it is not ideal since the developer has to switch contexts and must understand both the mapping to and the semantics of the underlying implementation language. Dedicated debugging support for modelling languages has the potential to reduce or eliminate the need for this kind of context switching, and is an essential part of allowing a developer to remain in the modelling paradigm throughout the full development process.

In this context and scope, the goals of the MDEbug workshop were to:
• bring together interested researchers to optimize the research effort and establish collaborations;
• provide a forum for researchers to share new experiences, ideas and early results on the topic of debugging in model-driven engineering;
• define the scope of debugging within model-driven engineering;
• identify gaps in the current body of research.

III. PROGRAM

The full-day workshop took place on September 17, 2017 as part of the satellite events of the ACM/IEEE 20th International Conference on Model Driven Engineering Languages and Systems (MODELS 2017) in Austin, Texas. The workshop started with a keynote presented by Andrei Chiş from feenk on the topic of Moldable Debugging. We received six submissions, of which five were accepted after a review process in which each paper was reviewed by at least three members of the program committee. One of the accepted submissions was a full research paper, two were tool demonstration papers, and two were position papers, including one by the keynote speaker that summarized his keynote presentation. The complete list of papers can be found below, in the order of their presentation at the workshop:
1) “Moldable Debugging (Position paper)” by Andrei Chiş and Tudor Gîrba
2) “Transformations Debugging Transformations” by Maris Jukss, Clark Verbrugge and Hans Vangheluwe
3) “Towards Debugging the Matching of Henshin Model Transformations Rules (Position paper)” by Matthias Tichy, Luis Beaucamp and Stefan Kögel
4) “Domain-Level Debugging for Compiled DSLs with the GEMOC Studio (Tool demonstration)” by Erwan Bousse, Tanja Mayerhofer and Manuel Wimmer
5) “Debugging Non-Determinism: a Petrinets Modelling, Analysis, and Debugging Tool (Tool demonstration)” by Simon Van Mierlo and Hans Vangheluwe

The afternoon session of the workshop was reserved for an interactive discussion on the broad topic of debugging in model-driven engineering. We summarize these discussions in Section V. All information can be found on the workshop website: https://msdl.uantwerpen.be/conferences/MDEbug. This includes the slides of all presentations given at the workshop.

IV. PROGRAM COMMITTEE

The program committee of MDEbug 2017 was composed of 27 researchers and experts in the domains of modelling, debugging, and model execution, coming from 12 different countries. We sincerely thank the program committee members and external reviewers for their time in reviewing and discussing the submitted papers.

• Mauricio Alférez, Siemens, Belgium
• Shaukat Ali, Simula Research Laboratory, Norway
• Vasco Amaral, Universidade Nova de Lisboa, Portugal
• Reda Bendraou, Université Paris Ouest Nanterre la Défense, France
• Francis Bordeleau, CMind, Canada
• Benoit Combemale, IRISA, Université de Rennes 1, France
• Jonathan Corley, University of West Georgia, USA
• Andrea d'Ambrogio, University of Rome Tor Vergata, Italy
• Julien Deantoni, University of Nice-Sophia Antipolis, France
• Davide Di Ruscio, University of L'Aquila, Italy
• Holger Giese, Hasso-Plattner-Institut, Germany
• Martin Gogolla, University of Bremen, Germany
• Jeff Gray, University of Alabama, USA
• Robert Heinrich, Karlsruhe Institute of Technology, Germany
• Sebastian Herzig, Caltech/Jet Propulsion Laboratory, USA
• Levi Lucio, Fortiss, Germany
• Tanja Mayerhofer, TU Wien, Austria
• Tim Molderez, Vrije Universiteit Brussel, Belgium
• Patrizio Pelliccione, Chalmers University of Technology and University of Gothenburg, Sweden
• Arend Rensink, Universiteit Twente, The Netherlands
• Bran Selic, Malina Software Corporation, Canada
• Eugene Syriani, University of Montreal, Canada
• Jérémie Tatibouët, CEA, France
• Massimo Tisi, Institut Mines-Telecom Atlantique, France
• Javier Troya, University of Seville, Spain
• Antonio Vallecillo, Universidad de Málaga, Spain
• Tijs van der Storm, Centrum Wiskunde & Informatica (CWI), The Netherlands

V. SUMMARY OF THE DISCUSSIONS

This section summarizes the content of the discussions that took place during the workshop, in the form of a list of topics. Each topic covers an aspect of debugging in model-driven engineering that has open research challenges. Therefore, these challenges can be used as a basis for future research, or for organizing discussions during a future workshop.

A. Defining the state of the art in software debugging

Debugging is not limited to MDE. Debugging embraces various activities, techniques, and application domains in software engineering prior to MDE, such as debugging object-oriented programs. Yet, to the best of our knowledge, no recent state of the art has been published on debugging in software engineering. The interested reader has to explore the literature of various techniques applied in a debugging context, such as program slicing.

A possible research direction is to design a comprehensive survey of debugging in software engineering. The outcome of such work would make it possible to clearly identify the challenges that are shared with, or specific to, debugging in MDE.

B. Clarifying debugging terminology, classifying approaches

Due to a rather rich history in the field of computer science, the words bug, debugging, and debugger have been used in a large number of contexts with diverse meanings. For instance, some call the wrong behaviour of a program a “bug”, while others consider the “bug” to be the piece of code responsible for the misbehaviour. Likewise, while “debugging” literally means “removing a bug”, in some contexts it means “finding the bug”, while in others it means “executing the model/program in an interactive debugger”. Lastly, the word “debugger” is used on a daily basis to designate an interactive debugger (e.g., gdb, or the Eclipse Java debugger), while more generally it can mean “a tool that can be used for debugging”. Overall, to avoid confusion, the community must either agree on a precise terminology, or at least acknowledge that these words may convey different meanings depending on the context.

In addition to vocabulary issues, there is some confusion about the different kinds of debugging approaches that exist, and about what qualifies as debugging, understanding, or neither. For instance, using an interactive debugger to explore the different states of a behavioural model step by step can be useful both for debugging (i.e., finding and fixing a bug) and simply for better understanding the model. Likewise, a model slicer can be used to debug a model but is commonly not called a debugger, nor is model slicing classified as a debugging technique, since it can be used for other purposes as well. It would be of great benefit for the community to reach a classification of possible debugging techniques and to understand the similarities and differences between them (e.g., interactive or not, automated or manual, static or dynamic, language-specific or generic). The consensus at the workshop seemed to be that any technique used during the “debugging phase” (denoted by the dashed orange line in Figure 1) can be called a debugging technique. Among many others, this includes print statements, interactive debugging, and model slicing. It is not so much the technique, but rather the intent of its use, that defines it as a debugging technique.

C. Abstraction gap and translational semantics

Semantics for modelling languages can be defined in a variety of ways [15]. One approach is to develop a transformation that maps models of the language onto another language with known semantics. A popular example is a code generator that generates program code implementing the model's semantics. Models can be debugged naively by using an interactive debugger for the programming language to debug the generated code, or the model interpreter in the case of execution by interpretation. The “semantic gap” between the source and target language, however, obscures the abstractions of the source language; the debugging is not performed at the same abstraction level as the development. Certain approaches reuse the target language's interactive debugger and translate debugging operations on the source language forward onto debugging operations of the target language, and results of the target language interactive debugger backward onto results of the source language (e.g., [16]). The semantic gap, however, makes it difficult to translate debugging results back onto abstractions of the source language.

In a general sense, the mapping between source and target languages is not always one-to-one. One open problem is how to handle such situations. Should the target language concepts be available to the modeller? Should additional domain concepts be defined? Or should one consider that there is a bug in the translational semantics themselves?
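To make this forward/backward translation concrete, the following minimal Python sketch wraps a target-level (code) debugger behind a model-level façade, using a traceability map assumed to be produced by the code generator. All names are hypothetical and only illustrate the idea; they are not the API of any actual tool.

    class ModelLevelDebugger:
        def __init__(self, target_debugger, trace_map):
            self.target = target_debugger          # hypothetical code-level debugger
            self.to_code = trace_map["to_code"]    # model element -> generated code lines
            self.to_model = trace_map["to_model"]  # code line -> model element

        def set_breakpoint(self, model_element):
            # Forward translation: one model element may map to several
            # generated lines, since the mapping is rarely one-to-one.
            for line in self.to_code[model_element]:
                self.target.break_at(line)

        def current_model_location(self):
            # Backward translation: report the execution location in terms
            # of the source model rather than the generated code. A result
            # of None reveals a gap in the traceability information.
            return self.to_model.get(self.target.current_line())

Such a façade makes the open problems above tangible: when to_model has no entry for the current code line, the debugger must either expose target-language concepts to the modeller or report a hole in the translational semantics.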
D. Impact of the observation on the execution

While some approaches reuse a target language interactive debugger to provide interactive debugging for modelling languages with translational semantics, instrumentation offers another way of debugging models. For instance, it is possible to instrument the model with specific “traps” that interrupt the normal flow of execution to provide source language interactive debugging operations [17]. In the case of operational semantics, interactive debugging or generic control operations can also be added directly to the interpreter [8], [18], [9]. In both cases, however, the instrumentation might interfere with the normal execution semantics of the model. This means bugs can appear and/or disappear after instrumentation, or due to the instrumentation itself. In this case, bug-preserving instrumentation methods appear to be necessary [19].
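As a self-contained illustration, the following Python sketch weaves a debugging “trap” into a toy execution loop; the toy semantics (a fixed sequence of named steps) and all names are invented, and only serve to show where instrumentation intrudes on the execution.

    def run_instrumented(steps, breakpoints, on_trap):
        # Execute a toy model as a sequence of named steps. Before each
        # step marked as a breakpoint, a "trap" hands control to the
        # debugger callback. The trap itself perturbs the execution
        # (e.g., its timing), which is exactly the interference discussed above.
        state = {}
        for name, effect in steps:
            if name in breakpoints:
                on_trap(name, state)
            effect(state)
        return state

    # Usage: trap whenever step "b" is about to execute.
    steps = [("a", lambda s: s.update(x=1)),
             ("b", lambda s: s.update(x=s["x"] + 1))]
    run_instrumented(steps, {"b"}, lambda name, s: print("trap at", name, s))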
E. Debugging forward or backward

Interactive debugging is usually performed forwards: from an initial state, one explores the execution trace according to the semantics of the considered modelling language. It is possible to extend this approach to allow the developer to step backwards in an execution trace, leading to so-called “omniscient debugging” [12], [13]. Being able to interactively explore execution states both forward and backward, hence allowing the developer to freely explore the state space, yields many benefits regarding usability, as it may prevent having to restart the complete execution to revisit some suspect state.

To extend the possibilities of omniscient debugging even further, an interesting research direction is to study the potential for starting from a faulty model state and then computing the possible past execution paths that can lead up to that state, thereby debugging entirely backwards. This is related to program or model slicing, where the debugger produces the set of statements that can affect the values of a run-time state.
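The core mechanism behind omniscient debugging can be sketched in a few lines of Python: every step stores a copy of the run-time state, so the developer can step backwards as well as forwards without restarting. This naive snapshotting is only an illustration; practical approaches store compact, incremental traces instead [12], [13].

    import copy

    class OmniscientTrace:
        def __init__(self, initial_state):
            self.history = [copy.deepcopy(initial_state)]
            self.cursor = 0

        def record(self, new_state):
            # Called after each execution step; discards any "redo"
            # branch beyond the cursor before appending the snapshot.
            self.history = self.history[:self.cursor + 1]
            self.history.append(copy.deepcopy(new_state))
            self.cursor += 1

        def step_back(self):
            self.cursor = max(0, self.cursor - 1)
            return self.history[self.cursor]

        def step_forward(self):
            self.cursor = min(len(self.history) - 1, self.cursor + 1)
            return self.history[self.cursor]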
F. Debugging in the presence of black-box components

Debuggers may assume that the model source (a text document, diagram, etc.) is available. This might not always be true. In certain cases, the source of the model might be hidden and provided as a compiled “black box”. If so, the control an interactive debugger can exert over such a black box is limited: in program interactive debugging, these components are often ignored (or “stepped over”) and assumed to be correct. Failures that only appear when composing or integrating black-box components can be difficult to debug: the failure might actually be caused by an interaction in one of the components that is impossible to detect without observation of the state and control of the execution. If it is impossible to provide the full source (e.g., in the case of intellectual property protection), a grey-box approach might provide an interface to the component that allows a debugger to interact with it. This requires finding a balance between exposing enough detail and protecting the full details of the model.
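One way to picture such a grey-box interface is the following Python sketch, in which the vendor curates which state variables the debugger may observe. The component, its state, and the exposed set are all hypothetical.

    class BlackBoxComponent:
        # Stands in for a compiled component whose model source is not shipped.
        def __init__(self):
            self._state = {"mode": "idle", "queue": [], "secret_param": 42}

        def advance(self):
            self._state["mode"] = "busy"    # may hide many internal steps

    class GreyBoxDebugInterface:
        EXPOSED = {"mode"}                  # vendor-selected observable variables

        def __init__(self, component):
            self._component = component

        def step(self):
            self._component.advance()       # one externally visible step

        def observable_state(self):
            # Only a curated projection of the internal state is revealed.
            full = self._component._state
            return {k: v for k, v in full.items() if k in self.EXPOSED}

    box = GreyBoxDebugInterface(BlackBoxComponent())
    box.step()
    print(box.observable_state())           # {'mode': 'busy'}; 'secret_param' stays hidden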
G. Usability and fitness to the domain

Debuggers must be developed for developers and modellers, who are often domain experts with no experience with the technologies used to implement the modelling environments that they use. The usability of a debugger is therefore a key issue and may cover many aspects. In the case of an interactive debugger, such aspects may include the following:

• The representation of the current execution state has to fit the mental model of the developer. For debugging visual languages, this representation can be close to the visual abstractions provided by the language. In the presence of additional run-time information, however, additional abstractions may have to be provided.
• Similarly, not only the debugging state but also the definitions of breakpoints need to align with the language. Here, domain-specific condition languages on states and state sequences need to be provided, which enable the domain expert to easily specify conditions on the state according to domain-specific needs (a small sketch of such a condition is given at the end of this subsection).
• The debugging operations provided have to fit the needs of the developers. They have to fit the developer's mental model of the language's semantics: a logical “step” does not necessarily correspond to a debugger or simulator “step”.
• Suitable representations for the execution traces and their states have to be found. Especially for concurrent, non-deterministic formalisms and/or acausal languages, these might differ significantly from the usual sequential representation.

Furthermore, while the usual concepts of interactive debugging (state, steps, breakpoints, etc.) are obvious to software engineers and can easily be used for models as well (e.g., [16], [20], [12]), domain experts might not be well-versed in using interactive debuggers. For example, a debugger for a modelling language might provide interactive tutorials and similar features to enable domain experts to quickly learn how to debug the model. In essence, this usability aspect needs to be evaluated with respect to the needs of the domain experts. While there is already existing and encouraging work on domain-specific debugging [21], [22], usability is a key challenge for all debugging approaches.
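To illustrate the breakpoint aspect above, the following Python sketch expresses a breakpoint as a predicate over domain-level state rather than over a line of generated code. The traffic-light domain and all names are invented for illustration.

    # "Break when both directions show green": a domain-level safety
    # condition, meaningful to the domain expert.
    def both_green(state):
        return state["ns_light"] == "green" and state["ew_light"] == "green"

    def maybe_pause(state, conditions):
        # Checked by the execution engine after every step.
        if any(cond(state) for cond in conditions):
            print("breakpoint hit:", state)   # here: hand control to the user

    maybe_pause({"ns_light": "green", "ew_light": "green"}, [both_green])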
H. Reuse of debuggers among languages

Many formalisms share common aspects: syntax, semantics, visualization. In language engineering, efforts are made to reuse different parts of language definitions [23], [24]. Likewise, efforts can be made to share tools, such as debuggers, among different languages [25], [12]. For instance, interactive debuggers often share similar features and concepts, such as steps, breakpoints, state inspection, or state manipulation. One research direction is therefore to investigate how different parts and the logic of debuggers can be reused and shared to avoid the effort of implementing a new debugger from scratch for each new language. General requirements for a common debugger interface are also interesting, for example in situations where different debuggers are available for one modelling language and a project wants to change the debugger.
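One possible shape for such a common interface is sketched below in Python: a generic front-end is written once against a small protocol, and each modelling language supplies its own implementation. The protocol is hypothetical and merely indicates the kind of contract a shared debugger could rely on.

    from typing import Any, List, Protocol

    class DebuggableLanguage(Protocol):
        def initial_state(self, model: Any) -> Any: ...
        def possible_steps(self, state: Any) -> List[Any]: ...
        def do_step(self, state: Any, step: Any) -> Any: ...

    def run_to_breakpoint(lang: DebuggableLanguage, model: Any, condition):
        # Generic "run until condition" logic, reusable across languages.
        state = lang.initial_state(model)
        while not condition(state):
            steps = lang.possible_steps(state)
            if not steps:
                break                       # execution finished
            state = lang.do_step(state, steps[0])
        return state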
I. Interactive debugging for declarative languages

A declarative language generally provides concepts that do not explicitly show or define in which order a conforming model will be executed. A prime example is a declarative model transformation: it defines the relation between input elements and output elements, but does not define how the transformation is executed. The engine responsible for executing the transformation is, however, defined operationally. To debug such declarative languages interactively, one can ask whether those operational semantics have to be exposed to the developer (in the form of steps, for example), and if so, to what extent. Since the language is declarative, the developer does not necessarily know its operational semantics, and exposing them might require a mental leap. Alternatively, debugging operations that are far away from the operational semantics but closer to the mental model of the developer might be more appropriate (e.g., showing all maximal parts of a declarative model transformation that are still applicable, in order to understand why the full transformation is not applicable). This would lead to debugging functionality that considers the constituents of the declarative model (e.g., debugging the formulas that are part of the declarative model).

J. Debugging hardware systems

While many systems are deployed as software, synthesis to hardware is also possible. When a failure occurs in a synthesized hardware part, it can be necessary to debug this part to identify the cause of the failure, using physical probes to read the state and control of the system. Many tools and techniques exist for monitoring, inspecting, and analysing hardware. While we currently often transpose software debugging concepts, a valuable line of research could look at hardware debugging techniques and transpose them to models.

K. Debugging structural models

Debugging is often defined as finding the cause of some observed faulty behaviour of a system, and changing the system to avoid this behaviour in the future. This suggests that debugging only makes sense for behavioural models, and therefore for modelling languages with execution semantics implemented by a code generator or an interpreter.

Yet, a wide range of models focus only on structural aspects of systems, and such models may also contain errors. For example, a metamodel can be considered faulty and too restrictive if a supposedly valid model does not conform to this metamodel. Another example is a structural model of a building that does not conform to a regulation. “Debugging” such models would consist in searching for static constraints that are not well defined. This might require query and expression support to ask the conformance checker why certain constraints were or were not violated. Analysis of descriptive elements, like constraints expressed as formulas, must be provided (e.g., pointing to the failing subformula when a conjunction fails). A possible research direction would be to study or define debugging approaches fully dedicated to structural models.
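A tiny Python sketch shows the kind of answer such a conformance checker could give: instead of a bare pass/fail verdict, it points to the failing conjuncts of a constraint. The building model and constraints are invented examples.

    def check_all(named_constraints, model):
        # Evaluate a conjunction of constraints and report the failing
        # subformulas by name, rather than returning a bare verdict.
        failures = [name for name, pred in named_constraints if not pred(model)]
        return ("pass", []) if not failures else ("failure", failures)

    building = {"floors": 12, "exits_per_floor": 1}
    constraints = [
        ("max_12_floors", lambda m: m["floors"] <= 12),
        ("at_least_2_exits_per_floor", lambda m: m["exits_per_floor"] >= 2),
    ]
    print(check_all(constraints, building))  # ('failure', ['at_least_2_exits_per_floor'])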
L. Debugging in the context of faulty semantics

While debugging commonly focuses on finding problems inside models, another possible cause of failures is a problem in the semantics of the considered language (e.g., errors in the interpreter or in the code generator). Faulty semantics can lead to invalid or inconsistent debugging results. This can have different causes, such as:

• The semantics is correctly specified, but wrongly implemented, which means that the semantics itself must be debugged.
• The semantics is incorrectly specified, which means that the set of properties that specify the language must be clarified.
• The semantics is both correctly specified and implemented, but is not well understood by the modeller. This is a subtler problem, and one where debugging techniques can be useful for “model understanding”, or in this case “semantics understanding”, possibly by exposing certain details of the semantics in the debugging process.

M. Identifying “innocent” model elements

Debugging aims at searching for the cause of some observed failures within models or properties. In other words, the goal is to identify the “guilty” model elements that cause trouble. However, it is equally important in the debugging process to identify the “innocent” model elements that definitively do not contribute to the fault. For instance, when using general-purpose programming languages, it is common to comment out portions of code to see whether the failure is still present without these portions. In a more advanced fashion, unit testing aims at testing the different parts of a system independently, for instance using partial and trusted implementations called stubs, in order to mark as many units as possible “innocent”, and to eventually isolate the guilty unit responsible for a failure.

VI. CONCLUSION AND ACKNOWLEDGEMENTS

While verification and validation is a necessary step to identify the defects in models, debugging is a crucial activity when it comes to dealing with such defects to improve the quality of models. The International Workshop on Debugging in Model-Driven Engineering (MDEbug) aims at providing an event for the community to share ideas and results in this research area. This first edition was well attended throughout the day, and the afternoon discussions were both lively and constructive, leading to a wide range of potential research topics. In addition, we observed that debugging was quite a recurrent topic throughout the presentations of the MODELS 2017 conference, which strengthens our belief that it is an important research direction for the success of MDE.

We thank everyone who contributed to the success of the workshop, including the program committee members, the authors, our keynote speaker, and everyone who attended the workshop or took part in the discussions.

REFERENCES

[1] S. Gabmeyer, P. Kaufmann, M. Seidl, M. Gogolla, and G. Kappel, “A feature-based classification of formal verification techniques for software models,” Software & Systems Modeling, Mar 2017. [Online]. Available: https://doi.org/10.1007/s10270-017-0591-z
[2] F. Hilken, M. Gogolla, L. Burgueño, and A. Vallecillo, “Testing models and model transformations using classifying terms,” Software & Systems Modeling, Dec 2016. [Online]. Available: https://doi.org/10.1007/s10270-016-0568-3
[3] K. Zurowska and J. Dingel, “Language-specific model checking of UML-RT models,” Software & Systems Modeling, vol. 16, no. 2, pp. 393–415, May 2017. [Online]. Available: https://doi.org/10.1007/s10270-015-0484-y
[4] E. Planas, J. Cabot, and C. Gómez, “Lightweight and static verification of UML executable models,” Computer Languages, Systems & Structures, vol. 46, pp. 66–90, 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1477842415300361
[5] P. A. Kidwell, “Stalking the Elusive Computer Bug,” IEEE Annals of the History of Computing, vol. 20, no. 4, pp. 3–7, 1998.
[6] A. Zeller, Why Programs Fail, 1st ed. Elsevier, 2004. [Online]. Available: http://www.whyprogramsfail.com/
[7] N. Bandener, C. Soltenborn, and G. Engels, “Extending DMM Behavior Specifications for Visual Execution and Debugging,” in Proceedings of the Third International Conference on Software Language Engineering (SLE’10), vol. 6563 LNCS. Springer Berlin Heidelberg, 2010, pp. 357–376.
[8] T. Mayerhofer, P. Langer, and G. Kappel, “A runtime model for fUML,” in Workshop on Models@run.time (MRT). ACM, 2012, pp. 53–58.
[9] S. Van Mierlo, Y. Van Tendeloo, and H. Vangheluwe, “Debugging Parallel DEVS,” SIMULATION, vol. 93, no. 4, pp. 285–306, 2017. [Online]. Available: http://dx.doi.org/10.1177/0037549716658360
[10] J. Schönböck, G. Kappel, M. Wimmer, A. Kusel, W. Retschitzegger, and W. Schwinger, “Tetrabox - a generic white-box testing framework for model transformations,” in 2013 20th Asia-Pacific Software Engineering Conference (APSEC), vol. 1, Dec 2013, pp. 75–82.
[11] A. Blouin, B. Combemale, B. Baudry, and O. Beaudoux, “Kompren: Modeling and Generating Model Slicers,” Software and Systems Modeling, Oct. 2012. [Online]. Available: http://hal.inria.fr/hal-00746566/PDF/slicer.pdf
[12] E. Bousse, J. Corley, B. Combemale, J. Gray, and B. Baudry, “Supporting Efficient and Advanced Omniscient Debugging for xDSMLs,” in Proceedings of the 2015 ACM SIGPLAN International Conference on Software Language Engineering, ser. SLE 2015. New York, NY, USA: ACM, 2015, pp. 137–148.
[13] J. Corley, B. P. Eddy, E. Syriani, and J. Gray, “Efficient and scalable omniscient debugging for model transformations,” Software Quality Journal, pp. 1–42, 2016.
[14] R. van Rozen and T. van der Storm, “Toward live domain-specific languages,” Software & Systems Modeling, Aug 2017. [Online]. Available: https://doi.org/10.1007/s10270-017-0608-7
[15] B. Combemale, X. Crégut, P.-L. Garoche, and X. Thirioux, “Essay on Semantics Definition in MDE - An Instrumented Approach for Model Verification,” Journal of Software, vol. 4, no. 9, pp. 943–958, 2009.
[16] H. Wu, J. Gray, and M. Mernik, “Grammar-driven generation of domain-specific language debuggers,” Software: Practice and Experience, vol. 38, no. 10, pp. 1073–1103, 2008.
[17] S. Mustafiz and H. Vangheluwe, “Explicit modelling of statechart simulation environments,” in Proceedings of the 2013 Summer Computer Simulation Conference, ser. SCSC ’13. Vista, CA: Society for Modeling & Simulation International, 2013, pp. 21:1–21:8. [Online]. Available: http://dl.acm.org/citation.cfm?id=2557696.2557720
[18] Y. Laurent, R. Bendraou, and M. Gervais, “Executing and debugging UML models: an fUML extension,” in Symposium on Applied Computing (SAC). ACM, 2013, pp. 1095–1102.
[19] J. Denil, H. Kashif, P. Arafa, H. Vangheluwe, and S. Fischmeister, “Instrumentation and preservation of extra-functional properties of Simulink models,” in Proceedings of the Symposium on Theory of Modeling & Simulation: DEVS Integrative M&S Symposium, ser. DEVS ’15. San Diego, CA, USA: Society for Computer Simulation International, 2015, pp. 47–54. [Online]. Available: http://dl.acm.org/citation.cfm?id=2872965.2872972
[20] A. Krasnogolowy, S. Hildebrandt, and S. Wätzoldt, “Flexible Debugging of Behavior Models,” in International Conference on Industrial Technology (ICIT). IEEE, 2012, pp. 331–336.
[21] A. Chiş, M. Denker, T. Gîrba, and O. Nierstrasz, “Practical domain-specific debuggers using the moldable debugger framework,” Computer Languages, Systems & Structures, vol. 44, pp. 89–113, Dec. 2015.
[22] R. T. Lindeman, L. C. L. Kats, and E. Visser, “Declaratively Defining Domain-Specific Language Debuggers,” in International Conference on Generative Programming and Component Engineering (GPCE). ACM, 2012, pp. 127–136.
[23] T. Degueule, B. Combemale, A. Blouin, O. Barais, and J.-M. Jézéquel, “Melange: A meta-language for modular and reusable development of DSLs,” in Proceedings of the 2015 ACM SIGPLAN International Conference on Software Language Engineering, ser. SLE 2015. New York, NY, USA: ACM, 2015, pp. 25–36. [Online]. Available: http://doi.acm.org/10.1145/2814251.2814252
[24] M. Churchill, P. D. Mosses, N. Sculthorpe, and P. Torrini, “Reusable components of semantic specifications,” in Transactions on Aspect-Oriented Software Development XII, S. Chiba, É. Tanter, E. Ernst, and R. Hirschfeld, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015, pp. 132–179. [Online]. Available: https://doi.org/10.1007/978-3-662-46734-3_4
[25] E. Bousse, T. Degueule, D. Vojtisek, T. Mayerhofer, J. Deantoni, and B. Combemale, “Execution Framework of the GEMOC Studio (Tool Demo),” in Proceedings of the 2016 ACM SIGPLAN International Conference on Software Language Engineering, ser. SLE 2016. New York, NY, USA: ACM, 2016, pp. 84–89. [Online]. Available: http://doi.acm.org/10.1145/2997364.2997384