=Paper=
{{Paper
|id=Vol-1742/MRT16_paper_5
|storemode=property
|title=Incremental Runtime-generation of Optimisation Problems using RAG-controlled Rewriting
|pdfUrl=https://ceur-ws.org/Vol-1742/MRT16_paper_5.pdf
|volume=Vol-1742
|authors=Rene Schöne,Sebastian Götz,Uwe Aßmann,Christoff Bürger
|dblpUrl=https://dblp.org/rec/conf/models/SchoneGAB16
}}
==Incremental Runtime-generation of Optimisation Problems using RAG-controlled Rewriting==
René Schöne∗, Sebastian Götz∗, Uwe Aßmann∗ and Christoff Bürger†
∗ Software Technology Group, Technische Universität Dresden, (rene.schoene|sebastian.goetz1|uwe.assmann)@tu-dresden.de
† Department of Computer Science, Faculty of Engineering, Lund University, Sweden, christoff.burger@cs.lth.se

Abstract—In the era of the Internet of Things, software systems need to interact with many physical entities and cope with new requirements at runtime. Self-adaptive systems aim to tackle those challenges, often representing their context with a runtime model enabling better reasoning capabilities. However, those models quickly grow in size and need to be updated frequently with small changes due to a high number of physical entities changing constantly. This situation threatens the efficacy of analyses on such models, as they lack an efficient management of those changes, leading to unnecessary computation overhead. We propose applying scalable, incremental change management of runtime models in the presence of a complex model-to-text transformation. In this paper, we present and evaluate an example of code generation of integer linear programs. In our case study using synthesized models, we saved 35-83% processing time compared to a non-incremental approach. Using our approach, future self-adaptive systems can handle and analyze large-scale runtime models, even if they change frequently.

I. INTRODUCTION

A major challenge regarding the integration of computer-based systems with the physical world is the development of convenient runtime models [1], [2]. Runtime states of cyber-physical systems [3] have to be represented in such a way that they can be efficiently updated and analyzed for self-adaptation. This comprises monitoring of change-events, analysis of the resulting system state, planning of reactions and the actual execution of them – a feedback loop [2].
A particularly challenging part of such feedback loops is the deduction of reactions to updates, i.e., the reasoning in the analyze and plan phase [4]. Often, logical or mathematical formalisms are used to ensure only efficient and correct plans are derived, e.g., integer linear programming (ILP) for optimizing self-adaptations [5] or logic-based structural reasoning like Alloy [6]. Although such techniques ease the deduction of reaction plans, they still require the derivation of an actual problem specification suitable for their respective reasoning tooling. Since changes in runtime models typically happen frequently and mostly affect only small model parts [7], planning problems should be deduced by incremental analyses. Given a model update, analyses should only be re-evaluated if influenced by the update. If not, previously cached results should be reused. The benefit of such an incremental derivation of planning problems increases with model size. Hence, in the scope of the Internet of Things, i.e., in the presence of very large runtime models, it is of utmost importance [8]. A common solution is to manually implement a cache mechanism. However, this leads to other serious problems like inconsistencies [9].

Building on reference attribute grammar controlled rewriting – a promising technology to accomplish automatic, incremental analyses of runtime models [10] – we will show its application in our case to generate ILP optimisation problems. Reference attribute grammars (RAGs) [11] are a declarative formalism to specify semantics for tree structures. A RAG extends tree instances of a context-free grammar to abstract syntax graphs by superimposing a semantic overlay graph. In contrast to ordinary attribute grammars [12], RAGs do not support incremental evaluation out of the box. For that reason, we use RAG-controlled rewriting [10], which permits model updates in terms of graph rewrites that in turn update analyses, thus automatically achieving incremental evaluation. Using RAG-controlled rewriting, we are able to model software systems including their context, incrementally reason on those systems and still cope with runtime model changes. In this paper, we approach the following research questions:

RQ1 Are reference attribute grammars a suitable modeling framework for large-scale runtime models?
RQ2 Is it possible to incrementally propagate changes in the runtime model to domain-specific code generated from this model?
RQ3 If yes, how beneficial is incremental change propagation compared to non-incremental solutions?

In the following Section II, we describe our previous research building the foundation of this work. Subsequently, concepts and advances of our approach are described in Section III, followed by an extensive evaluation in Section IV. We compare our approach to related work in Section V and conclude in Section VI.

II. PROBLEM DOMAIN: MULTI-QUALITY AUTO-TUNING

This section describes the domain of our case study and its associated evaluation: Multi-Quality Auto-Tuning (MQuAT). We selected this problem domain because it is a complex problem involving runtime models and reasoning on them. Furthermore, our previous research [5] revealed that incremental runtime model reasoning is a major challenge and cannot be easily integrated into existing systems.

[Figure 1: Overview of Multi-Quality Auto-Tuning. Elements: system structure model and contracts, model-to-text transformation, runtime model (abstract representation of the current system state, subject to many small changes), reasoning logic (as ILP), off-the-shelf reasoner.]

MQuAT is an approach for model-driven, component-based development of self-adaptive systems. The principal idea and key contribution is to use models and a domain-specific language enabling the developer to describe the self-adaptive system under construction and to generate the reasoning logic for the analyze and plan phases from these models. Consequently, the developer is enabled to focus on the problem specification instead of the technical realization, which is automated by code generation at runtime. Hence, a representation of the system's runtime state is required – a runtime model. In other words, MQuAT adheres to the models@run.time paradigm [13], which promotes the use of modeling techniques at runtime. Figure 1 depicts the general principle of MQuAT.

In [5], the generation of reasoning logic for alternative optimization languages was investigated. Here, we focus on the generation of an integer linear program (ILP) [14] as reasoning logic. In general, the optimization problem to be solved by MQuAT is a selection and mapping problem. MQuAT models systems as sets of soft- and hardware components. Each software component can have multiple implementations, each with different modes. For example, the software component Compress has implementations like zip, pigz and nanozip, to name but a few. The nanozip implementation can be used in different modes, which, e.g., specify the amount of preallocated memory used for compression. Hardware components typically describe a hierarchical structure, e.g., a component server rack comprising several servers, each containing memory, CPUs and network devices. Both soft- and hardware components can be further detailed by specifying their properties. For example, the hardware component CPU has the property frequency and the software component Compress has a property compression-ratio. Dependencies between components are specified using contracts that describe a provision/requirement relation on properties of two interdependent components. For example, to guarantee a maximum processing time of a compression algorithm, a minimum amount of memory is required.

To process a user request, a system described using the aforementioned concepts contains two dimensions of variability: (1) each component which is required to process the request has several implementations and (2) each implementation can be used in different modes on different hardware resources. Thus, the optimization problem denotes the selection of the best implementations and their best mapping to the available resources for a given user request. To solve this problem, it is transformed to an ILP as optimization language.

The generated ILP comprises three types of constraints: (a) structural constraints describing which software components are required due to dependencies between them, (b) constraints covering the variants of software-hardware mappings as a dynamic knapsack problem [15] and (c) constraints covering hardware resources, e.g., a maximum frequency of a CPU. On a more abstract level, the ILP is an equation system Ax = b comprising a matrix of coefficients A, a vector of bounds b and the target variables x. Here, A and b encode the mentioned constraints; x contains Boolean variables representing the choice of one mode being deployed on a certain hardware component.
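To make the shape of such a generated ILP more concrete, the following sketch gives one possible formulation. The paper only states the abstract form Ax = b and the three constraint families; the symbols below (mode set M, resource set R, component usage variables u_c, requirement and provision values req and prov, weights w) are our own illustrative notation, not taken from the paper.

```latex
% Illustrative sketch of the generated ILP (notation assumed, not from the paper).
% x_{m,r} = 1 iff mode m is deployed on resource r; u_c = 1 iff component c is used.
\begin{align*}
  \min \quad & \sum_{m \in M} \sum_{r \in R} w_{m,r}\, x_{m,r}
      && \text{objective, e.g., weighted cost of deployments}\\
  \text{s.t.} \quad
  & \sum_{m \in M_c} \sum_{r \in R} x_{m,r} = u_c
      && \text{(a) exactly one mode/resource per used component } c\\
  & u_{c'} \geq u_c \quad \forall\, c' \in \mathit{req}(c)
      && \text{(a) component dependencies}\\
  & \sum_{m \in M} \mathit{req}_p(m)\, x_{m,r} \leq \mathit{prov}_p(r)
      && \text{(b), (c) knapsack-style constraints per resource } r \text{ and property } p\\
  & x_{m,r},\, u_c \in \{0,1\}
\end{align*}
```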
The two key problems of MQuAT and many other model-driven approaches for self-adaptive software are (1) the scalability of both the runtime model and the reasoning upon it and (2) efficient change management, i.e., minor changes to the runtime model should avoid a complete re-generation and re-evaluation of the reasoning logic. To realize the interconnection between development models (e.g., models describing the software architecture) and the runtime model, MQuAT uses the Eclipse Modeling Framework (EMF) for both. There are alternative, more lightweight modeling frameworks like the Kevoree Modeling Framework (KMF) [9], as EMF is not intended to be used at runtime. In this paper, we propose the application of reference attribute grammar controlled rewriting (RAG-controlled rewriting) as an alternative modeling framework, which in contrast to KMF offers incremental evaluation in addition to being more lightweight. In particular, we will show how the reasoning logic can incrementally be generated as an ILP using RAG-controlled rewriting. Therefore, our approach is situated in the analyze and plan phases of the feedback loop, as the optimization problem determines whether and how the current configuration has to be changed.

III. RAG-CONTROLLED, INCREMENTAL ILP-GENERATION

Attribute grammars (AGs) are a well-known technique for semantic analysis in the field of compiler construction [12]. AGs use a context-free grammar to specify the structure of tree-shaped data they operate on and attributes to analyse this data. Attributes can have dependencies on one another and on tree information, resulting in a statically known dependency graph exploited by the attribute evaluator. Based on this graph, incremental evaluation can be realized [16].

Reference AGs (RAGs) are an extension where attributes may resolve to remote tree nodes such that a semantic overlay graph is superimposed on the spanning tree [11]. This enables new scenarios, e.g., the specification of semantics in metamodels like EMF Ecore [17]. However, dependencies of reference attributes cannot be statically decided, impeding incremental evaluation. RAG-controlled rewriting overcomes this limitation by tracking dependencies dynamically such that rewrites can incorporate them for invalidation [10]. It enables dynamic incremental attribute evaluation for RAGs. The RACR library [10], used in our case study to implement a runtime model for MQuAT, is a reference implementation of RAG-controlled rewriting. It can be found at https://github.com/christoff-buerger/racr.
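The core mechanism can be illustrated with a small, self-contained sketch: attribute values are cached on demand, the dependencies touched during an evaluation are recorded dynamically, and a rewrite flushes exactly those caches that transitively depend on the changed information. This is only a minimal illustration of the principle in Python; RACR itself is a Scheme library and its actual API differs.

```python
# Minimal sketch of demand-driven attribute evaluation with dynamic dependency
# tracking and selective invalidation (the principle behind RAG-controlled
# rewriting); not RACR's actual API.

_stack = []  # (node, attribute name) pairs currently under evaluation

class Node:
    """A model element: terminal fields plus a cache for attribute values."""
    def __init__(self, **fields):
        self._fields = dict(fields)
        self._cache = {}          # attribute name -> cached value
        self._dependents = set()  # (node, attribute) pairs that read this node

    def __getattr__(self, name):
        # Only reached for terminal fields; record who is reading this node.
        if _stack:
            self._dependents.add(_stack[-1])
        try:
            return self._fields[name]
        except KeyError:
            raise AttributeError(name)

def attribute(func):
    """Declare a memoised attribute; dependencies are tracked while it runs."""
    def wrapper(node):
        key = func.__name__
        if _stack:                      # the running attribute depends on this node
            node._dependents.add(_stack[-1])
        if key not in node._cache:
            _stack.append((node, key))
            try:
                node._cache[key] = func(node)
            finally:
                _stack.pop()
        return node._cache[key]
    return wrapper

def rewrite(node, field, value):
    """Apply a model update and flush the (transitively) dependent caches."""
    node._fields[field] = value
    worklist = [node]
    while worklist:
        n = worklist.pop()
        n._cache.clear()
        dependents, n._dependents = n._dependents, set()
        worklist.extend(dep_node for dep_node, _attr in dependents)

# Example: a resource whose 'frequency' field feeds a cached attribute.
@attribute
def frequency_bound(resource):
    return f"f_{resource.name} <= {resource.frequency}"

cpu1 = Node(name="CPU1", frequency=2.4)
print(frequency_bound(cpu1))      # computed once
print(frequency_bound(cpu1))      # cache hit
rewrite(cpu1, "frequency", 1.4)   # model update (graph rewrite)
print(frequency_bound(cpu1))      # recomputed, because its input changed
```

The point of the sketch is that the user code only reads fields and calls attributes; cache maintenance and invalidation happen behind the scenes, which is what distinguishes this style from handcrafted caching.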
[Figure 2: Grammar overview. HWRoot and Resource are hardware elements; SWRoot, Component, Implementation and Mode are software-specific. The non-terminals Property and Clause are cross-cutting, whereas the Request represents the expectations from a user and triggers an evaluation of the model.]

A. General Solution Overview

Figure 2 shows a simplified grammar using our approach to describe a system subject to auto-tuning, i.e., possible types of model elements and their relationships. The notation follows JastEMF [17], showing nonterminals as rectangles with their terminals listed beneath the name of the enclosing nonterminal. Containment relations model nonterminals in production right-hand sides, thus implying non-cyclic entity relations, i.e., subtrees. Arrows denote attributes which are known to be of the type the arrow is pointing to. The following three concerns are represented as nonterminals:
The hardware subtree with root HWRoot denotes an arbitrarily deep hierarchy of Resources, which are grouped by common types (e.g., CPU). Resources have a set of Clauses describing their current state, e.g., "CPU frequency is 1.4GHz". Clauses are linked to Properties, e.g., representing CPU frequency.

Software is represented as a hierarchy, too. The semantics of Components, Implementations and Modes are as described in Section II. Furthermore, implementations can require components, forming a dependency tree. Modes are described by a quality contract – a set of Clauses describing their non-functional behavior. With such clauses, properties like an execution time t can be described depending on input parameters, e.g., t = n², where n denotes the input file size. Different modes of the same implementation can have either the same clauses, the same properties but different values, or completely different clauses.

Finally, a Request denotes the requested functionality along with expected values for non-functional properties. An example could be a user requesting to compress a video file, requiring a maximum execution time to finish this operation and a maximum file size after compression.

Figure 3 shows a simplified structure of an example system using the described grammar to build an AST representing such a system. It includes three parts: a) two hardware components, b) nine software components, implementations and modes (indicated with C, I and M, respectively) and c) the request. With such an AST, the structure and properties of hard- and software can be described. Further, their relationship is modeled with clauses defining requirements and provisions.

[Figure 3: Example AST, built using the grammar in Figure 2.]

All the aforementioned elements can be modeled using RAGs. Hence, they are a suitable modeling framework for runtime models, which partially answers RQ1. To show that RAGs are suitable for large-scale runtime models, we have evaluated our approach in Section IV.
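For illustration, the grammar of Figure 2 can be approximated by plain data classes. The field names below are our own simplification: they follow the nonterminal names of the figure but omit several terminals and all attributes of the real grammar.

```python
# Simplified sketch of the runtime-model grammar of Figure 2
# (field names are illustrative and omit parts of the real grammar).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Property:
    name: str
    unit: str = ""

@dataclass
class Clause:
    property: Property
    comparator: str   # e.g. "<=" or ">="
    value: float      # or a formula over request parameters, e.g. t = n**2

@dataclass
class Resource:
    name: str
    status: List[Clause] = field(default_factory=list)
    subresources: List["Resource"] = field(default_factory=list)

@dataclass
class Mode:
    name: str
    clauses: List[Clause] = field(default_factory=list)   # quality contract

@dataclass
class Implementation:
    name: str
    modes: List[Mode] = field(default_factory=list)
    required_components: List["Component"] = field(default_factory=list)

@dataclass
class Component:
    name: str
    implementations: List[Implementation] = field(default_factory=list)

@dataclass
class Request:
    target: Component
    constraints: List[Clause] = field(default_factory=list)
    objective: Property = None

@dataclass
class Root:
    hw: List[Resource] = field(default_factory=list)    # HWRoot subtree
    sw: List[Component] = field(default_factory=list)   # SWRoot subtree
    request: Request = None
```

An instance for the Compress example of Section II would be a Root whose sw list contains a Component "Compress" with Implementations zip, pigz and nanozip, each carrying its Modes and contract Clauses.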
B. Attribution used for ILP-Generation

To realize ILP generation using RAGs, attributes implementing a model-to-text transformation have to be specified for each nonterminal of the grammar. The computation of these attributes is local to this nonterminal and can be reused by other attributes, as will be described later. As they are computed incrementally, the whole model transformation is incremental, thus tackling the problems revealed in Section I.

Figure 4 shows the AST introduced above in Figure 3 at runtime. It includes the most important attributes involved in the computation, their dependencies and whether they will be evaluated in a certain situation described below. The syntax of attributes will be introduced in the next paragraph. An evaluation always begins at the attribute to-ilp, starting the generation process. Any other invocation is controlled by RACR, ensuring that only those attributes are re-evaluated whose values might have changed because they depend on updated model information. Attributes are defined to fit naturally into the structure of the ILP. In the following, the most important attributes are briefly explained. Their syntax is denoted by underlines, e.g., to-ilp is represented as ti in Figure 4.

[Figure 4: Attribute computation and dependencies for an example AST. Legend: cache hit (using incremental), skipped attribute, skipped invocation, invocation.]

objective: This attribute is defined on the nonterminals Root and Resource, to compute the objective function globally (Root) or partly for the given Resource.

to-ilp: Gathers the ILP constraints for each clause of the current request. Both software and architectural constraints result from the invocation of to-ilp on components and implementations. Software constraints represent the dependencies of implementations and components, whereas architectural constraints ensure that only one mode of an implementation per component is deployed.

nego: Models the dynamic knapsack problem (negotiation), which was outlined in Section II. Its constraints comprise property values required by components and provided by components and resources. Thus, nego depends on nhw (negotiation of hardware resources), nsw (negotiation of software modes) and nreqc (request constraints).

binary-vars: Computes all used variables and their bounds.

The attribute binary-vars is a good example to demonstrate the strength of incremental evaluation on a small scale. If, for example, the value of a hardware entity has changed (e.g., the CPU frequency has been changed), this attribute does not need to be re-evaluated, because the change will not introduce new variables. Thus, its value will be read from cache, as all information it depends on, whether tree structure or attribute values, is unchanged.

Figure 4 shows which attributes need to be re-evaluated in case of a property change of CPU1. Starting with to-ilp on Root, the required attribute values objective and binary-vars on Root and to-ilp on SWRoot and Request are read from cache, since they do not depend on any changed information. Those cached attributes are marked half-grey. Other attributes which normally would be evaluated for those attributes – e.g., to-ilp on Compress – will never be called. Such skipped attributes are shown in white. They are another reason for the savings of incremental evaluation. However, the attribute nego on SWRoot will be re-evaluated, since some attributes it depends on – like nhw – need to be re-evaluated. Those attributes are shown in black.

Using this approach, we can incrementally propagate runtime model changes to text or code generated from it, thus affirming RQ2. To show the scalability of our approach, we evaluated models of different sizes and change scenarios.
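A hedged sketch of how the ILP text could be assembled from these attributes is given below. The function names mirror the attribute names of the paper, but their bodies, the concrete text format and all parameter values are invented for illustration only.

```python
# Illustrative sketch of the ILP-generating attributes (to-ilp, objective,
# nego, binary-vars). Names follow the paper; text format and model fields
# are invented for illustration.

def binary_vars(modes, resources):
    """One Boolean variable per (mode, resource) deployment choice."""
    return [f"b_{m}_{r}" for m in modes for r in resources]

def objective(modes, resources, weight):
    """Objective terms, contributed partly per resource (weighted usage)."""
    return " + ".join(f"{weight[r]} b_{m}_{r}" for m in modes for r in resources)

def nego(modes, resource, required, provision):
    """Negotiation (knapsack) constraint for one resource: the requirement of
    whichever mode is deployed must not exceed the resource's provision."""
    lhs = " + ".join(f"{required[m]} b_{m}_{resource}" for m in modes)
    return f"{lhs} <= {provision};"

def to_ilp(modes, resources, weight, required, provided):
    """Root attribute: assembles the complete ILP text from the sub-attributes."""
    lines = ["min: " + objective(modes, resources, weight) + ";"]
    lines += [nego(modes, r, required, provided[r]) for r in resources]
    lines += ["bin " + ", ".join(binary_vars(modes, resources)) + ";"]
    return "\n".join(lines)

print(to_ilp(modes=["pigz_seq", "pigz_par"],
             resources=["CPU1", "RAM2"],
             weight={"CPU1": 2, "RAM2": 1},
             required={"pigz_seq": 1.2, "pigz_par": 2.0},
             provided={"CPU1": 1.4, "RAM2": 8.0}))
```

In the actual implementation these pieces are RACR attributes on the corresponding nonterminals, so each result is cached and, after a rewrite, only the parts whose inputs changed are recomputed; for example, binary-vars stays cached after a pure value change, exactly as described above.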
IV. EVALUATION

To answer the research questions RQ1 and RQ3 posed in Section I, we created synthesized models of systems specified by the grammar described in Section III. More concretely, we strive to measure the benefit of incremental evaluation by comparing our approach to non-incremental versions of it. By using synthesized models it is possible to create arbitrarily large systems and, thus, show the suitability and scalability of our approach. The evaluation results and an executable test setup are available at https://github.com/rene-schoene/racr-mquat.

[Figure 5: Evaluation setup for measuring ILP generation after initial model creation and successive changes. Elements: change parameters, system configuration, generator, system model (EMF, RAG), ILP generator (initial and successive runs), ILP, ILP solver (glpk, gurobi); measured strategies: incremental, flushed, semiautomatic.]

A. Test Setup

Resources of the created models have two non-functional properties. Software components build a "requirement chain": all but the last component require the next one. In other words, we generate a simplified variant of Pipe-and-Filter architectures [18]. Every clause has its values chosen such that at least one valid solution of the generated optimization problem exists. To enable a close investigation, our test setup provides variability in the following parameters:
(a) number of hardware resources, ranging from 10 to 400,
(b) subresources, either 10 per resource or a flat subtree,
(c) number of software components, ranging from 1 to 90,
(d) implementations per component, 1, 2, or 10, and
(e) modes per implementation, either 2 or 10.
In total, 33 different parameter configurations were used.

Based on our experience, we identified three kinds of changes:
Update-HW: Update property values in clauses of hardware resources (continuous monitoring).
Structure-HW: Remove certain hardware resources, or add new ones (hardware maintenance).
Structure-SW: Remove certain software modes, or add new ones (software maintenance).
The category Update-SW was left out, because we do not expect a changing behavior of existing implementations. Instead, new software modes or implementations will be added, which is already covered by Structure-SW.

To show the applicability of our approach under more realistic conditions, a fourth scenario "mixed" was measured. All three kinds of changes were mixed, testing the stability of our approach in contrast to specific, one-sided changes. Based on our experiences, we weighted the kinds with 80% Update-HW, 10% Structure-HW and 5% Structure-SW; in the remaining 5%, there is no change. A sketch of this parameter space and change mix is given below.
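The following snippet only illustrates the shape of that parameter space and the weighting of the mixed scenario. How the 33 concrete configurations were picked from the full grid is not stated in the paper, so the intermediate values and the selection below are hypothetical.

```python
# Hypothetical sketch of the evaluation parameter space and the mixed-change
# weighting; the paper used 33 selected configurations, not the full grid.
import itertools
import random

parameter_space = {
    "hardware_resources": [10, 50, 100, 200, 400],    # (a) 10 to 400
    "subresources": ["10-per-resource", "flat"],       # (b)
    "software_components": [1, 10, 30, 60, 90],        # (c) 1 to 90
    "implementations_per_component": [1, 2, 10],       # (d)
    "modes_per_implementation": [2, 10],                # (e)
}

all_configurations = [dict(zip(parameter_space, values))
                      for values in itertools.product(*parameter_space.values())]

# Mixed scenario: 80% Update-HW, 10% Structure-HW, 5% Structure-SW, 5% no change.
change_kinds = ["Update-HW", "Structure-HW", "Structure-SW", "no change"]
weights = [0.80, 0.10, 0.05, 0.05]
mixed_schedule = random.choices(change_kinds, weights=weights, k=100)  # 100 steps
```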
Figure 5 explains the characteristics of one measurement. First, a model for a given set of parameters is generated. Then, the model is transformed to an ILP, henceforth termed the initial step. Once the ILP has been generated the first time, a number of changes of a selected kind is applied to the model. After each model change, the ILP transformation is repeated in successive steps.

To show the advantage of incremental evaluation, three different strategies for change management are examined, namely incremental, semiautomatic and flushed. Incremental describes our approach and fully utilizes the cache. Semiautomatic means enabling caching only for auxiliary attributes which are unrelated to ILP generation and which would most likely be manually cached and maintained in a handcrafted solution, e.g., attributes computing a filtered collection containing available hardware resources. In contrast to the actual ILP generation, such auxiliary attributes are simple analyses, whose cache maintenance is simple enough for a handcrafted solution. The semiautomatic strategy is important for realistic measurements, since one usually takes rigorous advantage of such attributes in the sense of calling them many times in algorithm-based analyses. Comparing our incremental solution to completely non-cached analyses would be unrealistic and would just confirm the well-known fact of exponential attribute evaluation complexity in case of naïve evaluation [19], [20]. The flushed strategy has the cache enabled, but it gets invalidated after each change. Thereby the first invocation of each attribute needs to be computed again, but further invocations in the same step can rely on the attribute being cached. Depending on the context, we will also use the names of the strategies to refer to an execution using the respective strategy.

For each step, the time required to generate the ILP was measured, excluding disk IO and cache flushing. Additionally, to avoid dependencies of our results on the concrete hardware used to measure, the numbers of called and actually computed attributes were counted. These are indicators of how well a strategy makes use of incremental evaluation, because a lower number of computed attributes implies less computation needed. Using an incremental approach, the set of computed attributes is always a real subset of those evaluated using a non-incremental approach. Using the generation times, we are able to answer RQ1, i.e., prove the suitability of our approach. Taking all measured data into account, the benefit of incremental ILP generation is visible, thus answering RQ3.
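Put together, one measurement run can be summarised by the following protocol sketch; function names such as generate_model, apply_change, generate_ilp and flush_cache are placeholders for the racr-mquat tooling, not its actual interface.

```python
# Sketch of one measurement run (initial step plus successive steps) for a
# given parameter configuration and change kind; all function names are
# placeholders, not the actual racr-mquat interface.
import time

def measure(config, change_kind, steps, strategy,
            generate_model, apply_change, generate_ilp, flush_cache):
    model = generate_model(config)
    times, counters = [], []
    for step in range(steps + 1):            # step 0 is the initial generation
        if step > 0:
            apply_change(model, change_kind)
            if strategy == "flushed":
                flush_cache(model)            # cache is invalidated after a change
        start = time.perf_counter()           # disk IO and flushing are excluded
        ilp, called, computed = generate_ilp(model, strategy)
        times.append(time.perf_counter() - start)
        counters.append((called, computed))
    return times, counters
```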
Possible threats to the validity of our evaluation are the following. We only investigated synthesized models due to the need to generate models of many different sizes. Based on our measurements, however, we are confident in future applications on realistic MQuAT applications of significant size. Further, the number of steps for the first three scenarios is very small, but we believe they are sufficient to show their characteristics. To further counter this point, we added the fourth scenario using all kinds of changes and 100 steps. Finally, we provide no formal correctness proofs in terms of finding an optimal solution. For ILP, such proofs are well known, however. A correctness proof therefore boils down to showing that the generated ILP specifications satisfy our intention, in particular in case of incremental generation after model updates.

B. Measured Generation Times

Figure 6a shows the generation time as box plots for eight consecutive hardware value changes and, thus, seven re-generations. Apparently, the incremental strategy performs best for all successive steps, utilizing cached values. Using flushed, each step needs the same time, as flushing the cache implies a complete re-computation of all attributes. Disabling the cache for all generation-related attributes leads to a similar picture, however for different reasons: the initial step takes longer on average than successive steps, as auxiliary attributes are still cached. Those cache entries need to be created, but can be used in successive steps to decrease execution time. The incremental strategy clearly takes less time for generation, as it is on average faster by a factor of 4.81 and 5.69 compared to semiautomatic and flushed, respectively.

Figure 6b depicts the generation times when changing the hardware structure. Each even step removes some resources from the model, whereas each odd, successive step adds the removed resources again. Flushed runs longer than semiautomatic for the second and third resource-removing step, because of cache updates of auxiliary attributes. For all other steps, semiautomatic lags behind flushed for the same reasons as in the Update-HW case. With the incremental strategy, all successive steps can be completed faster compared to the other strategies, because of the unrestrained use of cached values.

Figure 6c unveils a shortcoming of the current attribution, even when using the incremental strategy. In the second to fourth step, new modes are added, which are removed together with existing modes in the next three steps. For all strategies, an increase of the generation time is visible when modes are added. However, the order of measured generation times always stays the same: flushed is faster than semiautomatic and slower than incremental. The higher computation times of structural changes compared to Update-HW are the result of newly computed constraints for hardware resources and new parts of constraints for new software modes.

[Figure 6: Generation times for each step, i.e., after a change of the model, using different strategies and kinds of changes. Panels: (a) Update-HW, (b) Structure-HW, (c) Structure-SW; y-axis: seconds, x-axis: step.]

In Figure 7, the generation times for the mixed scenario along with their averages are depicted for all three strategies. The observed execution time is shown using box plots, as all different combinations of parameters are shown in this figure. For semiautomatic and incremental, only the initial step takes longer, because a cache needs to be built up (auxiliary attributes only, in the case of semiautomatic). Although all attributes are actually cached in the incremental strategy, this initial step needs less time compared to semiautomatic, because cache hits of ILP-related attributes occur in this initial step and reduce computation. Those observations cannot be made for flushed: generation times stay about equal, except that a change adding new software modes leads to more computation and increases all following generation times. They stay equal because the cache is flushed right after the change, leading to a similar computation effort in each step.

Note that the initial step of flushed and incremental takes the same amount of time, as exactly the same computation is done; the cache is flushed after generating the ILP, thereby not influencing the first measured time. The most obvious difference of Figure 7c compared to the other two strategies is the small generation time for all non-structural changes (updating resource properties and no change). For structural changes, more computation time is needed, as stated above. All effects lead to average times of 3.41, 22.43 and 27.79 seconds for incremental, flushed and semiautomatic, respectively.

[Figure 7: Generation times for each step, i.e., after a change of the model, and averages (bold line) for the mixed scenario. Panels: (a) Semiautomatic, (b) Flushed, (c) Incremental; y-axis: seconds, x-axis: step (1 to 100).]

C. Results for Attribute Metrics

In addition to generation times, we evaluated the following attribute metrics. Below, compN and calledN denote the number of computed and called attributes for strategy N, respectively. Notably, N ∈ {1, 2} denotes two strategies which are compared with each other.

Cache Miss Rate = comp1 / called1. This shows the degree of "incrementality" for each strategy, i.e., how well the cache entries are reused on average. Lower values are better.

Cache Potential = comp1 / called2. This metric shows the potential increase of cache usage when using one strategy over another and can be compared to the cache miss rate.

Speed-Up = comp2 / called2 - comp1 / called2 = (comp2 - comp1) / called2. This is the difference between the second strategy's cache miss rate and the first strategy's potential. It shows the gain of using one strategy over another. A higher value indicates that fewer attributes need to be computed using the first strategy.
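Written out, and instantiated with the Update-HW row of Table I below (strategy 1 = incremental, strategy 2 = flushed), the metrics read as follows; the numbers are taken from the table, the algebra is only spelled out for clarity.

```latex
\[
  \mathit{CacheMissRate}_1 = \frac{\mathit{comp}_1}{\mathit{called}_1}, \qquad
  \mathit{CachePotential}_{1\to 2} = \frac{\mathit{comp}_1}{\mathit{called}_2}, \qquad
  \mathit{SpeedUp}_{1\to 2} = \frac{\mathit{comp}_2 - \mathit{comp}_1}{\mathit{called}_2}
\]
\[
  \text{Update-HW:}\quad
  \mathit{SpeedUp}_{\mathrm{inc}\to\mathrm{flushed}}
  = \frac{\mathit{comp}_{\mathrm{flushed}}}{\mathit{called}_{\mathrm{flushed}}}
  - \frac{\mathit{comp}_{\mathrm{inc}}}{\mathit{called}_{\mathrm{flushed}}}
  = 0.319 - 0.056 = 0.263
\]
```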
Table I shows the average for each attribute metric, scenario and strategy (incremental, flushed, semiautomatic). Concerning cache miss rates, the incremental approach is obviously always ahead of the other strategies. This is because the cache is maintained between changes (in contrast to flushed) and used for each attribute (in contrast to semiautomatic). Flushing the cache between changes leads to a drop of about 7.8% on average. Using no cache apparently results in a cache miss rate of 1.0, as every called attribute is also computed.

When using the incremental approach, fewer attributes get called in the first place. This is because calls to dependent attributes can be skipped if the value of the calling attribute is read from cache. As an example, in the biggest mixed-scenario run, attributes were called 10,135,309 times in total using incremental, 41,106,310 times with flushed and 82,660,090 times using the semiautomatic strategy.

Regarding the cache potential of flushed to incremental, values range from 2.8% for mixed to 32.3% for Structure-SW. This indicates that in the latter case, our approach has less impact. Values of semiautomatic to incremental are even lower, indicating more computations can be saved using our approach.

For speed-ups, one can observe two effects. The first includes speed-ups from semiautomatic, which in every case are on average between 82 and 90%. This can be read as a direct reduction of computation, i.e., using either incremental or flushed saves those amounts of attribute computations. Hence, this also results in lower execution times. In the "mixed" scenario, which includes all kinds of changes, our technique still provides good results with average savings of 16%.

Table I: Attribute metrics showing Cache Miss Rate, Potential (Pot) and Speed-up (Sp)

Scenario     | Cache Miss Rate (inc / flushed / semi) | inc→flushed (Pot / Sp) | inc→semi (Pot / Sp) | flushed→semi (Pot / Sp)
Update-HW    | 0.210 / 0.319 / 1.0                    | 0.056 / 0.263          | 0.031 / 0.969       | 0.177 / 0.823
Structure-HW | 0.195 / 0.321 / 1.0                    | 0.154 / 0.166          | 0.086 / 0.914       | 0.179 / 0.821
Structure-SW | 0.330 / 0.341 / 1.0                    | 0.323 / 0.018          | 0.171 / 0.829       | 0.181 / 0.819
Mixed        | 0.116 / 0.335 / 1.0                    | 0.029 / 0.307          | 0.014 / 0.986       | 0.167 / 0.833

D. Conclusion of the Evaluation

We have shown the scalability of our approach, saving 35 to 87% w.r.t. processing time and 1.8 to 16% w.r.t. cache usage. With that, we have shown the benefit asked for by our initial research question RQ3.

Besides the shown advantages of our presented solution, the usage of RAGs has further advantages which are difficult to quantify. First, the model transformation is specified declaratively. This enhances maintainability and modularity, as attributes can be seen as aspects that can be changed independently [21]. Second, RAGs and term rewriting have a formal basis [22], [11], which can be used to verify properties like termination or complexity. Finally, as in our previous work [23], cache consistency and efficiency are automatically ensured by our approach, as attributes are recomputed only if the values they depend on changed.

A disadvantage of using synthesized models is the loose link to real examples, which is to some degree mitigated by the diversity of created models and applied changes. However, it was mandatory to prove scalability.

V. RELATED WORK

This work combines several research areas, including self-adaptive systems and models at runtime, incremental computation, and model transformation.
a) Self-adaptive Systems: There are plenty of approaches to design self-adaptive systems, providing frameworks and middleware. Most of these systems, such as MUSIC [24], DiVA [6] or ConFract [25], use models to describe their state and context. For a better distinction, we classify our solution using a survey in the field of self-adaptive systems [26], which aims at building a taxonomy for engineering such systems. Our approach adapts reactively, i.e., it reacts to previous events. The reason for adaptation is the user, the managed elements are software components and the controlling system is a middleware. While the kind of changes is currently compositional, one could also think of parameter adaptation. As already mentioned in Section II, the adaptation logic is external, the criteria for decisions are models, and our approach is centralized, because RACR is not yet developed to be a distributed application.

In [9] and [27], the developers of KMF aim at gathering requirements for runtime models and comparing their work to EMF. They consider a small memory footprint, efficient model navigation and thread-safety. In [9], caching was proposed as a promising solution, which, however, can cause problems such as inconsistency. Using RAG-controlled rewriting, caching is provided with the guarantee of being consistent.

There are a few works on reasoning at runtime with a specific focus on efficiency. One work is about time-distorted reasoning at runtime [28], where KMF is used to efficiently handle changes in runtime models. They do not follow the standard approach of having multiple snapshots of the model for different times; instead, all information is stored in one model. Every model element has special operations to shift between versions of it in time. The main difference to our approach is that they analyze using historical data and a standard imperative language. In this work, we only know about the current state, but can use already computed intermediate results from unchanged parts of the model.

b) Incremental Evaluation: Incremental evaluation is the "efficient recomputation in response to changes in input data" [29]. It is also called "self-adjustable computing" in the context of imperative programs tracking data and control dependencies [30]. Most works in this field agree on the advantages, such as efficiency, gained at the cost of higher memory consumption [31], [32]. However, they propose either language extensions or new languages with the need to explicitly wrap access to data. Using RAG-controlled rewriting, the way of accessing data is not changed, but caching is provided automatically.

c) Model Transformation: Model transformation is an active field of research with many approaches mostly targeting design time. For a better comparison of this work, we classify our approach using a taxonomy [33]: source and target model in our approach are both hierarchical and from different technical spaces, and the transformation is exogenous. The transformation is one-to-one, horizontal, syntactical, fully automatic, complex and information-preserving. Using RACR, the transformation is CRUD-changeable, without suggestions and highly reusable. The transformation is tested and has neither inconsistency handling, generality of rules nor bi-directionality. However, rules are decomposable and support change propagation. It is designed to be scalable and extensible at design time, but not yet interoperable nor widely used within the community.

Another survey covering model-to-text (M2T) transformations [34] comprises three categories for comparison: language coverage, execution phases and overhead. Racr-mquat supports arbitrary input and output elements, NACs as well as PACs, but no imperative parts. Source minimality, detection at every granularity, and atomic inserts and deletes are ensured by RACR. As our approach exploits lazy evaluation, change log optimization is supported indirectly. Traces are automatically computed for Model2Transformation at runtime. Change propagation is partial, unidirectional, at the level of rules and sequential. Overheads are mostly for memory, with minor runtime and no specification overhead.

Another approach is eMoflon [35], which is also included in the previous survey and is based on Triple Graph Grammars and the Eclipse Modeling Framework. It supports bidirectional transformation, but without support for incremental evaluation. Another approach covered by Kusel [34] is EMF-IncQuery [36], which combines the query language of VIATRA2 with EMF models. It caches query results and updates those values leveraging the notification mechanism provided by EMF. In [37], template-based M2T transformations are extended by adding signatures to them, such that templates are only computed if their corresponding signature has changed. However, signatures can either be generated automatically, sometimes not covering the complete dependencies, or manually, which is time-consuming and error-prone [37]. Using our approach, such dependencies are automatically calculated and ensured at runtime.

Bergmann et al. showed incremental model transformation [38] using a RETE-based pattern matcher storing matched left-hand sides and updating them incrementally upon changes. The main difference is that our approach is only applicable to trees with overlay graphs, but provides much better support for complex analyses and not just plain transformations.
VI. CONCLUSION AND FUTURE WORK

In this work, we enumerated several challenges concerning the use of runtime models with a particular focus on scalability. After a brief introduction of previous work, we described our approach involving the use of reference attribute grammars to model and reason about the state of a system. We showed that our approach, RAG-controlled rewriting, is suitable to describe hierarchical runtime models and corresponding analyses using attributes. Furthermore, those analyses are incrementally evaluated, as only those parts of the analysis affected by model updates are re-evaluated. Hence, unnecessary and possibly costly evaluations can be skipped.

Using synthesized models of varying size, we achieved a speed-up of 35 to 87% w.r.t. processing time and 1.8 to 16% w.r.t. cache usage. Thus, it is possible to efficiently use runtime models within self-adaptive systems despite many small changes affecting only parts of those models. In addition, runtime models, their analyses and updates are declaratively specified using well-founded metacompiler technologies.

In the near future, we plan to investigate models of bigger and realistic use cases, evaluate the memory usage and use our approach to develop practical MQuAT applications. Furthermore, we strive to implement existing benchmark cases, like the TTC 2015 Train Benchmark Case [39], to allow for better comparison with other tools. In the long term, we envision a heuristic approach for solving the optimization problem described in Section II, completely implemented using RACR, thus fully utilizing incremental evaluation.

ACKNOWLEDGMENTS

This work is partly supported by the German Research Foundation (DFG) within the Collaborative Research Center 912 "Highly Adaptive Energy-Efficient Computing" and the cluster of excellence cfaed.

REFERENCES

[1] G. Blair, N. Bencomo, and R. B. France, "Models@run.time," Computer, vol. 42, no. 10, Oct. 2009.
[2] J. Kephart and D. Chess, "The vision of autonomic computing," Computer, vol. 36, no. 1, Jan. 2003.
[3] M. Broy, M. V. Cengarle, and E. Geisberger, "Cyber-physical systems: Imminent challenges," in Large-Scale Complex IT Systems: Development, Operation and Management, ser. LNCS, vol. 7539. Springer, 2012.
[4] R. Lemos et al., "Software Engineering for Self-Adaptive Systems: A Second Research Roadmap," in Software Engineering for Self-Adaptive Systems II, ser. LNCS, vol. 7475. Springer, 2013.
[5] S. Götz, "Multi-Quality Auto-Tuning by Contract Negotiation," PhD Thesis, Technische Universität Dresden, Apr. 2013.
[6] F. Fleurey and A. Solberg, "A Domain Specific Modeling Language Supporting Specification, Simulation and Execution of Dynamic Adaptive Systems," in Model Driven Engineering Languages and Systems, ser. LNCS, vol. 5795. Springer, Oct. 2009.
[7] Y. Chen, J. Dunfield, M. A. Hammer, and U. A. Acar, "Implicit Self-adjusting Computation for Purely Functional Programs," in ICFP. ACM, 2011.
[8] H. Sundmaeker, P. Guillemin, P. Friess, and S. Woelfflé, Vision and Challenges for Realising the Internet of Things. EUR-OP, 2010.
[9] F. Francois et al., "Kevoree Modeling Framework (KMF): Efficient modeling techniques for runtime use," University of Luxembourg, Tech. Rep. TR-SnT-2014-11, May 2014.
[10] C. Bürger, "Reference Attribute Grammar Controlled Graph Rewriting: Motivation and Overview," in Software Language Engineering: 8th International Conference. ACM, 2015.
[11] G. Hedin, "Reference attributed grammars," Informatica (Slovenia), vol. 24, no. 3, 2000.
[12] D. E. Knuth, "Semantics of context-free languages," Mathematical Systems Theory, vol. 2, no. 2, 1968.
[13] U. Aßmann et al., "A Reference Architecture and Roadmap for Models@run.time Systems," in Models@run.time: Foundations, Applications, and Roadmaps, ser. LNCS, vol. 8378. Springer, 2014.
[14] L. A. Wolsey, Integer Programming. John Wiley & Sons, 1998, vol. 42.
[15] S. Martello and P. Toth, Knapsack Problems: Algorithms and Computer Implementations. John Wiley & Sons, 1990.
[16] T. Reps, "Generating language-based environments," Cornell University, Tech. Rep., 1982.
[17] C. Bürger, S. Karol, C. Wende, and U. Aßmann, "Reference attribute grammars for metamodel semantics," in Software Language Engineering, ser. LNCS, vol. 6563. Springer, 2011.
[18] M. Shaw and D. Garlan, Software Architecture: Perspectives on an Emerging Discipline. Prentice Hall, Englewood Cliffs, 1996, vol. 1.
[19] M. Jourdan, "An optimal-time recursive evaluator for attribute grammars," in International Symposium on Programming: 6th Colloquium, ser. LNCS, vol. 167. Springer, Apr. 1984.
[20] J. Paakki, "Attribute grammar paradigms—a high-level methodology in language implementation," ACM Computing Surveys (CSUR), vol. 27, no. 2, 1995.
[21] P. Avgustinov, T. Ekman, and J. Tibble, "Modularity first: A case for mixing AOP and attribute grammars," in AOSD '08: Proceedings of the 7th International Conference on Aspect-Oriented Software Development. ACM, Apr. 2008.
[22] F. Baader and T. Nipkow, Term Rewriting and All That. Cambridge University Press, 1999.
[23] C. Bürger et al., "Using Reference Attribute Grammar-Controlled Rewriting for Energy Auto-Tuning," in 10th International Workshop on Models@run.time, Sep. 2015.
[24] M. Alia et al., "A Component-Based Planning Framework for Adaptive Systems," in On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE, ser. LNCS, vol. 4276. Springer, Oct. 2006.
[25] H. Chang, P. Collet, A. Ozanne, and N. Rivierre, "From Components to Autonomic Elements Using Negotiable Contracts," in Autonomic and Trusted Computing, ser. LNCS, vol. 4158. Springer, Sep. 2006.
[26] C. Krupitzer et al., "A survey on engineering approaches for self-adaptive systems," Pervasive and Mobile Computing, vol. 17, Part B, Feb. 2015.
[27] F. Fouquet et al., An Eclipse Modelling Framework Alternative to Meet the Models@Runtime Requirements, ser. LNCS. Springer, 2012, vol. 7590.
[28] T. Hartmann et al., "Reasoning at Runtime using time-distorted Contexts: A Models@run.time based Approach," in 26th International Conference on Software Engineering and Knowledge Engineering (SEKE'14). Knowledge Systems Institute Graduate School, USA, Jul. 2014.
[29] P. J. Guo and D. Engler, "Towards Practical Incremental Recomputation for Scientists: An Implementation for the Python Language," in Proceedings of the 2nd Conference on Theory and Practice of Provenance, ser. TAPP'10. USENIX Association, 2010.
[30] S. Burckhardt et al., "Two for the Price of One: A Model for Parallel and Incremental Computation," in OOPSLA '11. ACM, 2011.
[31] S. E. Hudson, "Incremental attribute evaluation: A flexible algorithm for lazy update," ACM Transactions on Programming Languages and Systems (TOPLAS), vol. 13, no. 3, 1991.
[32] U. A. Acar, "Self-Adjusting Computation," PhD Thesis, Carnegie Mellon University, May 2005.
[33] T. Mens and P. Van Gorp, "A Taxonomy of Model Transformation," Electronic Notes in Theoretical Computer Science, vol. 152, Mar. 2006.
[34] A. Kusel et al., "A Survey on Incremental Model Transformation Approaches," in Models and Evolution Workshop. CEUR-WS.org, 2013.
[35] A. Anjorin, M. Lauder, S. Patzina, and A. Schürr, "eMoflon: Leveraging EMF and Professional CASE Tools," Informatik, 2011.
[36] G. Bergmann, Z. Ujhelyi, I. Ráth, and D. Varró, "A graph query language for EMF models," in Theory and Practice of Model Transformations, ser. LNCS, vol. 6707. Springer, 2011.
[37] B. Ogunyomi, L. M. Rose, and D. S. Kolovos, "On the Use of Signatures for Source Incremental Model-to-text Transformation," in Model-Driven Engineering Languages and Systems, ser. LNCS, vol. 8767. Springer, 2014.
[38] G. Bergmann et al., "Incremental Pattern Matching in the Viatra Model Transformation System," in Proceedings of the Third International Workshop on Graph and Model Transformations. ACM, 2008.
[39] G. Szárnyas, O. Semeráth, I. Ráth, and D. Varró, "The TTC 2015 Train Benchmark Case for Incremental Model Validation," 8th Transformation Tool Contest (TTC 2015), 2015.