Towards Efficiently Running Workflow Variants by Automated Extraction of Business Rule Conditions

Markus Döhring, SAP Research Darmstadt, Bleichstraße 8, 64283 Darmstadt, Germany, markus.doehring@sap.com
Christo Klopper, SAP Deutschland, Hasso-Plattner-Ring 7, 69190 Walldorf, Germany, christo.klopper@sap.com
Birgit Zimmermann, SAP Research Darmstadt, Bleichstraße 8, 64283 Darmstadt, Germany, birgit.zimmermann@sap.com

ABSTRACT

Efficient workflow variant management is becoming crucial, especially for enterprises with a large process landscape. Our research fosters the combination of business rules for adapting reference workflows at runtime and tailoring them to many different situations. A main goal is to optimize the performance of workflow instances w.r.t. different aspects, e.g., branching decisions, throughput time or compliance. Having a data mining procedure at hand which can automatically extract potentially useful conditions from execution logs to create new variants is therefore a very significant benefit. The extracted conditions could be conveniently reused within the business rules of our framework, which can handle the deviations at runtime for those special situations. However, most existing data mining techniques either do not describe a continuous mining pipeline for getting from workflow logs to problematic context conditions for new variant creation, or are difficult for business people to interpret. Therefore we present an integrated rule mining methodology, starting with the semi-automatic discovery of "hot spots" within workflow instance logs. Then, data variables of instances related to these hot spots are translated into a data mining classification problem. Unlike related approaches, we employ a fuzzy rule learning algorithm, yielding easily interpretable and reusable conditions for variants. We also provide first insights from a case study at a consulting company and corresponding open research challenges.

Categories and Subject Descriptors

H.2.8 [Database Management]: Database Applications—Data Mining; H.4.1 [Information Systems Applications]: Office Automation—Workflow Management; D.2.2 [Software Engineering]: Design Tools and Techniques

Keywords

workflow, business rules, process mining, process performance, rule learning

23rd GI-Workshop on Foundations of Databases (Grundlagen von Datenbanken), 31.05.2011 - 03.06.2011, Obergurgl, Austria. Copyright is held by the author/owner(s).

1. INTRODUCTION

Workflow management systems (WfMS) are becoming an essential part of most industrial IT system landscapes [19]. For some domains, traditional WfMS have already been determined as unsuitable to cover prevalent requirements w.r.t. the flexibility of workflows [7]. In order to address the challenge of managing workflow variants (i.e. workflows with slight deviations from a "reference workflow") at design-time as well as their dynamic adaptation at runtime due to changing data contexts, we have proposed the integration of business rules containing adaptation operations on adaptive segments in reference workflows [10].

In many practical scenarios, it is unrealistic that process analysts are able to define all variants and exceptions in a workflow. Especially when a WfMS is introduced in a company, but also if workflow models are already mature, environmental changes may lead to shifts in the impact factors on process performance. A potential relief for making such blind spots in workflow execution visible is the application of process mining techniques. The goal is to find data dependencies for weak spots in the workflows and make them available as conditions for additional business rules leading to new workflow variants. Existing work has partly addressed these issues, each with a relatively isolated view on e.g. bottleneck detection or dependency mining. Results w.r.t. an integrated "mining pipeline" for a business user are however still quite unsatisfying. For example, prevalent approaches leave the user with a mined decision tree which, as we will show, might be hard to read for real-world workflow logs. Instead, we aim at a pipeline from a workflow definition in an understandable notation over automated mining application to interpretable business (variant) rules.

Our approach is based on the general idea of rule-based workflow adaptation as described in Section 2. As a solution to the above challenges, in Section 3 we present a mining methodology which we consider promising as a suitable mining pipeline for a business user. For each of the methodology's three generic steps, concrete technologies and their wiring are explicated, especially the employment of a fuzzy mining approach for ruleset extraction. We then present first learnings from a case study on real-world workflow execution data building upon our methodology in Section 4 and summarize challenges which have to be solved to fully implement our methodology in Section 5. In Section 6 we discuss related research, before we conclude in Section 7 and state remaining issues for future work.

2. FLEXIBILIZATION OF WORKFLOWS BY ADAPTATION RULES

Our methodology for condition extraction is motivated by a general approach for workflow adaptation [10, 9]. It is considered essential to establish a basic understanding of the nature of business (variant) rules as targeted for being automatically mined. Our framework as well as the examples in this paper rely on BPMN2 [1], because its notation is a de-facto industry standard which was designed to be understandable for business users. Basically, the framework consists of three conceptual building blocks for workflow variant management and flexible workflow adaptation:

1. Adaptive Segments in BPMN2 Reference Workflows: An adaptive segment demarcates a region of a workflow which may be subject to adaptations at runtime when entering the segment. It corresponds to a block-structured part of the workflow, i.e. a subgraph which has only one incoming and one outgoing connection. In special cases, adaptive segments can also be "empty". What matters is that they correspond to valid BPMN2 workflow definitions and not to a kind of white box which is left empty for later filling. We have extended the BPMN2 metamodel to capture the special semantics of adaptive segments [9].

2. Workflow Adaptations Defined in BPMN2: The actual definition of potential adaptations which can take place at runtime has been proposed as a pattern catalogue [10] which also relies on BPMN2 notation, with the benefit that adaptation patterns are comprehensible and extensible. The catalogue contains basic adaptations like SKIP or INSERT, but also more sophisticated event- and time-related patterns, like "event-based cancel and repeat" or "validity period". Every adaptation pattern has the block-structured adaptation segment as an obligatory input parameter. As such, patterns can be conveniently nested and combined.

3. Linking Adaptations to Data Contexts by Business Rules: Business (variant) rules are used to apply suitable adaptations for different situations expressed by data context conditions. The data context can be globally valid (like a date) or workflow instance specific (like an order value). A pseudo-syntax for variant rules, where * stands for 0-n repetitions, can be defined as: ON entry-event IF <condition> THEN APPLY [<adaptation>]*

Once the general relations of adaptive segments and potential adaptations have been established by a process analyst, the conditions could be maintained by a business user, e.g., via a domain-specific language. For automatic rule extraction, in this work we therefore especially focus on the IF-part of potentially newly discovered variant rules and aim at revealing data dependencies for variants which are not a-priori known, but have significant implicit impact on the overall business performance of workflow execution.

Figure 2 exemplifies the above concepts based on a ship engine maintenance workflow fragment. The actual conduction of engine tests for a ship may depend on the harbor in which it currently resides. Due to environmental restrictions, many different harbors impose specific time constraints on ships conducting engine tests. In Hamburg for example, ships may only have 12h time, after which devices need to be reset and the tests need to be restarted. For adapting the workflow correspondingly, a generic parameterizable template is used and weaved with the segment at runtime.

[Figure 2: Ship engine maintenance fragment with an adaptive segment and the variant rule: IF dockyardStation==Hamburg THEN APPLY NonFailableTimedHandler(measurements, time=12h, handlerTask=ResetDevices)]

[Figure 1: Outline of the Rule Extraction Procedure. Circular pipeline over workflow model, rules and workflow logs: (1) specify expectations against your process by selecting problem categories (control-flow/BPMN model, KPIs, behaviour constraints in SCIFF or LTL, ...); (2) automatic detection of "hot spots" (BPMN to petri net conversion, conformance checking, performance checking/bottlenecks, aggregation of problems to hot spots); (3) automatic filtering of "responsible" data dependencies (transformation of hot spots into a classification problem, fuzzy rule learning for data dependencies of hot spots); finally, extraction of an adaptation rule for workflow improvement.]

3. METHODOLOGY FOR VARIANT RULE CONDITION EXTRACTION

As already stated, we are interested in automatically extracting condition constraints (the "IF-part") for potentially useful workflow adaptation rules within our framework. Useful in this respect means that the condition constraints should describe eventually problematic situations in workflow instances by means of their data context values, such that a timely adaptation of a workflow instance can eventually prevent such a situation. Our proposed methodology is illustrated in Figure 1 in a circular manner. The methodology is divided into three main phases explained in detail in the following subsections. For each phase, concrete concepts and technologies for implementing the methodology are discussed and open challenges are outlined where existing.

3.1 Formulation of Log Expectations

The first phase of our methodology consists in the definition of expectations towards a set of workflow instance logs. Correspondingly, there are two obligatory input components for the extraction pipeline: a workflow model and a sufficiently large set of workflow instance logs belonging to the model. The instance logs must contain workflow-relevant events, like at least the start or finishing timestamps of tasks, together with the corresponding data context variables¹. Since we want to target business users with our rule extraction approach, we consider BPMN as an appropriate input format for the expected control-flow logic restricting the expected order of task executions and event occurrences in the input logs.

As an optional input, additional constraints w.r.t. workflow execution can be provided in some form of logic. These constraints may concern time-related interdependencies of events within a workflow instance log, whereas typical key performance indicators (KPIs) like throughput times can be understood as a subset of such time constraints. But also other more sophisticated circumstances which are hard to model in BPMN2 graph structures can be provided as logical constraints, as for instance that a task A should be executed N times after the occurrence of task B. Suitable logics to formulate such process-related constraints can for example be based on the SCIFF framework [5] or linear temporal logic (LTL) [17]. Since a regular business user may not be familiar or feel comfortable with such logics, it is recommended to provide constraint templates, i.e. small chunks of logic mapped to easily parameterizable pieces of restricted natural language for constraint maintenance.

¹ It is hard to give generally valid recommendations on data size characteristics, but from experience reasonable mining can start from 1000 instances with about 5 context variables.

3.2 Automatic Discovery of "Hot Spots"

For the ability to apply established mining and analysis techniques on the instance logs in combination with the workflow model, it is useful to first transform the BPMN workflow definition into a pure formal representation, e.g. in terms of petri net graphs, which are backed by a long trail of research and corresponding toolsets. Transformation mechanisms which are able to map a large part of BPMN constructs to petri net constructs exist [8] and can be employed within our methodology. The next phase of our methodology then consists in the automatic discovery of problematic spots in the instance logs, relating to different issues:

1. Non-conformance to defined workflow model: Using log-replay approaches on the petri net model as presented in [16], it can be determined whether instances behave exactly according to the underlying model or whether there are deviations. Provided the petri net has been suitably constructed, such deviations can be structurally spotted as petri net places where tokens are left over after an instance has been finished or where tokens often are missing when a transition should be fired. In most of the latter cases, a distinct transition (=BPMN task) can be "blamed" for causing the non-conformance. Places and transitions with a relatively high error-rate are kept for further analysis within our methodology.

2. Disproportionate delays (bottlenecks): Similar to the above petri net log-replay techniques, the sojourn times of tokens in places and the times it takes to execute transitions can be stored [12]. Based on this computed data, it can be determined where instances on average get stuck for a disproportionate amount of time related to the average overall throughput time. The corresponding threshold values can be computed automatically if they are not explicitly formulated as KPI constraints, which is discussed below. Again, concerned places and transitions are kept for analysis.

3. Non-conformance to execution constraints: SCIFF or LTL constraints can be checked on the instance logs using approaches from [5] resp. [17] with respect to their violation. The employment of constraint checking allows for a very broad range of non-conformance types being checked. Three of the most important ones are:

• The violation of KPIs by the use of time-related constraints (for example, task B has to be executed 1h after task A latest).
• The deviation from expected routing decisions (for example if orderValue>10.000 in a sales order, always choose the "priority shipment" branch after an exclusive gateway).
• Data- or organizational incompliance like the violation of the "four-eyes principle" for some tasks.

In contrast to the checking mechanisms for issues (1.) and (2.), a challenge consists in the spotting of the actual source for a constraint violation. For our KPI example (B 1h after A), if B is not executed at all, it has to be decided whether A or not-B or both are to be considered as the actual error source and kept for further analysis. Potentials lie in the partly automated mapping of constraint predicates to places or transitions in the underlying model and the consideration of "what happened first". Research is still ongoing here.

As a final step of this phase, the user is confronted with issues which have a particular degree of "severity" (e.g. exceed a predefined fraction of instances which are non-conformant) and gets the corresponding "hot spots" based on average instance execution marked in the BPMN process model. The proper automatic accumulation and back-projection of issues to the BPMN workflow model remains an open issue. The user may then select one or several hot spots and one or several problem types for these hot spots for further analysis by mining data dependencies as business rule conditions, as described in the next subsection.

3.3 Automatic Extraction of Rules for "Hot Spot Occurrences"

For the selected hot spots and problem types, the instance data from the workflow logs is transformed into a classification problem for machine learning algorithms. A classification problem consists of a number of cases (=workflow instances), each made up of a number of numeric or nominal data variable values (=workflow instance or task context, e.g. order value, customer priority or shipment partner) and a single class in terms of a category for a learning instance. The class can be determined in a binary manner as problematic or non-problematic from the problem types connected to the hot spots, but also the distinction of finer-granular problem classes can be considered. The variable values for a learning instance can be constructed by looking at their occurrence when an instance has reached a hot spot in the petri net. Special challenges in this conversion step concern the treatment of some control-flow constructs, as for example a loop which may cause multiple visits of a hot spot in a workflow, whereas the context variables may have changed meanwhile. Such problems and solution approaches, for instance creating a separate training instance for each loop execution, are discussed, e.g., in [15].

Having the training set for a machine learning classifier at hand, established algorithms like the C4.5 decision tree [14] or rule learners [6, 11] can be applied. In fact, recent research mostly favors decision trees for presenting mining results to the business user [18]. However, we have tested the C4.5 decision tree learner on a real-world dataset (see Section 4) and found its results not interpretable for the business user to draw any reasonable conclusions from, mainly due to the size and complexity of the overall decision tree. Despite ex-post global optimization heuristics in C4.5, local feature selection often leads to redundant splits in the initial decision trees. As rules can only be extracted one-by-one along paths in the decision tree [11], they are of rather little use for directly extracting conditions for use in adaptation rules that might eventually tackle the problematic situation at workflow runtime. The problem with established rule learners like RIPPER [6] in turn is that they generate ordered rule lists, which means each rule in the list covers only those learning instances which are not covered by the previous rule. This characteristic makes the corresponding output rules also hard to read and interpret for an end user. Potential relief consists in the employment of a fuzzy learning approach which generates globally valid rules that have a probabilistic certainty factor to hold on the dataset or not. We are currently evaluating a novel algorithm [13] w.r.t. its suitability for being employed within our methodology, which is subject to discussion in the following section.

4. CASE STUDY

The first feasibility study for our methodology was conducted at a large globally operating IT consulting company. In the following, we report on the input dataset, the realization of our methodology in the ProM² framework, and our preliminary results and findings.

² http://www.promtools.org/prom5/

4.1 Description of the Dataset

The focus of the case study is on a staffing workflow for serving customer and company-internal human resource requests for different types of IT projects. A simplified corresponding model in BPMN notation is shown in Figure 3. The first three sequential steps are creating and submitting the request and then having it validated by an authorized person. Resources can be found by three different strategies: by company-internal broadcasts, by external broadcasts to partner consulting companies or by directly contacting a potentially suitable resource. After at least one such search procedure has been triggered, different reactions can occur, namely the acceptance, rejection, withdrawal or feedback of non-availability for a particular resource. At any time during these search procedures, an initial proposition of currently gathered resources can be made to the customer. After the request is closed, it is marked as either successfully or not staffed. The input dataset consisted of 13225 workflow instance logs, each with up to 50 data context variable values attached. In this case, context variables concern for example the country a request is sent from, the concerned industry profile or the overall duration of the project.

4.2 Realizing the Methodology based on ProM

For some basic analysis techniques, we rely on functionality provided by ProM. The translation of the BPMN model into a petri net was done manually, as automated mapping approaches still generated too complex results which could make first mining and analysis efforts more difficult. The resulting petri net is shown in the upper middle of Figure 4. Black boxes indicate "silent" transitions which do not correspond to any task in the BPMN model. On the upper left side, one of the additional constraints provided by the consulting company for its staffing workflows is shown, i.e. that before or at least in parallel to an external broadcast, there should also be an internal broadcast trying to gather the required resources. The lower left window shows the evaluation results of these rules. In the right window, the petri net-based bottleneck analysis indicates an overproportional waiting time between request submission and request validation (concrete values in the figure have been changed for anonymization purposes). In the lower middle window, we see an instance marked with a conformance issue, namely that the request validation sometimes has been left out or was conducted only after another task already was executed. Combining these information types, we would identify the validation task as a "hot spot" in the process.

For our first analysis purpose however, we have concentrated on the decision whether a request has been staffed or not. Following [15], we turn the decision into a binary classification problem using a manually selected subset of context variables that have occurred during instance execution. The results are presented in the following.

4.3 Preliminary Results

Running a C4.5 decision tree learner (J48 implementation) with standard parameters yields a decision tree of size 757 with 644 leaves. It is quite obvious that this output type would need considerable time to be interpreted by a business user. Leaving aside the rule learning algorithms for ordered rule lists, we instead applied the fuzzy rule induction algorithm presented in [13]. Results were very promising, for example generating the following output (some context values changed for anonymization):

(Remote = Y) and (ReqingSRegion = DUCKBURG) and (ReqType = Project)
  => class=Branch 4.1 { ROLE_Closed (Not Staffed)/complete } (CF = 0.61)
(ReqingSRegion = NA) and (StartDateFlexible = Yes) and (ReqingLOB = FS__Consulting) and (CustIndustry = )
  => class=Branch 4.1 { ROLE_Closed (Not Staffed)/complete } (CF = 0.71)
(Remote = N) and (ContractType = ) and (CustIndustry = UTILITIES) and (JobText = B) and (Requestor = ABC) and (StartDateFlexible = No)
  => class=Branch 4.1 { ROLE_Closed (Not Staffed)/complete } (CF = 0.53)
(Remote = ) => class=Branch 4.2 { ROLE_Staffed/complete } (CF = 0.73)
(Remote = Y) => class=Branch 4.2 { ROLE_Staffed/complete } (CF = 0.7)
(ReqingSRegion = GOTHAM_CITY) => class=Branch 4.2 { ROLE_Staffed/complete } (CF = 0.72)
(StartDateFlexible = No) => class=Branch 4.2 { ROLE_Staffed/complete } (CF = 0.72)

Manual inspection of the instances characterized e.g. by the first two rules immediately showed that they in fact constitute problematic situations in the staffing workflows. In a flexible WfMS according to Section 2, these conditions could now be reused as a condition for a variant rule with the click of a button, for example inserting additional activities in the workflow to handle the problematic situation, or not even trying specific activities because of a potential waste of time.

5. OPEN CHALLENGES

For a better overview and to motivate future work in this area, the main challenges we experienced while setting up the mining pipeline are briefly recapitulated:

• A petri net conversion most useful for mining purposes has to be determined, as straight-forward mappings have problems with more advanced BPMN constructs or generate valid but overcomplex petri nets.
• The accumulation and aggregation of hot spots from the petri net-based and especially the constraint-based checking methods has to be defined in more detail. This challenge is connected to linking back hot spots to the BPMN model for further investigation.
• The conversion of hot spots to a classification problem has to be advanced w.r.t. problematic control-flow structures, as for example loops or special joins.
• For the classification problem, the selection of context variables and algorithm parameters has to be made accessible for a business user. Experiments also showed that the rule output may vary significantly w.r.t. the predicates used in the rules. We have to find a way for stabilizing the rule output, e.g. by modifying the learning algorithm w.r.t. this goal and not only taking prediction accuracy into account.

6. RELATED WORK

Due to space restrictions, we do not cover the broad range of general process mining approaches in this section, but rather elaborate on selected approaches which tackle the issue of dependency- or constraint-extraction from workflow logs. The authors of [15] present the idea of decision point mining in workflows by translating a routing decision into a classification problem for machine learning. In this work, we generalize this idea also for problem domains in workflow execution like bottlenecks or general rule compliance. In [18], a pipeline for analyzing influential factors of business process performance is presented. Some of the steps resemble those of our approach; however, e.g. decision trees are used for dependency analysis, and the approach is evaluated on a simulated dataset. As we have motivated, decision trees are rather unsuited for direct extraction of globally valid "hot spot" conditions for a business user on real-world data. An approach for learning constraints for a declarative workflow model is presented in [4], however focusing on control-flow constraints and neglecting data dependencies. In [3], related to HP's solution for business operation management, an overview of the suitability of different mining techniques for specific analysis types is given. Rule extraction is mentioned, but only as rules derived from decision trees, which as discussed may get too complex for our purposes. The approach in [2] focuses on dependencies of service-level agreements for service compositions and analyzes reasons for SLA violations. In contrast to our approach, where dependencies are extracted from historic data, the dependencies in [2] are identified at design time for later comparison with monitoring results at runtime.

7. CONCLUSION

We motivated the need for automated extraction of condition constraints for problematic "hot spots" in workflows by the initial uncertainty of a modeler when introducing a flexible WfMS and by rapidly changing impact factors on workflow execution performance. Existing approaches for data dependency extraction have turned out not to deliver conveniently interpretable results on real-world datasets and were considered generally hard to employ for business users. Therefore, in this work we have proposed a methodology which starts from a BPMN workflow definition with a set of additional template-based constraints and transforms the workflow into a petri net for automatic hot-spot discovery according to rule-conformance, control-flow-conformance and bottleneck detection. The hot spots in turn are transformed into a classification problem for further mining algorithms which should explain the data dependencies characterizing the problem. One key differentiator to other approaches is the use of a fuzzy rule induction approach, which delivers globally valid and interpretable rules. Our approach especially aims at providing the corresponding conditions for reuse in adaptation rules which improve the overall workflow performance by circumventing critical situations.

However, some integration steps between the phases of our methodology, like a BPMN to petri net translation suitable for mining purposes, the aggregation of problem situations to hot spots or the guided parameter selection for the rule mining algorithm, remain subject to future work.

8. REFERENCES

[1] Business Process Model and Notation (BPMN) - Version 2.0 11-01-03, 2011.
[2] L. Bodenstaff, A. Wombacher, M. Reichert, and M. C. Jaeger. Monitoring Dependencies for SLAs: The MoDe4SLA Approach. SCC'08, pages 21–29, 2008.
[3] M. Castellanos, F. Casati, U. Dayal, and M.-C. Shan. A Comprehensive and Automated Approach to Intelligent Business Processes Execution Analysis. DAPD, 16(3):239–273, Nov. 2004.
[4] F. Chesani, E. Lamma, P. Mello, M. Montali, F. Riguzzi, and S. Storari. Exploiting Inductive Logic Programming Techniques for Declarative Process Mining, pages 278–295. Springer, 2009.
[5] F. Chesani, P. Mello, M. Montali, F. Riguzzi, and S. Storari. Compliance Checking of Execution Traces to Business Rules. In BPM'08 Workshops, pages 129–140, Milan, 2008. Springer.
[6] W. W. Cohen. Fast Effective Rule Induction. In ML'95, pages 115–123, 1995.
[7] P. Dadam and M. Reichert. The ADEPT Project: A Decade of Research and Development for Robust and Flexible Process Support. CSRD, 23(2):81–97, 2009.
[8] R. M. Dijkman, M. Dumas, and C. Ouyang. Semantics and Analysis of Business Process Models in BPMN. IST, 50(12):1281–1294, 2008.
[9] M. Döhring and B. Zimmermann. vBPMN: Event-Aware Workflow Variants by Weaving BPMN2 and Business Rules. In EMMSAD'11, London, 2011. Springer.
[10] M. Döhring, B. Zimmermann, and L. Karg. Flexible Workflows at Design- and Runtime using BPMN2 Adaptation Patterns. In BIS'11, Poznan, 2011. Springer.
[11] E. Frank and I. H. Witten. Generating Accurate Rule Sets Without Global Optimization. In ICML'98, Madison, 1998.
[12] P. Hornix. Performance Analysis of Business Processes through Process Mining. 2007.
[13] J. Hühn and E. Hüllermeier. FURIA: An Algorithm for Unordered Fuzzy Rule Induction. DMKD, 19(3):293–319, Apr. 2009.
[14] J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers Inc., 1993.
[15] A. Rozinat and W. van der Aalst. Decision Mining in Business Processes, 2006.
[16] A. Rozinat and W. M. P. van der Aalst. Conformance Checking of Processes Based on Monitoring Real Behavior. IS, 33(1):64–95, 2008.
[17] W. M. P. van der Aalst, H. T. de Beer, and B. F. van Dongen. Process Mining and Verification of Properties. In OTM Conferences (1), pages 130–147, Agia Napa, 2005. Springer.
[18] B. Wetzstein, P. Leitner, F. Rosenberg, I. Brandic, S. Dustdar, and F. Leymann. Monitoring and Analyzing Influential Factors of Business Process Performance. EDOC'09, pages 141–150, 2009.
[19] C. Wolf and P. Harmon. The State of Business Process Management 2010, 2010.
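As an illustration of the variant-rule mechanism of Section 2 (pseudo-syntax: ON entry-event IF condition THEN APPLY adaptations), the following minimal Python sketch shows how such rules could be dispatched at runtime. The event name and the dictionary-based rule representation are our illustrative assumptions, not the paper's actual framework; only the dockyardStation condition and the NonFailableTimedHandler adaptation come from the Figure 2 example.

```python
# Minimal sketch of rule-based runtime adaptation (cf. Section 2).
# A variant rule = entry event + condition over the data context
# + a list of adaptations. All structure here is illustrative only.

rules = [
    {
        "on": "enter:EngineTestSegment",
        "if": lambda ctx: ctx.get("dockyardStation") == "Hamburg",
        "apply": ["NonFailableTimedHandler(measurements, time=12h, handlerTask=ResetDevices)"],
    },
]

def adaptations_for(event, context, rules):
    """Return all adaptations whose rule fires for this event and context."""
    result = []
    for rule in rules:
        if rule["on"] == event and rule["if"](context):
            result.extend(rule["apply"])
    return result
```

A workflow engine entering an adaptive segment would raise the entry event, evaluate the (possibly mined) IF-conditions against the instance's data context, and weave the returned adaptation patterns into the segment.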
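The log-replay idea behind hot-spot detection (Section 3.2, following [16]) can be sketched for the special case of purely sequential petri nets. Real conformance checkers handle arbitrary nets, silent transitions and concurrency, which this simplified sketch deliberately omits; the task and place names are hypothetical.

```python
# Simplified token-based log replay (cf. Section 3.2), restricted to
# sequential nets: each transition has one input and one output place.
# Missing tokens (forced firings) and tokens remaining after completion
# indicate non-conformance and point to candidate "hot spots".

def replay(trace, net, initial_place, final_place):
    """net maps task name -> (input place, output place)."""
    marking = {initial_place: 1}
    missing = 0
    for task in trace:
        p_in, p_out = net[task]
        if marking.get(p_in, 0) > 0:
            marking[p_in] -= 1
        else:
            missing += 1           # token artificially created to fire
        marking[p_out] = marking.get(p_out, 0) + 1
    if marking.get(final_place, 0) > 0:
        marking[final_place] -= 1  # consume the expected final token
    else:
        missing += 1
    remaining = sum(marking.values())
    return missing, remaining

# Hypothetical fragment of the staffing workflow of Section 4
net = {
    "create":   ("p0", "p1"),
    "submit":   ("p1", "p2"),
    "validate": ("p2", "p3"),
}
```

A conformant trace (create, submit, validate) replays with no missing and no remaining tokens, whereas a trace that skips the validation leaves a token in p2 and misses the final token, flagging the places around the validation task, much like the conformance issue observed in the case study.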
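Finally, the classification-problem conversion of Section 3.3 and the certainty factors attached to the mined rules in Section 4.3 can be illustrated on a toy log. The Laplace-corrected confidence used below is a simplifying stand-in for the fuzzy certainty factor actually computed by FURIA [13]; the variable names mimic the case study, but the data is invented.

```python
# Toy illustration of Section 3.3: workflow instances become learning
# cases (context variables + hot-spot class), and a candidate rule
# condition is scored. The Laplace-corrected confidence below is an
# assumption standing in for FURIA's fuzzy certainty factor.

def matches(condition, instance):
    """A condition is a dict of required context-variable values."""
    return all(instance.get(var) == val for var, val in condition.items())

def certainty_factor(condition, instances, target_class):
    """Laplace-corrected fraction of covered instances in target_class."""
    covered = [i for i in instances if matches(condition, i)]
    positives = sum(1 for i in covered if i["class"] == target_class)
    return (positives + 1) / (len(covered) + 2)

# Invented learning instances derived from staffing workflow logs
instances = [
    {"Remote": "Y", "StartDateFlexible": "No",  "class": "NotStaffed"},
    {"Remote": "Y", "StartDateFlexible": "Yes", "class": "NotStaffed"},
    {"Remote": "N", "StartDateFlexible": "No",  "class": "Staffed"},
    {"Remote": "N", "StartDateFlexible": "Yes", "class": "Staffed"},
    {"Remote": "Y", "StartDateFlexible": "No",  "class": "Staffed"},
]
```

Here the condition (Remote = Y) covers three instances, of which two are "NotStaffed", giving a certainty factor of (2+1)/(3+2) = 0.6; conditions scoring above a threshold would be offered to the business user for reuse as variant-rule conditions.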