A Concept for Self-Explanation of Macro-Level Behaviour in Lifelike Computing Systems

Martin Goller and Sven Tomforde
Intelligent Systems, Christian-Albrechts-Universität zu Kiel, Germany
goller.cau@gmail.com / st@informatik.uni-kiel.de

Copyright ©2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract

The basic idea of developing future 'lifelike' systems is to transfer qualities in technical utilisation that go beyond well-established mechanisms such as self-adaptation, learning, and robustness. In this paper, we argue that the resulting systems will need components to self-explain their behaviour if we want to avoid acceptance issues that result from surprising and irritating system behaviour. Such self-explaining behaviour needs to answer the questions of when, what and how explanations should be provided to the user. We review existing metrics, outline a concept of how to address the 'when' question, and identify corresponding research challenges towards an automated generation of explanations.

I. Introduction

Recent trends in information and communication technology have entailed increasingly autonomous systems that adapt their own behaviour and try to optimise it over time – resulting in so-called self-adaptive and self-organising (SASO) systems. Initiatives such as Organic Computing (Müller-Schloer and Tomforde (2017)) and Autonomic Computing (Kephart and Chess (2003)) are concrete manifestations and pioneers of this trend, which is supported by new applications such as autonomous driving (Levinson et al. (2011)) or the Internet of Things (Weber and Weber (2010)). SASO technology is understood as an approach to keeping the complexity of increasingly integrated, open and dynamic systems manageable, as it is no longer possible to plan for all eventualities in advance at design time. At the same time, the integration of machine learning methods is intended to create novel possibilities to react appropriately to the unknown and, at the same time, to continuously strive for better behaviour.

Even though SASO technology already had its roots in cybernetics and has been drastically strengthened again in the last two decades (perhaps starting from Tennenhouse's Proactive Computing (Tennenhouse (2000)) and Weiser's vision of Pervasive Computing (Weiser (1999))), we can state that controllability and reacting or adapting to the unknown remain the central challenges. This realisation leads, among other things, to the approach of making technical systems even more lifelike, which, e.g., takes up and continues the original motivation of the OC and AC initiatives.

In this article, we note that in addition to the obvious lifelike properties such as evolution and continuous adaptation to the 'living space' or 'environmental niche', as well as focused response mechanisms (to name only the obvious examples), another prominent challenge comes to the fore: the acceptance of such systems by the human users (or better: stakeholders or influenced persons). This raises the question of how such systems can explain their behaviour to humans in an automated way, which in turn leads directly to two crucial questions: When are explanations of behaviour necessary? And: What needs to be explained?

From a developer's point of view, this primarily means that we need a concept to answer the question of 'when', which then enables us to answer the 'what'. Therefore, this article describes a concept to measure system behaviour, whereupon abnormal behaviour of these measurements serves as an answer to the question of 'when'.

Building on recent work, we present a measurement framework for system behaviour that forms the basis for such an explanation framework (Section II). In addition, we discuss further variables that can be relevant for lifelike behaviour and can therefore be integrated into the framework. On this basis, we discuss a concept to automatically detect events that serve as triggers for self-explanations, combined with a principled possible use to answer the question of 'what' (Section III). Since this is intended as a first concept, we highlight the most urgent research challenges towards automatically generating the resulting self-explanations. The article concludes with a summary and an outlook on how the defined concepts can be explored and implemented.
II. A Measurement Framework for Macro-Level Behaviour Assessment

The basis of our approach to self-explanatory mechanisms of lifelike technical systems is the possibility of quantifying system behaviour by means of (external) observation. To this end, in this section, we first present the system model that we currently assume and that can form the basis for future lifelike systems. Using this system model, we then explain existing and potential approaches for quantifying system properties.

System Model

In this article, we refer to a technical system S as a collection A of autonomous subsystems ai that are able to adapt their behaviour based on self-awareness of the internal and external conditions. We further assume that such a subsystem is an entity that interacts with other entities, i.e., other systems, including hardware, software, humans, and the physical world with its natural phenomena. These other entities are referred to as the environment of the given system. The system boundary is the common frontier between the system and its environment.

Each ai ∈ A is equipped with sensors and actuators (both physical or virtual). Internally, each ai consists of two parts: the productive system part PS, which is responsible for the basic purpose of the system, and the control mechanism CM, which controls the behaviour of the PS (i.e., performs self-adaptation) and decides about relations to other subsystems. In comparison to other system models, this corresponds to the separation of concerns between the System under Observation and Control (SuOC) and the Observer/Controller tandem (Tomforde et al. (2011)) in the terminology of Organic Computing (OC) (Tomforde et al. (2017b)), or between Managed Resource and Autonomic Manager in terms of Autonomic Computing (Kephart and Chess (2003)). Figure 1 illustrates this concept with its input and output relations. The user describes the system purpose by providing a utility or goal function U which determines the behaviour of the subsystem. The user usually takes no further action to influence the decisions of the subsystem. Actual decisions are taken by the productive system and the CM based on the external and internal conditions and on messages exchanged with other subsystems. We model each subsystem as acting autonomously, i.e., there are no control hierarchies in the overall system. Please note that, in the context of this article, an explicit local configuration of the PS is necessary – which in turn limits the scope of applicability of the proposed method. Furthermore, each subsystem must provide read access to its configuration.

Figure 1: Schematic illustration of a subsystem ai from (Tomforde et al. (2011)). The arrows from the sensors and PS to the CM indicate observation flows, while the arrows from the CM to the PS and the actuators indicate control flows. Dashed arrows emphasise a possible path that is typically not used. Not shown: the CM is able to communicate with other CMs in the shared environment to exchange information such as sensor readings and to negotiate policies using the communication bus.

At each point in time, the productive system of each ai is configured using a vector ci. This vector contains a specific value for each control variable that can be altered to steer the behaviour, independently of the particular realisation of the parameter (e.g., as a real value, boolean/flag, integer or categorical variable). Each subsystem has its own configuration space, i.e., an n-dimensional space defining all possible realisations of the configuration vector. The combination of the current configuration vectors of all contained subsystems of the overall system S defines the joint configuration of S. We assume that modifications of the configuration vectors are done by the different CMs only, i.e., locally at each subsystem, and are the result of the self-adaptation process of the CM.

This system model describes an approach based on the current state of the art in the field of self-adaptive and self-organising systems. We assume that ongoing research towards more lifelike systems will shift the boundaries in terms of the underlying technology as well as the possibility to alter higher-levelled design decisions – but it will most likely not result in entirely new design concepts. In turn, we assume that fundamental questions will arise about how the CM evolves according to the characteristics of its 'environmental niche', for instance, but the separation of concerns between CM and PS remains visible.
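The system model above can be summarised in a short sketch. The following is our own minimal illustration (class and attribute names are invented for this example, not taken from the paper): each subsystem separates PS and CM, exposes read access to its configuration vector ci, and the joint configuration of S combines all local vectors.

```python
from dataclasses import dataclass, field

@dataclass
class ProductiveSystem:
    """PS: pursues the basic system purpose under a given configuration."""
    config: dict  # configuration vector c_i: control variable -> value

@dataclass
class ControlMechanism:
    """CM: observes conditions and adapts the PS configuration (self-adaptation)."""
    def adapt(self, ps: ProductiveSystem, observations: dict) -> None:
        # Placeholder adaptation rule, purely illustrative:
        # halve the processing rate when the observed load is high.
        if observations.get("load", 0.0) > 0.8:
            ps.config["rate"] = ps.config.get("rate", 1.0) * 0.5

@dataclass
class Subsystem:
    name: str
    ps: ProductiveSystem
    cm: ControlMechanism = field(default_factory=ControlMechanism)

    @property
    def configuration(self) -> dict:
        # Read access to the local configuration, as required by the model.
        return dict(self.ps.config)

def joint_configuration(system: list) -> dict:
    """The joint configuration of S combines all local configuration vectors."""
    return {a.name: a.configuration for a in system}

a1 = Subsystem("a1", ProductiveSystem({"rate": 1.0}))
a2 = Subsystem("a2", ProductiveSystem({"rate": 2.0, "mode": "eco"}))
a1.cm.adapt(a1.ps, {"load": 0.9})   # the CM modifies c_1 locally
print(joint_configuration([a1, a2]))
```

Note that, in line with the model, only the CM writes to the configuration; external observers (such as the explanation loop of Section III) only read it.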
Standard Measures

Traditionally, the success and the behaviour of a technical system are quantified with respect to the primary purpose of the application. In the first place, this directly refers to the system goal. Based on the categorisation proposed in McGeoch (2012), we distinguish between the two performance aspects quality of the solution and time required for the solution. The latter defines how much time the system required to fulfil the given purpose or application – where the time can be given in CPU cycles, in real time, or even in a logical time. Intuitively, such a time-based measure comes with high precision, depending on the underlying resolution, but it does not include any statement about quality or generality. In particular, it may depend on the specific hardware equipment that has been used in the experiments. In turn, the quality itself (which may be expressed as a degree of goal achievement) says nothing about the time required to accomplish the goal.

These overarching aspects of system behaviour are augmented with a more theoretical analysis of applicability. In particular, runtime and memory complexity are quantified using techniques such as the O(n) notation. This can be extended with verification of processes, i.e., guarantees that may be quantified in terms of coverage or the degree of guarantee-able behaviour. As an alternative, the 'restore-invariant approach' by Nafz et al. (2011) establishes a formal framework for self-organisation behaviour that may serve as a quantifiable basis.

In addition to these considerations, the robustness and resilience of systems, as well as their behaviour, can be quantified using specific metrics. One recent example from the domain of Organic Computing systems can be found in (Tomforde et al. (2018)).

Self-Adaptation-based Measures

Due to the shift in responsibility visualised by our system model depicted in Figure 1 – a CM is added to the PS that performs (semi-)autonomous decisions – research on SASO systems has entailed augmenting the measurement framework by several SASO-specific metrics. For instance, Kaddoum et al. (2010) discuss the need to refine classical performance metrics for SASO purposes and present specific metrics for self-adaptive systems. They distinguish between "nominal" and "self-*" situations and their relations: the approach measures the operation time in relation to the adaptation time to determine the effort. This includes aspects such as the speed of adaptation to detected changes. Some of the developed metrics have been investigated in detail by Cámara et al. for software architecture scenarios (Cámara et al. (2014)). Besides, success and adaptation efforts as well as ways to measure autonomy have been investigated, see e.g. (Gronau (2016)).

In addition to these goal- and effort-based metrics, several further measurements indicate the macro-level behaviour of a set of autonomous subsystems. The most important are:

a) Emergence is basically described as the appearance of macroscopic behaviour from microscopic interactions of self-organised entities (Holland (2000)). In the context of SASO systems, this refers to the formation of patterns in the system-wide behaviour, for instance. Examples of quantification methods are (Mnif and Müller-Schloer (2011)) and (Fernández et al. (2014)).

b) Self-organisation can be expressed as the degree to which the autonomous subsystems forming an overall SASO system decide about the system's structure without external control, where the structure is expressed as interaction/cooperation/relation among individual subsystems (Müller-Schloer and Tomforde (2017)). Examples of quantification methods are a static approach (see (Schmeck et al. (2010))) and a dynamic approach, see (Tomforde et al. (2017a)). An alternative discussion of self-organisation and its relation to emergence is given by De Wolf and Holvoet (2004).

c) Self-adaptation refers to the ability of systems to change their behaviour according to environmental conditions, typically with the goal of increasing a utility function. The degree of adaptivity can be measured using static (Schmeck et al. (2010)) and dynamic (Tomforde and Goller (2020)) approaches.

d) Scalability is a property that defines how far the underlying mechanisms remain promising if the number of participants grows strongly. Quantitatively, this can be expressed as an exponent for the control overhead, for instance.

e) Stability is, to a certain degree, a meta-measure applied, e.g., to the degrees of self-adaptation and self-organisation. It determines how far the metrics are static, allowing for standard changes while identifying deviations from the expected behaviour. An example can be found in (Goller and Tomforde (2020)).

f) Variability or heterogeneity are terms referring to a population of individual subsystems, as they focus on the differences in the behaviour, the capabilities or the strategies followed by the subsystems. Examples can be found in (Schmeck et al. (2010)) and (Lewis et al. (2015)).

g) Mutual influences among distributed autonomous subsystems indicate that the decisions of one subsystem have an impact (e.g., on the degree of utility achievement) on another subsystem (Rudolph et al. (2019)). An example of a quantification technique based on the utilisation of dependency measures is given in Rudolph et al. (2016).
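To make item a) concrete: quantitative emergence approaches roughly in the spirit of Mnif and Müller-Schloer (2011) relate the Shannon entropy of an observed, discretised system attribute to a disorder reference. The following is a simplified sketch of that idea, not the cited procedure itself; the example attribute (compass headings of subsystems) and all function names are our own.

```python
from collections import Counter
from math import log2

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of a discrete attribute."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def emergence(values, num_levels):
    """Entropy reduction relative to the uniform (maximally disordered) case.

    0 means no macro-level order; log2(num_levels) means full order
    (all subsystems exhibit the same attribute value).
    """
    return log2(num_levels) - shannon_entropy(values)

# Example: headings of 20 subsystems, discretised to 4 compass directions.
disordered = ["N", "E", "S", "W"] * 5   # every direction equally often -> no pattern
ordered    = ["N"] * 18 + ["E", "S"]    # a system-wide pattern has formed
print(emergence(disordered, 4))         # 0.0
print(emergence(ordered, 4))            # close to the maximum of 2.0
```

Such a score is an example of a key figure that the explanation loop of Section III could observe over time.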
Possible Lifelike-oriented Measures

Considering the concept of lifelike technical systems and their desired capabilities, the set of existing metrics is probably not sufficient to cover the entire behaviour. In particular, we will have to investigate novel measurement techniques that explicitly cover lifelike attributes. Although there is currently no exact definition of what lifelike computing systems are, we can approach the question of what is missing in the measurement framework by considering 'qualities of life' that we aim to transfer to technical usage and that go beyond the SASO-based scalability, adaptation, organisation, or robustness questions. In particular, we identified the following aspects as primary options, based on the considerations of lifelike systems outlined in Section I:

First, lifelike systems will evolve over time, which may include an adaptation of their primary usage. Consequently, a first measure should aim at quantifying the evolution behaviour itself and a second one the coverage of the primary purpose. The latter case continues the ideas formulated in the Organic Computing initiative when defining the property of 'flexibility', i.e., how far a SASO system can react appropriately to changing goal functions (Becker et al. (2012)).

Second, this evolution corresponds to an adaptation to the niche in which the system survives. This may be expressed with a measure of 'fitness in the niche' or 'degree of niche appropriateness'.

Third, such an evolution implies that the system is somehow converted (or better: converts itself). Besides the description of this process over time, a more static measure based on the design can aim at determining a 'degree of convertability', i.e., the freedom with which the system can evolve during operation.

Fourth, this may include a transfer to an entirely different niche, or in other words to another problem domain. This can be expressed in a static manner by comparing the current problem space with the initial one, or in a dynamic manner by a degree of transfer that the system has undergone.

Fifth, such lifelike, evolutionary behaviour takes place in the context of the environmental conditions, which includes the presence of other subsystems in open system constellations. As a result, parts of the decisions of a lifelike system are about the current integration into such a constellation, resulting in 'self-improving system integration' (Bellman et al. (2021)). Although there is currently no integration measure available, recent work suggests that such an integration state is probably a multi-objective function that builds upon metrics mentioned in the context of SASO measures (Gruhl et al. (2018)).

Finally, such a lifelike character obviously has implications for the way we design and operate systems. In contrast to current practices that take design-related decisions and provide corridors of freedom for the self-* mechanisms, design-time decisions themselves need to become reversible or changeable by the systems, resulting in a degree of reversibility (in a static manner) or of change (in the sense of how strongly the design has already been altered).

Obviously, this list is not meant to be complete. It illustrates the need for further techniques that are suitable to quantify the lifelike-based properties. We are convinced that a necessary path in lifelike research is to fill this gap with an integrated measurement framework that provides a basis for comparison and assessment of the observed runtime behaviour.

III. Self-Explanations based on Macro-Level Behaviour Assessment

Within the last years, several contributions have proposed steps towards a self-explanation of technical systems, particularly focusing on aspects of self-adaptation. The most prominent examples can be found in Fähndrich et al. (2013) (with a Bayesian reasoning approach), Guidotti et al. (2018) (with a focus on black-box classification), Bencomo et al. (2012) (with a software engineering approach considering the satisfaction of the requirements of a self-adaptive system), Welsh et al. (2014) (also from a software engineering point of view, with a focus on accomplishing runtime goals), Klös (2021) (based on an integrated design and verification framework – and the corresponding deviations), or Parra-Ullauri et al. (2020) (based on a multi-level reasoning approach using temporal models).

In contrast to these approaches, we propose to develop a self-explanation component for lifelike technical systems that builds upon the metrics outlined above and establishes an observation and explanation loop. The idea is that such a self-explanation should cover the following aspects:

• It should only be provided if the system recognises unanticipated behaviour or abrupt shifts that are perceived by humans interacting with the system (otherwise we face an attention problem of users).

• The explanations should contain information about what changed and why this change happened, which includes the triggers that have been identified as root causes (e.g., a failure of a component, abnormal external effects or behaviour changes of other systems).

• This may be augmented with an estimation of the impact and severity, as well as a prediction of the upcoming developments.

• Further, the self-explanation should come in a human-understandable format, i.e., using human-interpretable terminology (e.g., 'Device X became too hot due to overload that was caused by new component Y').

• Finally, these explanations have to be generated in a timely and accurate manner and become subject to a learning process that optimises the self-explanation per user. In particular, this can consider direct (i.e., approval or intervention at goal level) and indirect (i.e., recognition and no following action by the user) feedback for optimisation purposes. The result will then be a user-specific degree of explanation behaviour.
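The aspects listed above suggest a minimal data structure for a generated explanation. The following sketch is our own illustration (all field and class names are invented, not part of the proposed framework): it bundles the trigger, the ranked root causes, an impact estimate and a human-readable summary.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RootCause:
    description: str      # e.g. "overload caused by new component Y"
    plausibility: float   # ranking score in [0, 1]

@dataclass
class SelfExplanation:
    trigger: str                   # the abnormal event that answered 'when'
    what_changed: str              # the observed behaviour shift ('what')
    root_causes: List[RootCause]   # ordered by plausibility ('why')
    impact: Optional[str] = None   # estimated severity / predicted development

    def summary(self) -> str:
        """Human-interpretable one-liner; wording would be tuned per user via feedback."""
        cause = self.root_causes[0].description if self.root_causes else "an unknown cause"
        text = f"{self.what_changed} due to {cause}"
        return text if self.impact is None else f"{text}; expected impact: {self.impact}"

exp = SelfExplanation(
    trigger="abrupt shift in adaptation-stability score",
    what_changed="Device X became too hot",
    root_causes=[RootCause("overload caused by new component Y", 0.8),
                 RootCause("sensor drift", 0.3)],
    impact="throttling within 5 minutes",
)
print(exp.summary())
```

The example output mirrors the paper's own sample phrasing ('Device X became too hot due to overload that was caused by new component Y'), extended by the optional impact statement.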
Figure 2 illustrates the envisioned process, which works as follows:

1. An observation loop is established at the macro-level that gathers all externally visible variables of the contained subsystems. We aim at the macro- or system-wide level for explanations as we consider the autonomous subsystems as components of the overall functionality. However, this system boundary choice depends on the purpose and the perception of the user.

2. The resulting data is pre-processed, brought into an appropriate representation and analysed to determine the key figures. This includes static and dynamic indicators.

3. Based on novelty/anomaly/change detection such as Gruhl et al. (2021), unexplainable or unanticipated behaviour of these key figures is recognised and assessed. In particular, this should come up with scores for the degree of uncertainty of the observed behaviour (with uncertainty expressing how far the observed behaviour is explainable from previously seen behaviour). Such an abnormal event answers the question 'when should a self-explanation be generated?'

4. As soon as this trigger is found, the states of the contained subsystems and their sequences are analysed to identify possible root causes. Again, this may make use of anomaly detection techniques that consider the different state variables and again provide uncertainty values. These possible root causes are collected, aggregated, correlated and prioritised, resulting in an ordered list of possible causes.

5. Based on this event-to-cause mapping, a self-explanation is generated and provided to the user.

Figure 2: Schematic illustration of the self-explanation process using an external monitoring approach: 1) observation of key figures – dynamic measures (adaptation behaviour and stability, organisation behaviour, evolution and flexibility, …) and static measures (transferability, variability, …); 2) identification of abnormal events; 3) root cause determination; 4) generation of self-explanations. Based on continuous observation of static (i.e., design properties) and dynamic (i.e., time series) key indicators, unexpected or abnormal behaviour is detected. For such events, possible root causes are identified and ranked according to their plausibility. This serves as input to automatically generate self-explanations to the user that are human-interpretable and indicate i) what happens, ii) what the root causes are and iii) what impact this has on the system behaviour. This may further be enriched with statements about the expected impact or predicted future developments if possible.

Please note that the integrated quantification framework to assess the system behaviour and the corresponding explanation loop are assumed to work at the macro-level (i.e., without any insight into the specific mechanisms and representations of the individual subsystems), since some of the metrics only occur at the macro-level (e.g., emergence). However, this can be turned into a hybrid system approach, where each subsystem cooperates with the system-wide loop to filter, augment, and customise the explanations.

Considering this envisioned process towards automated self-explanations in lifelike systems, we face several research challenges. In the following paragraphs, we outline the most urgent ones and provide first ideas on how to solve them.

Challenge 1 - Metrics: The first challenge is concerned with the metrics briefly summarised in Section II. In particular, we have to answer the question of which of these metrics are relevant. This includes answering the question of what metrics for the quantification of lifelike qualities look like. Based on this, we have to define a mechanism to pre-process and represent the incoming data – which defines a standard time-series analysis problem.

Challenge 2 - Types of metrics and availability: As outlined above, we distinguish between static and dynamic measures. Considering the inherent heterogeneity, we have to find a concept for fusing the measures by integrating both static and dynamic measures. This further results in questions of how to augment the pure scores, i.e., whether predictions of upcoming behaviour are required, and at which resolution, to allow for proactive actions.

Challenge 3 - Anomaly/Novelty detection: The core of our concept lies in a sophisticated detection of triggers, defined as unexpected behaviour of the key indicators. Technically, this should be realised in terms of anomaly, novelty or change detection techniques. Consequently, the question arises which techniques are most appropriate and how we can provide online methods. Here, we can make use of approaches from the field of self-integrating systems (Gruhl et al. (2021)) that already focus on the desired capabilities.
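As a deliberately simple illustration of such a trigger (a toy example of ours, not one of the cited techniques), a sliding-window z-score test on a single key figure flags observations that deviate strongly from previously seen behaviour and thereby answers the 'when' question:

```python
from collections import deque
from statistics import mean, stdev

class ZScoreTrigger:
    """Flags a key-figure observation as abnormal when it deviates by more than
    `threshold` standard deviations from a sliding reference window of normal values."""
    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value: float) -> bool:
        abnormal = False
        if len(self.window) >= 5:           # need a minimal reference history
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                abnormal = True             # 'when': generate a self-explanation
        if not abnormal:
            self.window.append(value)       # only extend the model of 'normal' behaviour
        return abnormal

trigger = ZScoreTrigger()
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 5.0]   # last value: abrupt shift
flags = [trigger.update(v) for v in stream]
print(flags)   # only the final observation is flagged
```

An online method as envisioned here would replace this toy test with the novelty-detection techniques cited above; the control flow – observe, score, trigger – stays the same.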
Challenge 4 - Root cause detection: Given that a trigger for self-explanation is detected, we have to identify the root causes that have been responsible for the observed behaviour. This means providing techniques that are able to detect possible root causes (also as sequences of interconnected events, not just as isolated events) and rank them. Based on such an approach, we have to investigate how to select the most likely root cause or set/sequence of root causes that explains the behaviour.

Challenge 5 - Definition of explanations: Above, we already mentioned which information an explanation to the user should contain. This needs to be formalised and investigated in detail. In particular, this results in the challenge of how to generate explanations and which aspects they should comprise.

Challenge 6 - Presentation of explanations: Finally, explanations have to be automatically presented to the user in a human-understandable manner. This implies that we have to develop concepts for combining human-based terms (such as cold/warm or fast/slow) with system-based measures. Possible concepts could establish joint input spaces and use human feedback to learn the resulting mapping. Using such a joint representation, concepts from deliberative abductive reasoning (Dessalles (2015)) may serve as a basis for these approaches.
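A first, deliberately naive sketch of such a joint mapping (our own illustration; the thresholds, terms and function names are invented placeholders): a numeric system measure is translated into ordered human terms through per-user boundaries that feedback could later shift.

```python
def to_human_term(value, boundaries):
    """Map a numeric measure onto ordered human terms.

    boundaries: list of (upper_bound, term) pairs sorted by ascending bound;
    the last term should use float('inf') as its open upper end.
    """
    for upper, term in boundaries:
        if value < upper:
            return term
    return boundaries[-1][1]

# Hypothetical per-user boundaries for a device temperature in °C. User feedback
# (e.g. an objection that 55 °C is 'already hot') would move these values,
# yielding the user-specific degree of explanation behaviour discussed above.
temp_terms = [(40.0, "cold"), (60.0, "warm"), (float("inf"), "hot")]
print(to_human_term(35.0, temp_terms))   # cold
print(to_human_term(72.5, temp_terms))   # hot
```

Learning the boundary positions from direct and indirect user feedback would be the actual research contribution; the lookup itself is trivial by design.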
IV. Conclusions

In this paper, we outlined our notion of what lifelike technical systems should be – or better, which qualities of life we aim at imitating in technical systems that go beyond the well-established field of self-adaptive and self-organising systems. Based on this, we reviewed approaches to quantify system behaviour, mainly at the macro-level, using a standard system model for self-adaptation. This review also included an identification of possible measurement approaches that close the gap for observation and behaviour assessment of future lifelike systems.

In our notion, an important property of these lifelike systems will be to allow for self-explanation of decisions and resulting behaviour; otherwise, even more autonomous and evolving systems will most likely face acceptance problems. We outlined that such self-explanation has to answer two major questions: i) when to provide self-explanations to the user and ii) what is explained (including 'how'). This paper proposed to address the first question by using a measurement framework.

Future work will investigate possible metrics to quantify especially the evolution aspects of lifelike behaviour, including the properties of reversibility or transferability of the system purpose. By using selected applications as use cases, we aim at analysing how the identification of events or conditions that need explanation to the users can be established. Following this, the final goal of this research is to provide mechanisms and techniques that actually derive human-understandable self-explanations.

References

Becker, C., Hähner, J., and Tomforde, S. (2012). Flexibility in organic systems - remarks on mechanisms for adapting system goals at runtime. In Proc. of 9th Int. Conf. on Inf. in Control, Automation and Robotics, pages 287–292.

Bellman, K. L., Botev, J., Diaconescu, A., Esterle, L., Gruhl, C., Landauer, C., Lewis, P. R., Nelson, P. R., Pournaras, E., Stein, A., and Tomforde, S. (2021). Self-improving system integration: Mastering continuous change. Future Gener. Comput. Syst., 117:29–46.

Bencomo, N., Welsh, K., Sawyer, P., and Whittle, J. (2012). Self-explanation in adaptive systems. In 2012 IEEE 17th International Conference on Engineering of Complex Computer Systems, pages 157–166. IEEE.

Cámara, J., Correia, P., de Lemos, R., and Vieira, M. (2014). Empirical resilience evaluation of an architecture-based self-adaptive software system. In Proc. of 10th Int. ACM Sigsoft Conf. on Quality of Softw. Architectures, pages 63–72.

De Wolf, T. and Holvoet, T. (2004). Emergence versus self-organisation: Different concepts but promising when combined. In International Workshop on Engineering Self-Organising Applications, pages 1–15. Springer.

Dessalles, J.-L. (2015). A cognitive approach to relevant argument generation. In Principles and Practice of Multi-Agent Systems, pages 3–15. Springer.

Fähndrich, J., Ahrndt, S., and Albayrak, S. (2013). Towards self-explaining agents. Trends in Practical Applications of Agents and Multiagent Systems, pages 147–154.

Fernández, N., Maldonado, C., and Gershenson, C. (2014). Information measures of complexity, emergence, self-organization, homeostasis, and autopoiesis. In Guided Self-Organization: Inception, pages 19–51. Springer.

Goller, M. and Tomforde, S. (2020). Towards a continuous assessment of stability in (self-)adaptation behaviour. In 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems, ACSOS 2020, pages 154–159.

Gronau, N. (2016). Determinants of an appropriate degree of autonomy in a cyber-physical production system. Proc. of 6th Int. Conf. on Changeable, Agile, Reconfigurable, and Virtual Production, 52:1–5.

Gruhl, C., Sick, B., and Tomforde, S. (2021). Novelty detection in continuously changing environments. Future Gener. Comput. Syst., 114:138–154.

Gruhl, C., Tomforde, S., and Sick, B. (2018). Aspects of measuring and evaluating the integration status of a (sub-)system at runtime. In 2018 IEEE 3rd International Workshops on Foundations and Applications of Self* Systems, pages 198–203.

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5):1–42.

Holland, J. H. (2000). Emergence: From Chaos to Order. OUP Oxford.

Kaddoum, E., Raibulet, C., Georgé, J.-P., Picard, G., and Gleizes, M.-P. (2010). Criteria for the evaluation of self-* systems. In Proc. of ICSE Workshop on Softw. Eng. for Adaptive and Self-Managing Sys., pages 29–38.

Kephart, J. and Chess, D. (2003). The vision of autonomic computing. IEEE Computer, 36(1):41–50.

Klös, V. (2021). Safe, intelligent and explainable self-adaptive systems. PhD thesis, Technical University Berlin, Germany.

Levinson, J., Askeland, J., Becker, J., Dolson, J., Held, D., Kammel, S., Kolter, J. Z., Langer, D., Pink, O., Pratt, V., et al. (2011). Towards fully autonomous driving: Systems and algorithms. In 2011 IEEE Intelligent Vehicles Symposium (IV), pages 163–168. IEEE.

Lewis, P. R., Esterle, L., Chandra, A., Rinner, B., Torresen, J., and Yao, X. (2015). Static, dynamic, and adaptive heterogeneity in distributed smart camera networks. ACM Transactions on Autonomous and Adaptive Systems (TAAS), 10(2):1–30.

McGeoch, C. C. (2012). A Guide to Experimental Algorithmics. Cambridge University Press.

Mnif, M. and Müller-Schloer, C. (2011). Quantitative emergence. In Organic Computing - A Paradigm Shift for Complex Systems, pages 39–52. Springer.

Müller-Schloer, C. and Tomforde, S. (2017). Organic Computing - Technical Systems for Survival in the Real World. Autonomic Systems. Birkhäuser Verlag.

Nafz, F., Seebach, H., Steghöfer, J.-P., Anders, G., and Reif, W. (2011). Constraining self-organisation through corridors of correct behaviour: The restore invariant approach. In Organic Computing - A Paradigm Shift for Complex Systems, pages 79–93. Springer.

Parra-Ullauri, J. M., García-Domínguez, A., García-Paucar, L. H., and Bencomo, N. (2020). Temporal models for history-aware explainability. In Proceedings of the 12th System Analysis and Modelling Conference, pages 155–164.

Rudolph, S., Hihn, R., Tomforde, S., and Hähner, J. (2016). Comparison of dependency measures for the detection of mutual influences in organic computing systems. In Architecture of Computing Systems - ARCS 2016 - 29th International Conference, Nuremberg, Germany, April 4-7, 2016, Proceedings, pages 334–347.

Rudolph, S., Tomforde, S., and Hähner, J. (2019). Mutual influence-aware runtime learning of self-adaptation behavior. ACM Trans. Auton. Adapt. Syst., 14(1):4:1–4:37.

Schmeck, H., Müller-Schloer, C., Cakar, E., Mnif, M., and Richter, U. (2010). Adaptivity and self-organization in organic computing systems. ACM Trans. Auton. Adapt. Syst., 5(3):10:1–10:32.

Tennenhouse, D. (2000). Proactive computing. Communications of the ACM, 43(5):43–50.

Tomforde, S. and Goller, M. (2020). To adapt or not to adapt: A quantification technique for measuring an expected degree of self-adaptation. Comput., 9(1):21.

Tomforde, S., Kantert, J., Müller-Schloer, C., Bödelt, S., and Sick, B. (2018). Comparing the effects of disturbances in self-adaptive systems - A generalised approach for the quantification of robustness. Trans. Comput. Collect. Intell., 28:193–220.

Tomforde, S., Kantert, J., and Sick, B. (2017a). Measuring self-organisation at runtime - A quantification method based on divergence measures. In Proc. of 9th Int. Conf. on Agents and Art. Int., pages 96–106.

Tomforde, S., Prothmann, H., Branke, J., Hähner, J., Mnif, M., Müller-Schloer, C., Richter, U., and Schmeck, H. (2011). Observation and control of organic systems. In Müller-Schloer, C., Schmeck, H., and Ungerer, T., editors, Organic Computing - A Paradigm Shift for Complex Systems, Autonomic Systems, pages 325–338. Birkhäuser Verlag.

Tomforde, S., Sick, B., and Müller-Schloer, C. (2017b). Organic computing in the spotlight. CoRR, abs/1701.08125.

Weber, R. H. and Weber, R. (2010). Internet of Things, volume 12. Springer.

Weiser, M. (1999). The computer for the 21st century. ACM SIGMOBILE Mobile Computing and Communications Review, 3(3):3–11.

Welsh, K., Bencomo, N., Sawyer, P., and Whittle, J. (2014). Self-explanation in adaptive systems based on runtime goal-based models. In Transactions on Computational Collective Intelligence XVI, pages 122–145. Springer.