=Paper=
{{Paper
|id=Vol-1742/MRT16_paper_10
|storemode=property
|title=Reflective Systems Need Models at Run Time
|pdfUrl=https://ceur-ws.org/Vol-1742/MRT16_paper_10.pdf
|volume=Vol-1742
|authors=Christopher Landauer,Kirstie Bellman
|dblpUrl=https://dblp.org/rec/conf/models/LandauerB16
}}
==Reflective Systems Need Models at Run Time==
Christopher Landauer and Kirstie L. Bellman
Topcy House Consulting, Thousand Oaks, California
Email: topcycal@gmail.com, bellmanhome@yahoo.com

Abstract—We think that run-time models are a minimum necessity for significant self-awareness in Computationally Reflective systems. For us, a Computationally Reflective system is not only "self-aware" (it can collect information about its own behavior and that of its environment), but also "self-adaptive" (it can reason about that behavior and make adjustments that change it). Moreover, when the system is also "self-modeling", that is, defined by the models it creates and maintains, then it can modify those models to better align its behavior with changes in its operating environment. In this paper, we show that building and managing these models presents some difficulties that are largely independent of the modeling mechanisms used. We also provide some mechanisms for coping with these difficulties, from a kind of "Behavior Mining" and "Model Deficiency Analysis" for current models to "Dynamic Knowledge Management", of which the major components are "Knowledge Refactoring" and "Constructive Forgetting". None of these methods eliminates the problem, but each alleviates some of its difficult aspects, which allows bigger and more complex systems to exist in more complex and diverse environments for longer periods of time.

Keywords: Self-Aware Systems, Self-Adaptive Systems, Self-Modeling Systems, Computational Reflection, Computational Semiotics, Run-Time Model-Building, Wrapping Integration Infrastructure, Problem Posing Interpretation

I. INTRODUCTION

We argue that run-time models are a minimum necessity for significant and effective self-awareness in Computationally Reflective systems. A Computationally Reflective system [28] [29] [4] must not only be "self-aware" (it can collect and organize information about its own behavior and that of its environment), but also "self-adaptive" (it can reason about that behavior and make adjustments that change it). We are primarily interested in systems that are also "self-modeling", which extends the notion to systems that construct and maintain their own models, and use those to generate their behavior. We think that such systems must be able to change not only the models but also the modeling mechanisms they use at run time, not just adapt parameters or switch between operating modes.

A Computationally Reflective system can collect information about its own structure and behavior, and it can also examine that information to decide how to modify the behavior, retain and analyze it over extended time periods, adjust its decision processes to "optimize" it towards selected success criteria, and even change the success criteria to respond to changes detected in the environment (which may also include newly imposed purposes or goals).

In this paper, we explore some seemingly far-fetched theoretical notions, show that they are eminently likely in self-modeling systems, and provide some suggestions for alleviating their effects.

A. Self-Modeling Systems

Self-modeling systems [24] are self-aware systems that have the capability to build and analyze models of their own behavior, and use those models to produce as much of that behavior as the designers intend. We have already shown [20] [22] that we can build systems for which all of the system's components are models, including the model interpreter. While these Computationally Reflective systems are usually originally deployed with an initial set of models and model construction guidelines, their main self-modeling work is to adjust those models according to their observations of their operational external environment and their own internal environment, and to produce their behavior by interpreting the models that they create. An engineering decision to be made by the developers is how much of these behavior-generating models is fixed and which ones can be variable (and the boundary can often usefully be changed at run time, depending on the situation).

Because these models are intended to be interpreted by the system, the common "4+1" view [16] is not entirely appropriate: we must mainly be concerned with process and interconnection models (behavior and interaction), and not much with developmental or use case views. Moreover, the system needs to build and maintain models of its operational environment also, not just its own structure.
The questions then become: how does a system
• learn about its own structure and behavior?
• make decisions about its own structure and behavior?
• decide on changes to its own structure and behavior?
• record and use the decisions and their consequences over time?
For us, all of these are about models or involve or require models. The corresponding engineering design problems are:
• how extensive these reflective mechanisms need to be for a given application,
• how much we want the system itself to construct and modify the models, and
• how much of the system behavior is to be driven by the models.
Models cost time for development and execution, which makes this set of engineering problems central to the success of self-adaptive systems in complex environments. If we make this extensive use of models, we are brought to the notion of using the models to define the behavior. These are self-modeling systems.

B. Representational Issues

In our prior work on self-modeling systems, we have identified two fundamental representational issues that must be addressed by any self-aware system of this kind:
1) the need for identifying repeated patterns and inventing new symbols for them, to reduce the computational load;
2) the need for occasionally re-inventing the representational mechanism itself, to avoid the inevitable increase in rigidity in a complex knowledge-intensive system.
These are both issues of Computational Semiotics [18] [21], since they involve the use and meaning of symbols in computing systems. The first issue is a generalization of some well-known optimization techniques, analogous to training, that replace the computation of appropriate responses with recognition of the appropriate situations (which is usually much faster). The second issue follows from our "Get Stuck" Theorems [18] [21], which essentially show that any increasing knowledge structure will eventually become unwieldy and extremely difficult to extend (even if humans are putting in the new knowledge, so it is not a computability problem). The theorems are formalizations of the long experience of failure in building extensible languages, systems, and programs.

These theorems are not insurmountable, but they do pose a serious design consideration. In addition, it can be seen that they are inevitable in any system that builds increasingly fine-grained models. That means that they are not unlikely; they must be addressed.

Helpfully, there are several processes that can be used to alleviate their effects to some extent. We think that these helpful processes are necessary in any case for a self-modeling system to operate in a complex environment for any non-trivial amount of time.
"Behavior Mining" is the process of recording and analyzing the system's own behavior to make descriptive or prescriptive models.
"Model Deficiency Analysis" is the process of evaluating the effectiveness of a model (not just success and failure, but also performance, resource implications, and interference with other models), and analyzing where improvements can be made.
"Knowledge Refactoring" is the process of rearranging the hierarchical knowledge structures (e.g., ontologies) to reduce dependencies, duplication, and unused elements.
"Constructive Forgetting" is the process of simplifying knowledge structures by conflating and even removing some terms and relationships.
Each of these will be discussed in the rest of the paper. None of these methods entirely avoids the problems presented by the theorems, of course; the theorems are still true. They do, however, push the bounds farther out, which allows bigger and more complex systems to exist in more complex and diverse environments for longer periods of time.

C. Structure of Paper

The rest of this paper is organized as follows. In the next Section II, we describe more of the structure of self-modeling systems. In Section III, we describe our own approach to constructing self-modeling systems, using activity loops driven by knowledge bases. In Section IV, we describe some fundamental results from Computational Semiotics that have relevance to model building at run time. In Section V, we show how some partial approaches can alleviate the problem of getting stuck, and finally, in Section VI, we describe some conclusions and prospects for further development.
II. SELF-MODELING SYSTEMS

Our goal is to design and build systems that are able to operate in hazardous or remote environments that cannot be completely known. The assumption is that they are too far away or too dynamically variable for a human controller, so they will need to operate largely autonomously.

To that end, we want these systems to build models of that environment, so they can reason about it and their potential activities. We expect the models to help them interact with their environment more effectively. Moreover, if we also make the systems build models of their own behavior, external and internal, then they can reason about that behavior and its connection to the environment. This approach to reflective systems therefore expects the system to use processes that build and use models of the system's external environment and processes, and also its internal environment and processes (see Figure 1). We have argued that this modeling is an important enabler for the flexibility required to adjust behavior to external conditions [20] [22].

[Figure omitted. Caption: Fig. 1. Reflective System in Environment. The system performs actions in and makes observations of its external environment, and builds and uses models of that environment and of its own processes.]

During the course of their activities, these systems will build models of common situations and appropriate responses, so they can subsequently recognize a situation to help decide on a response more quickly. Then they can separate effective and ineffective responses, identify environmental or internal characteristics that distinguish these two classes, and extend the relevant situation descriptions to include the newly identified property. Therefore, these systems are in a continual state of making new distinctions, thereby increasing the knowledge base size and interconnectivity. However, this process cannot continue indefinitely, as we shall show in Section IV.

We have shown [24] how to build systems that consist entirely of models, including the model interpreter. These systems build models of system behavior (or are provided them by the designers), and use the model interpreter to execute them. The "Moby Hack" [33] [20] shows how to make this circular process well-defined. Because it seems unusual, we give a short explanation here.

We start with the interpreter written in the modeling language (which does put constraints on how simple the modeling language can be), and an external program (that will be used exactly once), written in any convenient language, that compiles the modeling language into an executable process. Then any model subsequently produced can be interpreted, including changes to the interpreter (see Figure 2). Changes to the modeling language are more problematic, and are left for further studies. In many applications, the interpreter will not change at run time, simplifying this process somewhat.

[Figure omitted. Caption: Fig. 2. Interpreter Bootstrap. A one-shot bootstrap compiler turns the interpreter model into an interpreter process, which thereafter executes all models, including the interpreter model itself.]
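To make the circularity concrete, here is a minimal sketch of the bootstrap scheme, written against a toy modeling language invented purely for illustration (the model format, the "walk" operation, and all names here are assumptions of the sketch, not the Wrapping implementation):

```python
# A minimal sketch of the interpreter bootstrap, assuming a toy modeling
# language invented for this illustration (not the authors' implementation).
# A "model" is a list of (operation, argument) steps.

PRIMITIVES = {"emit": print}   # the only primitive action in the toy language

MODELS = {
    # The interpreter, written in the modeling language itself: its one step
    # says "walk the steps of the model named by the context key 'target'".
    "interpreter": [("walk", "target")],
    # An ordinary behavior model.
    "greet": [("emit", "hello from a model")],
}

def bootstrap(models):
    """External one-shot compiler: produces a running process from the
    interpreter model. Everything afterwards is interpretation of models."""
    def run(model_name, context):
        for operation, argument in models[model_name]:
            if operation == "walk":                # interpret another model,
                run(context[argument], context)    #   looked up in the context
            else:
                PRIMITIVES[operation](argument)
    return run

run = bootstrap(MODELS)
run("interpreter", {"target": "greet"})   # the interpreter interprets a model
```

The `bootstrap` function plays the role of the external one-shot program; after it has run, every execution, including execution of the "interpreter" model itself, goes through interpretation of models.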
III. OUR APPROACH

To manage the large collection of necessary models, we use the Wrapping integration infrastructure [17] [23], which allows us enormous flexibility in selecting and employing models. It was first designed to help study infrastructure decisions. What follows in this section is a short description of Wrappings, with its focus on resource management. More details can be found in the cited references.

A. Problem Posing Interpretation

The Problem Posing Interpretation is a different way to interpret programs that provides an enormous amount of flexibility in the assignment of semantics to syntax. The basic idea is to separate information service requests, such as function calls, message sends, etc., from information service providers, such as function and method definitions. Compilers and interpreters can always tell the difference, so there is no reason to require them to use the same name (which is what we usually do).

That separation immediately allows (and requires) a mechanism to reconnect requests to providers. This change of interpretation allows many interesting properties:
• Code reuse without modification;
• Delaying language semantics to run time;
• Migration to standards;
• Run-time flexibility.
This last property is our interest here.
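A small sketch can fix the idea. In the Python below (hypothetical names; the "first non-None result" success test is a deliberate simplification), callers pose problems by name, providers are registered separately, and a resolver reconnects the two at run time:

```python
# A minimal sketch of the Problem Posing idea: posed problems name service
# requests, providers are registered under problem names separately, and the
# connection between the two is made at run time.

providers = {}                         # problem name -> list of providers

def provides(problem):
    def register(fn):
        providers.setdefault(problem, []).append(fn)
        return fn
    return register

def pose(problem, *args, **context):
    """A posed problem names a request, not a function: which provider
    answers it is decided here, at run time, from what is registered."""
    for candidate in providers.get(problem, []):
        result = candidate(*args, **context)
        if result is not None:         # crude "success" test for the sketch
            return result
    raise LookupError(f"no provider for posed problem {problem!r}")

@provides("integrate")
def trapezoid(f, a, b, steps=1000, **context):
    # One possible provider for the posed problem "integrate".
    h = (b - a) / steps
    return h * (sum(f(a + i * h) for i in range(1, steps))
                + (f(a) + f(b)) / 2)

print(pose("integrate", lambda x: x * x, 0.0, 1.0))   # approximately 1/3
```

The posed problem "integrate" names a service request rather than a particular function, so a different provider can be connected in a different context without modifying the poser.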
B. Wrapping Properties

Our choice for mapping problems to resources in context is to use a Knowledge Base that defines the various uses of each resource in different contexts; it allows different resources to be used for the same problem in different contexts. This is a kind of implicit invocation.

There are five essential properties of Wrappings:
• ALL parts of the system are resources;
• ALL activities in the system are resource applications;
• Relevant system state is given by context;
• Wrapping Knowledge Bases (WKBs) contain the mappings;
• Problem Managers (PMs) interpret the Wrappings to perform all of the infrastructure and application functions.

C. Wrapping Processes

The processes in the Wrapping approach are as important as the data in the WKBs (it must be remembered that declarative programs do not DO anything without an interpreter). The PMs organize the computations and interpret the Wrappings. Two basic classes of PMs are the Coordination Manager (CM) and the Study Manager (SM). These two PMs separate the control loop at the center of our approach into the looping "heartbeat" that keeps the system moving forward (the Coordination Manager or CM) and the basic planning steps within the loop (the Study Manager or SM).

1) Coordination Manager: The CM is a class of resources that are analogs of a system's main program. The simplest one consists of the following steps:

Find Context: Collect initial context.
loop:
  Pose Problem: Select one problem to study.
  Study Problem: Find and use one resource for this problem.
  Present Results: Make adjustments to context (and possibly Wrappings).

This is the analog of an extremely simple sequential main program (and also clearly of the classic "read-eval-print" style of basic LISP interpreter). There is a CM corresponding to any "main program" style (parallel, cooperative, etc.). Remember that each of these steps is also a posed problem, using Wrappings to map them to appropriate resources in context.
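Rendered as code, the simplest CM is just the following loop (a sketch assuming that posing a problem is an ordinary function call; in a real Wrapping system each step is itself a posed problem, resolved through the WKB):

```python
# A sketch of the simplest Coordination Manager. The `pose` argument is a
# hypothetical stand-in for the Problem Posing machinery sketched earlier.

def coordination_manager(pose, max_beats=100):
    context = pose("find context")                       # Find Context
    for _ in range(max_beats):                           # the "heartbeat"
        problem = pose("pose problem", context)          # Pose Problem
        if problem is None:                              # nothing to study
            break
        result = pose("study problem", problem, context)     # Study Problem
        context = pose("present results", result, context)   # Present Results
    return context
```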
2) Study Manager: The Study Manager is a class of resources that are designed to address the problem "Study Problem" (from the CM). It reads and interprets the WKB (which is described in the next subsection). The simplest one consists of the following steps:

Match Resources: Match resources to problem in context (simple WKB filter).
Resolve Resources: Resolve resources via negotiation between problem and Wrappings to produce candidate resources.
Select Resource: Choose one of the remaining candidate resources.
Adapt Resource: Adapt resource to problem and context (finish instantiating all necessary invocation parameters).
Advise Poser: Record (announce) the resource application (what is about to be done), together with the relevant context (that was used for its construction).
Apply Resource: Use the resource for the problem (invoke the resource application).
Assess Results: Evaluate success or failure.

As always, each of these steps is a posed problem, using Wrappings to map them to resources. This is an extremely simple-minded planner. The main power of Wrappings is that any improvement in planning processes can be incorporated as other resources that can address these posed problems. Other SMs apply all relevant resources in turn until one succeeds, or accumulate the results of all successful applications, etc.

The interaction of the two main PMs is shown schematically in Figure 3, which we find to be a useful mnemonic for which steps occur when. The interaction of these two processes is the foundation of our self-modeling systems [22] [17].

[Figure omitted. Caption: Fig. 3. CM and SM Steps. The CM loop (Find Context, Pose Problem, Assimilate Results) surrounds the SM steps (Match Resources, Resolve Resources, Select Resource, Adapt Resource, Advise Poser, Apply Resource, Assess Results); the Apply Resource step invokes the resource to do whatever it does.]

D. Wrapping Knowledge Bases

The WKB contains entries called Wrappings. An individual Wrapping is a map from a problem specification to a resource application in context. Its main information components are:

PROB: problem name
COND: context condition (may be repeated)
RES: resource name
SYM: symbol name
FILE: file name

The problem specification consists of a problem name and a list of actual parameters. This is the form in which problems are posed. When we write about a problem, we are usually referring to a class of problems identified by the problem name (e.g., integrate, optimize, infer, etc.). The context condition is a sequence of boolean conditions (assumed conjunctive) on the problem parameters and the context variables (this allows the same problem to map to different resources if there are different problem parameters). The resource application is the fully bound invocation of the resource (all required parameters have values assigned). The symbol and file names are used to find the executable code for the resource. Most of our implementations use dynamic loading and unloading to reduce the size of the executable and retain the flexibility to add new or different resources for the same or different problems at run time.

The actual format of a WKB can vary (even heterogeneously); it is constrained only by the fact that it is read by only a few of the SM steps (minimally Match, Resolve, Apply), and as the resources for those steps can be selected differently at run time, so can the WKB have different syntax at run time.
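The following sketch encodes the PROB/COND/RES fields as Python data and walks the SM steps with deliberately trivial Resolve/Select policies (the SYM and FILE fields, which locate executable code, are collapsed into a callable; all names are illustrative):

```python
# A minimal sketch of a Wrapping entry and the simplest Study Manager.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Wrapping:
    prob: str                                   # PROB: problem name
    conds: list = field(default_factory=list)   # COND: conjunctive conditions
    res: str = ""                               # RES: resource name
    apply: Optional[Callable] = None            # stands in for SYM/FILE lookup

WKB = [
    Wrapping("integrate", conds=[lambda p, c: c.get("smooth", False)],
             res="trapezoid", apply=lambda p, c: "trapezoid result"),
    Wrapping("integrate", res="monte-carlo",
             apply=lambda p, c: "monte-carlo result"),
]

def study_problem(problem, params, context, trace):
    # Match Resources: a simple WKB filter on problem name and conditions.
    candidates = [w for w in WKB if w.prob == problem
                  and all(cond(params, context) for cond in w.conds)]
    if not candidates:
        return None                           # Assess Results: failure
    chosen = candidates[0]                    # Resolve/Select: trivial policy
    # Adapt Resource would finish binding invocation parameters here.
    trace.append((problem, chosen.res, dict(context)))    # Advise Poser
    return chosen.apply(params, context)      # Apply Resource

trace = []
print(study_problem("integrate", {"f": "x^2"}, {"smooth": True}, trace))
print(trace)   # announced resource applications: ready-made event labels
```

The trace written by the Advise Poser step records every resource application together with its context, which is the raw material that the processes in Section V work from.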
E. Wrapping Summary

The power of Wrappings comes from the notion that each of the steps is a posed problem [19], for which the entire Wrapping infrastructure is available to identify an appropriate resource to apply.

It should also be clear that any system with such an explicit central activity loop (whether or not it is organized in the way we described) has a ready-made identifier for the events that it performs. The event labels can be considered as the basic representational language for modeling behavior.

F. Use of Wrappings

The use of Wrappings provides several conceptual unifications in system design:
• ALL activity is requested as posed problems.
• ALL activity is performed by resources; ALL processing is resource application.
• ALL activity can be recorded (Advise Poser in the SM).
Wrappings help with some of the organization processes in self-modeling systems. They do not solve all of the hard problems, but they do help organize the solutions. They allow a system
• To manage its collection of models;
• To coordinate its activities with other systems.

IV. GETTING STUCK

As our self-modeling systems, or indeed any systems that construct situational models at run time, behave in their operational environment, the natural tendency for us as designers is to assume that they will build more detailed models to make more detailed distinctions among the phenomena they encounter. This clearly could be a problem, and the following results show that it is definitely a problem.

The "Get Stuck" Theorems [18] [21] show some fundamental limits on representing knowledge in symbolic structures in computing systems, in particular, that systems operating in complex environments will eventually need to change their own internal representations. These theorems formalize the decades of observations that have the common theme that extending existing systems becomes more difficult and error-prone over time (e.g., Lehman's laws of software evolution [26] [27] [12] for programs that operate in the real world, among other informal notes), and in many cases has simply precluded anything but minor maintenance.

The first theorem is based on the fact that we can count how many structures of a given size can exist, so representing behavior in a sufficiently complex environment will eventually need to use so many expressions that they will become too large for the system to use either effectively or soon enough. The second theorem is based on the fact that each new element in a complex knowledge structure will need to attach to more of the other elements, and that therefore the practical constraints on adding new knowledge become more and more difficult to satisfy.

The first theorem is relatively straightforward to prove. If the computing system has a fixed finite symbol system (finitely many basic symbols, finitely many fixed-arity combination rules), then we can count the number of expressions of any given size, using the well-known mathematical technique of generating functions [36]. The point is not that the values are large or small; it is that there are only finitely many expressions of any given size. That means, as the system tries to represent more and more complex phenomena or relationships, or make more detailed distinctions, it will need more and more expressions, so they will get larger and larger. The system will eventually run out of expressions small enough to process effectively.
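As a worked instance of the counting argument (the simplest possible case, chosen for illustration: k basic symbols and a single binary combination rule), let e_n be the number of expressions with n symbol occurrences:

```latex
% Illustrative special case of the counting argument.
e_1 = k, \qquad e_n = \sum_{i+j=n} e_i \, e_j \quad (n > 1),
% so the generating function E(x) = \sum_{n \ge 1} e_n x^n satisfies
E(x) = kx + E(x)^2
\;\Longrightarrow\;
E(x) = \frac{1 - \sqrt{1 - 4kx}}{2},
\qquad
e_n = C_{n-1}\, k^{\,n},
```

where C_m denotes the m-th Catalan number. Every e_n is finite, so a system that needs more than e_1 + e_2 + ... + e_N distinct expressions is forced to use expressions larger than N; this is the pressure the first theorem formalizes.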
Similarly, if we are building and extending a knowledge base, with references from entities to other entities, then the interconnection density eventually becomes large enough that the implications of potential changes take too much time to analyze. The important point here is that it will happen even if humans are doing the extensions.

These two theorems have implications for our computing systems: they will have to change their representational mechanisms. In the next Section, we describe some processes that can at least delay the problem.

V. GETTING UNSTUCK

We have shown that if our system is effective at discovering new distinctions and making more refined models, then it will eventually "Get Stuck". In this Section, we describe some processes that will help alleviate the problem, but not, of course, eliminate it. The processes are:
• Behavior Mining,
• Model Deficiency Analysis,
• Dynamic Knowledge Management, and its main component processes
  – Knowledge Refactoring and
  – Constructive Forgetting.
These processes help delay the problem, and we believe that more such alleviations are necessary.

A. Behavior Mining

We use the term "Behavior Mining" for any analysis of the sequence of events performed by a system (there may of course be multiple parallel analyses at different resolutions and/or scopes). The intention is to identify and encapsulate common sequences, so that they need not be recomputed separately each time. This is akin to encapsulating sequences of operations into "macros" to reduce computation (the target computation is the same, but the decision processes that construct the sequence are eliminated). The system will invent new symbols as event labels for these common event sequences, as well as other recurring structures (repetitions, alternations, etc.), and use them in a continual analysis.

The technical area of interest for this process is grammatical inference [7] [9] [3] [5] [13], which is about discovering higher-level structure among sequences of relatively low-level events. For different sets of assumptions about the high-level structure of these common sequences, different algorithms can be used to infer the higher-level organization of those structures. This is an enormous field, and we have included some references to illustrate some of the different approaches and theoretical results. One of the theorems is that most interesting classes of structures cannot be effectively inferred without having both positive and negative examples [10] [1] [2] (see [25] for a summary of these results).

Our Wrapping infrastructure allows for both positive and negative examples [22], since it makes available not only the context and reasons for choosing a particular resource to perform a computing step, but also the conditions that eliminated other possible resources. It therefore seems possible that these systems can infer more complex behavioral structures than other mechanisms can provide.

Once the system has new labels for some collection of common event structures, it becomes interesting to consider the possibility that all events can be placed into one of the identified commonalities ("macros"). When this happens, there is the possibility of defining the events solely in terms of the macros. That is, the event succession can be re-expressed in terms of the new symbols. If the system can actually express its behavior using these new symbols only, then we write that the new symbol set is "semiotically closed" [18] [21], which we believe is an essential component of emergence. This line of reasoning is important, but beyond the scope of this paper.
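As a deliberately simple illustration of the macro idea (far weaker than the grammatical inference algorithms cited above), the sketch below repeatedly replaces the most frequent adjacent pair of event labels with a freshly invented symbol:

```python
# A toy form of Behavior Mining: invent macro symbols for the most frequent
# adjacent pairs of event labels and rewrite the trace with them.
from collections import Counter

def mine_macros(trace, min_count=2):
    macros, next_id = {}, 0
    while True:
        pairs = Counter(zip(trace, trace[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < min_count:
            break
        symbol = f"M{next_id}"             # a newly invented event label
        next_id += 1
        macros[symbol] = pair
        out, i = [], 0
        while i < len(trace):              # rewrite the trace with the macro
            if i + 1 < len(trace) and (trace[i], trace[i + 1]) == pair:
                out.append(symbol); i += 2
            else:
                out.append(trace[i]); i += 1
        trace = out
    return trace, macros

events = ["pose", "match", "select", "apply", "pose", "match", "select",
          "apply", "assess"]
print(mine_macros(events))   # a shorter trace plus the invented macro table
```

If the rewriting ends with a trace expressed entirely in invented symbols, the new symbol set is semiotically closed in the sense used above.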
In some cases, this re- models as explicit objects of study, and attempt to diagnose ordering will delay the discovery of unexpected paths to a what is wrong when the models fail (as opposed to performing goal, or even prevent the system from finding a good course less well than they could). of action, but it is helpful on the whole. We clearly need better A key property to this kind of analysis is that in Wrapping- indexing and ordering mechanisms, relevance computations, based systems, there is a ready-made class of symbols that and dynamically steerable search algorithms. can be used for event names (the problem names), and a ready-made class of events that can be labeled (the resource D. Knowledge Refactoring applications announced in the SM). This property allows the Even if the system cannot define away its problems, it might system to construct models of its own actual behavior, which be able to re-organize its knowledge structures to require much can be compared to the behavior intended by the designers shorter expressions. Because this is a more complicated oper- (as specified, inevitably, by another model). Precise point ation than the previous two, we spend more time discussing of failure can be identified by incorrect or missing resource it. There are two parts to this kind of re-organization: the applications, and the event context (also reported by the SM in first is a simple change of emphasis for all of the models the Advise Poser step) and the WKB can be studied directly and rules, and the second is an actual re-arrangement. This for errors and omissions in the context that was used to create re-arrangement has come to be known as refactoring, and is the incorrect resource application (or not select the correct very popular in object-oriented coding circles [8] [15] [30]. one). There are many different kinds of refactoring operations, and Further discussion from a different context can be found in which one to use first depends on some characteristics of the [23]. interconnection graph that have come to be called “smells” This kind of analysis is helpful for the infrastructure models, in the object-oriented coding world. For our application, we but not quite sufficient for creating and improving the domain need “smell” analogs for knowledge bases, which are relatively models. For that part of the analysis process, we rely on some easily adapted from [8], in addition to those that can be derived combination of simulation and evolutionary programming from first principles. techniques [6] [11] This area is under active investigation. As an example of the latter, we note the property of “tangleness”, that is, many elements all referring to each other. C. Dynamic Knowledge Management One simple measures of tangleness is the size of maximal Wrappings are one kind of knowledge base in these systems, cliques in the directed reference graph. Each maximal clique but there are certain to be many others to represent domain is a set of elements that all refer to each other, so it seems knowledge and other information resources. that there should be some partition of the set into subclasses, We have not specified what form the knowledge bases are or even some new element that they are all connected to. For to take, since these methods are largely independent of that, an n-clique, this changes the n * (n -1) / 2 edges between and can be applied or adapted to most kinds of declarative every pair of elements to n edges, from each of the elements knowledge. 
One of the most powerful time savers in accessing to the new one. We have a speculative notion that small world knowledge is context: the use of situational awareness to graphs may be useful for this kind of structure [34] [35]. greatly reduce the search space for any posed problem. We The notion of graph cliques brings up an entirely new describe two forms of context-based Dynamic Knowledge speculation that is interesting and may become important. It is well-known that the collection of cliques in an undirected translate old expressions into new ones. graph is a simplicial complex, and therefore many topological There are many other warning signs that the knowledge methods, and in particular persistent homology, can be applied base structure might be unwieldy, but none as strong as search to understand the structure of the knowledge base at all delays or mistakes, which the system may not be able to detect scales [14] [32]. Our interest is in the recent improvements in when they happen, but might be able to diagnose after the fact. computational speed [37] [38], which may allow these result This kind of refactoring is time-consuming, and very likely to be computed on the fly. The question is which of these too static for the “on-line” system, so we expect our run- methods provides useful information. time systems to rely on the continual rearrangement of rule Another intriguing possibility is about reconstructing the priorities according to context and situation, and to do more symbolic dynamics of a nonlinear system (a computable def- serious refactoring during down times (if there are any). inition of what the system does and will do). Mischaikow et al. [31] show that, given a sequence of measurements E. Constructive Forgetting of the system, a symbolic model with a finite number of Since we know that the system will eventually run out states can be computed, and that model used for predictions of space, the system needs ways to reclaim that space. Our and other analyses. Its utility is in proving that a certain notion of “Constructive Forgetting” is about how to reduce dynamical system is chaotic (which is largely of theoretical the knowledge base without sacrificing performance. For this interest), in separating noise from essentials (which is of great purpose, we think that the system can make simplifications practical interest), and in determining invariant sets (sets that by grouping similar clusters or sequences, and simply treating the dynamics preserves, which include all attractors, and gives them all the same. If differences are eventually identified, then hints about long-term behavior). either the system can recover some of the distinctions, or it Each of these possibilities warrants much more study. can recognize that it does not have the resources to cope. We close with some of the refactoring “smells” and cor- This is an application of grammatical inference, and iden- rective actions that may apply to knowledge based systems, tifying classes of structures that have the same or a similar whether they are embodied as rules or some other declarative effect, or to which the system responds nearly identically. In mechanism. We describe such a knowledge base as having those cases, the distinction is irrelevant to the response and it several sets of element definitions that define what objects are can be discarded. 
D. Knowledge Refactoring

Even if the system cannot define away its problems, it might be able to re-organize its knowledge structures to require much shorter expressions. Because this is a more complicated operation than the previous two, we spend more time discussing it. There are two parts to this kind of re-organization: the first is a simple change of emphasis for all of the models and rules, and the second is an actual re-arrangement. This re-arrangement has come to be known as refactoring, and is very popular in object-oriented coding circles [8] [15] [30]. There are many different kinds of refactoring operations, and which one to use first depends on some characteristics of the interconnection graph that have come to be called "smells" in the object-oriented coding world. For our application, we need "smell" analogs for knowledge bases, which are relatively easily adapted from [8], in addition to those that can be derived from first principles.

As an example of the latter, we note the property of "tangleness", that is, many elements all referring to each other. One simple measure of tangleness is the size of maximal cliques in the directed reference graph. Each maximal clique is a set of elements that all refer to each other, so it seems that there should be some partition of the set into subclasses, or even some new element that they are all connected to. For an n-clique, this changes the n * (n - 1) / 2 edges between every pair of elements into n edges, from each of the elements to the new one. We have a speculative notion that small world graphs may be useful for this kind of structure [34] [35].
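A sketch of that clique-to-hub rewrite, applied to the undirected projection of the reference graph (networkx is an assumed dependency, and the size threshold for the "smell" is an arbitrary illustrative choice):

```python
# A minimal sketch of clique-to-hub refactoring for "tangleness".
import networkx as nx

def refactor_tangles(g, smell_size=4):
    """Replace each large maximal clique (n*(n-1)/2 internal edges) with a
    new hub element and n edges; the system must invent the hub's name."""
    refactored = g.copy()
    big_cliques = (c for c in nx.find_cliques(g) if len(c) >= smell_size)
    for i, clique in enumerate(big_cliques):
        hub = f"concept-{i}"                  # invented element name
        for u in clique:
            for v in clique:
                if refactored.has_edge(u, v):
                    refactored.remove_edge(u, v)
            refactored.add_edge(u, hub)
    return refactored

g = nx.complete_graph(["a", "b", "c", "d"])   # a 4-clique: 6 edges
print(sorted(refactor_tangles(g).edges()))    # 4 edges to "concept-0"
```

The rewrite turns the n * (n - 1) / 2 internal edges of each large clique into n edges to an invented hub element; a real knowledge-base refactoring would also have to rewrite the old element references, which the graph surgery alone does not do.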
The notion of graph cliques brings up an entirely new speculation that is interesting and may become important. It is well known that the collection of cliques in an undirected graph is a simplicial complex, and therefore many topological methods, and in particular persistent homology, can be applied to understand the structure of the knowledge base at all scales [14] [32]. Our interest is in the recent improvements in computational speed [37] [38], which may allow these results to be computed on the fly. The question is which of these methods provides useful information.

Another intriguing possibility is about reconstructing the symbolic dynamics of a nonlinear system (a computable definition of what the system does and will do). Mischaikow et al. [31] show that, given a sequence of measurements of the system, a symbolic model with a finite number of states can be computed, and that model used for predictions and other analyses. Its utility is in proving that a certain dynamical system is chaotic (which is largely of theoretical interest), in separating noise from essentials (which is of great practical interest), and in determining invariant sets (sets that the dynamics preserves, which include all attractors, and give hints about long-term behavior).

Each of these possibilities warrants much more study.

We close with some of the refactoring "smells" and corrective actions that may apply to knowledge-based systems, whether they are embodied as rules or some other declarative mechanism. We describe such a knowledge base as having several sets of element definitions that define what objects are being considered, how they relate to each other, and how they will change. The entities and attributes define the objects, and there are relationships among the objects, and activities that change those objects. The key interconnection is the reference from one of these entities to another.

An entity with too many attributes should be separated into a hierarchy of multiple entities that group the attributes in a sensible way. An entity with too few attributes could be too general, needing some new attributes to make appropriate distinctions (of course, the system needs a reason to make those distinctions). Two entities with too many relationships would be better modeled as subordinate to a more general combined entity. Many different entities with the same set of attributes would be better modeled using a common subordinate containing those attributes. These and other changes are relatively easy. The old element references can be adjusted to these new ones (though the system will have to make up some names).

Note that this kind of refactoring is almost entirely experience driven, and has essentially nothing to do with simplifying different aspects of original models, since they usually do not exist.

The more radical restructurings of the knowledge base depend on the semantics of the elements and their connections, to which the system has little or no access, so these are generally left for the designers, for now (though there are certainly claims in the code refactoring world that many of these small changes can add up to a large one). Eventually, we expect to go farther along the path of having the system identify new core concepts, adjust the knowledge base, and translate old expressions into new ones.

There are many other warning signs that the knowledge base structure might be unwieldy, but none as strong as search delays or mistakes, which the system may not be able to detect when they happen, but might be able to diagnose after the fact. This kind of refactoring is time-consuming, and very likely too static for the "on-line" system, so we expect our run-time systems to rely on the continual rearrangement of rule priorities according to context and situation, and to do more serious refactoring during down times (if there are any).

E. Constructive Forgetting

Since we know that the system will eventually run out of space, the system needs ways to reclaim that space. Our notion of "Constructive Forgetting" is about how to reduce the knowledge base without sacrificing performance. For this purpose, we think that the system can make simplifications by grouping similar clusters or sequences, and simply treating them all the same. If differences are eventually identified, then either the system can recover some of the distinctions, or it can recognize that it does not have the resources to cope.

This is an application of grammatical inference: identifying classes of structures that have the same or a similar effect, or to which the system responds nearly identically. In those cases, the distinction is irrelevant to the response and it can be discarded. Of course, if subsequent experience shows that the distinction matters, the normal elaboration process will make that refinement.

It is extremely important for managing the balance among these processes that the system record all of the elaborations and collapsings, so they and their preconditions can be studied. For example, when the system operation seems to be getting slow (or the stored knowledge large), the threshold for approximate matching can be lowered, so that more responses are collapsed by this process. When system operation is relatively fast, the elaboration mechanism can be turned up a little bit, so that finer distinctions can be considered.

VI. CONCLUSIONS AND FUTURE PROSPECTS

In this paper, we have described an application of self-modeling systems to autonomous systems that we expect to operate in hazardous or remote environments, and to adapt their behavior to the external (and internal, e.g., low battery) conditions they encounter. These systems refine designer-provided knowledge bases that describe what they should expect to encounter from the environment, from themselves, and from the complexities of their interactions. We have also described some methods for reducing the size and complexity of the knowledge bases in an attempt to stave off the "Get Stuck" Theorems, which will allow these self-modeling systems to operate autonomously for longer periods of time or in more dynamic environments.

In general, the question of how to balance expansion and contraction, and tune them to the complexity and dynamics of the environment, is the main sustainable system question, but it is our recommendation to keep the system near the higher boundary, to maximize the available size of the semantic space. A more general answer requires experimentation with some engineering applications, and we expect the result to be quite application specific.

The important design question is how much of this variation machinery we actually need for a given application, and how much of the system structure can remain fixed (and unmodeled). This engineering question can only be answered in the context of a particular application.

The important implementation question is how to make the analysis fast enough to be useful, or how much of the model update process the system can afford to do. This question also is dependent on the particular application, because that will determine how much computing power is available for these sustainability functions.

We know that we can build these self-modeling systems. We think that we can learn to tune the balance rules effectively. We expect that they will be useful for difficult applications.

REFERENCES
[1] Dana Angluin, "Finding Patterns Common to a Set of Strings", J. Computer and System Sciences, vol.21, pp.46-62 (1980)
[2] Dana Angluin, "Inductive Inference of Formal Languages from Positive Data", Information and Control, vol.45, pp.117-135 (1980)
[3] Dana Angluin, Carl H. Smith, "Inductive Inference: Theory and Methods", Computing Surveys, vol.15, no.3, pp.237-269 (Sep 1983)
[4] Kirstie L. Bellman, Christopher Landauer, Phyllis R. Nelson, "Managing Variable and Cooperative Time Behavior", Proc. SORT 2010: The First IEEE Workshop on Self-Organizing Real-Time Systems, 05 May, part of ISORC 2010: The 13th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing, 05-06 May 2010, Carmona, Spain (2010)
[5] J. Berstel, L. Boasson, "Context-Free Languages", Chapter 2, pp.59-102 in Jan van Leeuwen (Managing Editor), Handbook of Theoretical Computer Science, vol. B: Formal Methods and Semantics, MIT (1990)
[6] C. A. Coello Coello, G. B. Lamont, D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, Springer Science+Business Media (2007)
[7] J. M. Foster, Automatic Syntactic Analysis, American Elsevier (1970)
[8] Martin Fowler, Refactoring: Improving the Design of Existing Code, Addison-Wesley (1999)
[9] K. S. Fu, T. L. Booth, "Grammatical Inference: introduction and survey", IEEE Trans. Systems, Man, and Cybernetics, vol.SMC-5, pp.95-111 and pp.409-423 (1975)
[10] E. M. Gold, "Language Identification in the Limit", Information and Control, vol.10, pp.447-474 (1967)
[11] D. Hadka, P. Reed, "Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework", Evolutionary Computation, vol.21, no.2, pp.231-259 (2013)
[12] Israel Herraiz, Daniel Rodriguez, Gregorio Robles, Jesus M. Gonzalez-Barahona, "The evolution of the laws of software evolution: A discussion based on a systematic literature review", ACM Computing Surveys, vol.1, no.1 (June 2013)
[13] Colin de la Higuera, Grammatical Inference: Learning Automata and Grammars, Cambridge U. Press (2010)
[14] Danijela Horak, Slobodan Maletić, Milan Rajković, "Persistent homology of complex networks", J. Stat. Mech.: Theory and Experiment (2008); arXiv:0811.2203, https://arxiv.org/pdf/0811.2203 (availability last checked 24 Jul 2016)
[15] Joshua Kerievsky, Refactoring to Patterns, Pearson (2004)
[16] Philippe Kruchten, "Architectural Blueprints: The 4+1 View Model of Software Architecture", IEEE Software, vol.12, no.6, pp.42-50 (November 1995)
[17] Christopher Landauer, "Infrastructure for Studying Infrastructure", Proc. ESOS 2013: Workshop on Embedded Self-Organizing Systems, 25 June 2013, San Jose, California; part of 2013 USENIX Federated Conference Week, 24-28 June 2013, San Jose, California (2013)
[18] Christopher Landauer, Kirstie L. Bellman, "Situation Assessment via Computational Semiotics", pp.712-717 in Proc. ISAS'98: The 1998 International MultiDisciplinary Conference on Intelligent Systems and Semiotics, 14-17 Sep 1998, NIST, Gaithersburg, Maryland (1998)
[19] Christopher Landauer, Kirstie L. Bellman, "Generic Programming, Partial Evaluation, and a New Programming Paradigm", Chapter 8, pp.108-154 in Gene McGuire (ed.), Software Process Improvement, Idea Group Publishing (1999)
[20] Christopher Landauer, Kirstie L. Bellman, "Self-Modeling Systems", pp.238-256 in R. Laddaga, H. Shrobe (eds.), Self-Adaptive Software, Springer Lecture Notes in Computer Science, vol.2614 (2002)
[21] Christopher Landauer, Kirstie L. Bellman, "Semiotic Processing in Constructed Complex Systems", Proc. CSIS2002: The 4th International Workshop on Computational Semiotics for Intelligent Systems, JCIS2002: The 6th Joint Conference on Information Sciences, 08-13 Mar 2002, Research Triangle Park, NC (2002)
[22] Christopher Landauer, Kirstie L. Bellman, "Managing Self-Modeling Systems", in R. Laddaga, H. Shrobe (eds.), Proc. Third International Workshop on Self-Adaptive Software, 09-11 Jun 2003, Arlington, Virginia (2003)
[23] Christopher Landauer, Kirstie L. Bellman, "Model-Based Cooperative System Engineering and Integration", Proc. SiSSy 2016: The Third Workshop on Self-Improving Self-Integrating Systems, 19 Jul 2016, Würzburg, Germany (2016)
[24] Christopher Landauer, Kirstie L. Bellman, Phyllis R. Nelson, "Wrapping Tutorial: How to Build Self-Modeling Systems", Proc. SASO 2012: The 6th IEEE Intern. Conf. on Self-Adaptive and Self-Organizing Systems, 10-14 Sep 2012, Lyon, France (2012)
[25] Lillian Lee, "Learning of Context-Free Languages: A Survey of the Literature", Technical Report TR-12-96, Harvard U. (1996)
[26] Meir M. Lehman, "Programs, Life Cycles, and Laws of Software Evolution", Proc. IEEE, vol.68, no.9, pp.1060-1076 (1980)
[27] M. M. Lehman, L. A. Belady (eds.), Program Evolution: Processes of Software Change, Academic Press (1985)
[28] Pattie Maes, "Computational Reflection", Technical Report 87-2, Vrije Universiteit Brussel, Artificial Intelligence Laboratory (1987)
[29] Pattie Maes, "Concepts and Experiments in Computational Reflection", pp.147-155 in Proceedings OOPSLA '87 (1987)
[30] Q. Ethan McCallum, Bad Data Handbook, O'Reilly (2012)
[31] Konstantin Mischaikow, Marian Mrozek, J. D. Reiss, Andrzej Szymczak, "Construction of Symbolic Dynamics from Experimental Time Series", Phys. Rev. Lett., vol.82, p.1144 (8 Feb 1999)
[32] Giovanni Petri, Martina Scolamiero, Irene Donato, Francesco Vaccarino, "Topological Strata of Weighted Complex Networks", PLOS One, vol.8, no.6, e66506 (2013)
[33] Ken Thompson, "Reflections on Trusting Trust", Comm. of the ACM, vol.27, no.8, pp.761-763 (Aug 1984), also at http://www.acm.org/classics/sep95/ (availability last checked 24 Jul 2016); see also the commentary in the "back door" entry of "The Jargon Lexicon", which is widely available on the Web, and in particular, is findable by searching with Google for "back door moby hack" (availability last checked 24 Jul 2016)
[34] Duncan J. Watts, Steven H. Strogatz, "Collective dynamics of 'small-world' networks", Nature, vol.393, pp.440-442 (1998)
[35] Duncan J. Watts, Small Worlds: The Dynamics of Networks between Order and Randomness, Princeton U. Press (2003)
[36] Herbert S. Wilf, generatingfunctionology, Academic Press (1990)
[37] Afra Zomorodian, "The Tidy Set: A Minimal Simplicial Set for Computing Homology of Clique Complexes", pp.257-266 in Proc. SoCG10: The 26th Symposium on Computational Geometry, 13-16 Jun 2010, Snowbird, Utah (2010)
[38] Afra Zomorodian, Topological Data Analysis: Proc. 2011 AMS Short Course on Computational Topology, 04-05 Jan 2011, New Orleans, Louisiana, Symposia in Applied Mathematics, vol.70, AMS (2012)