=Paper= {{Paper |id=Vol-2019/mrt_1 |storemode=property |title=Active System Integrity Fences |pdfUrl=https://ceur-ws.org/Vol-2019/mrt_1.pdf |volume=Vol-2019 |authors=Christopher Landauer |dblpUrl=https://dblp.org/rec/conf/models/Landauer17 }} ==Active System Integrity Fences== https://ceur-ws.org/Vol-2019/mrt_1.pdf
                                          Active System Integrity Fences

                                                      Christopher Landauer
                                     Topcy House Consulting, Thousand Oaks, California
                                                Email: topcycal@gmail.com

Abstract—This paper describes the architecture of a Wrapping-Based Self-Modeling System that includes all of the models and processes we have mentioned in our previous papers on the subject of methods for mitigating the effects of the Get Stuck theorems. To make the description more concrete, we have selected a particular application domain example: "active system integrity fences", which protect and defend a complex computing system or network (called the host system) from all enemies, foreign and domestic.
    An "active integrity fence" sits on an interface between two system components, with an explicit notion of the characteristics expected in the traffic (in both directions), derived from expectations provided at design time and observations collected at run time. We do not describe a particular host system, since our primary interest is in the protector system, and we expect that it applies to any host system for which we can specify the expected internal interactions.
    We define the relevant models and processes, and how they interact with each other and with the host system. We also provide a short description of the Wrapping infrastructure that holds the system together and organizes and executes all of its behavior, both protective and mission.
    Keywords: Self-Aware Systems, Self-Adaptive Systems, Self-Modeling Systems, Get Stuck Theorems, Behavior Mining, Dynamic Knowledge Management, Wrapping Integration Infrastructure

1. Introduction to the Problem

    This paper is a further continuation of and companion to previous papers on Self-Modeling Systems [31] [26], concerning new and extended methods for mitigating the effects of the "Get Stuck" Theorems (see [31] for a more recent explanation and [26] for a discussion of issues). These theorems basically say that the system needs to be able to rearrange its own knowledge structures for compactness and efficiency, if it expects to survive for long periods of time in a demanding environment [30] [31]. The mechanism for doing that must include comparison of design-time expectation models with run-time observations of behavior. Self-Modeling Systems will do these comparisons themselves, as much as feasible. Strictly speaking, though, this is a way to decide on adaptations, not an adaptation itself; the adaptation processes we use fall out of the Wrapping infrastructure.

1.1. Example: Active System Integrity Fences

    In this Subsection, we describe our application example: "active system integrity fences", which are modules charged with a single simple (if difficult) goal: explore system behavior and look for anomalies.
    We use the term "fences" intentionally, to emphasize a distinction in purpose: guards keep things out, wardens keep things in, watchers and monitors do neither, and fences do both.
    We consider complex engineered systems that are managed or controlled by software, but that have essential hardware components driven by the software. These systems may be distributed, and operate in environments that are too large, too remote, too hazardous, or too rapid for direct human operation. They therefore need a great deal of autonomy, and even local adaptivity.
    These systems are not entirely software. Hardware in systems is usually accompanied by very specific operating conditions, provided by the manufacturer. Complex hardware often has a large and diverse set of commands that it interprets, and part of the role of the active fences here is to guarantee that the command streams do not drive the hardware outside its operating envelope without a certain level of authorized over-ride (there are frequently multiple levels of envelopes for any hardware component, ranging from "not recommended" to "this will break it").
    The purpose of an active system integrity fence is to protect and defend a complex computing system or network (called the host system) from all enemies, foreign and domestic.
    An active integrity fence sits on an interface between two system components (or at the common entry interface of one component), with an explicit notion of the characteristics expected in the traffic (in both directions, both temporal behavior and semantics), derived from expectations provided at design time and observations collected at run time.
    In the simplest case, it can throttle the data volume to an acceptable level (by silently ignoring or pointedly rejecting input), but it can also use constraints on the contents of data to make similar decisions (e.g., for routing, acceptance, and even explicit rejection or tacit dropping of data).
    The criteria are a combination of "type" constraints on the content characteristics of the data and associated "allowed data volumes" (more complex examples have explicit state-machine protocol identifiers and constraints on their allowed transition volume).
    The main purpose is to identify unexpected consequences and prevent them from damaging system performance: for example, even after a system password compromise, the notion that certain external entities can access the system is a failure that should be rejected.
    What are the likely dangerous unexpected consequences?
   •   system leaks (provides data content it should not)
   •   system overloads
   •   system thrashes
   •   system forgets (loses data)
   •   system breaks (still runs, but wrong answers)
   •   system stops
    Available computational resources dictate whether the fence is continuous (checking whenever anything in the interface changes) or sporadic (occasional, periodic, or other event-based activity rule).
    Our focus here is on the computational mechanisms that enable this kind of protection.
    We can also consider mobile fences, moving around in the system or network, either transferring from one machine to another, or just changing focus on different interfaces (which clearly needs a map of the interfaces to define the space of movement).

1.2. Structure of Rest of Paper

    The structure of the rest of the paper is as follows. In Section 2, we provide some background on Wrappings, to make the paper more self-contained.
    In Section 3, we provide some comments on why we consider self-modeling systems, and how they relate to the self-* world. We also briefly introduce the "Get Stuck" theorems, which motivate this mitigation approach.
    In Section 4, we provide a notional system architecture, based on Wrappings, that is sufficiently flexible to allow the "behind-the-scenes" knowledge management that is performed by our mitigation processes, and within which the mitigation models interact.
    In Section 5, we provide a description of the models used in our application and in the mitigation processes, and describe the mitigation processes in more detail.
    Finally, in Section 6, we present our conclusions and prospects for further advances.

2. Background on Wrapping

    We provide a short description of Wrappings in this Section, since there are many other more detailed descriptions elsewhere [27] [25] [6], and especially the tutorials [33] [34]. The Wrapping integration infrastructure is our approach to run-time flexibility, with its run-time context-aware decision processes and computational resources. The basic idea is that Wrappings are Knowledge-Based interfaces to the uses of computational resources in context, and they are interpreted by processes that are themselves resources.
    The basic idea starts with the "Problem Posing" interpretation of programs [27], which replaces explicit invocation of computational resources with an implicit request to address a problem.
    Thus, programs interpreted in this style do not "call functions", "issue commands", or "send messages"; they "pose problems" (these are information service requests). Program fragments are not written as "functions", "modules", or "methods" that do things; they are written as "resources" that can be "applied" to problems (these are information service providers).
    Because we separate the problems from the applicable resources, we can use more flexible mechanisms for connecting them than simply using the same name.
    We have shown that this approach leads to some interesting flexibilities, when combined with the "meta-reasoning" approach of Wrappings [5], including such properties as software reuse without source code modification, delaying language semantics to run-time, and system upgrades by incremental migration instead of version-based replacement.
    We specifically want to make the mapping from problems to resources explicit, because implicit mechanisms are hard to study, so for Wrappings we use a Knowledge Base.
    The Wrapping integration infrastructure is defined by its two complementary aspects, the Wrapping Knowledge Bases and the Problem Managers.
    The Wrapping Knowledge Bases (WKBs) contain the Wrappings that map problems to resources in context. They define the entire set of problems that the system knows how to treat (there are usually also default problems that catch the ones otherwise not recognized). The mappings are problem-, problem parameter-, and context-dependent.
    The Problem Managers (PMs) are the programs that read WKBs and select and apply resources to problems. The meta-recursion follows because the PMs are also resources, and are Wrapped in exactly the same way as other resources, and are therefore available for the same flexible integration as any resources. These systems therefore have no privileged resource; anything can be replaced. Default PMs are provided with any Wrapping implementation, but the defaults can be superseded in the same way as any other resource. These are the processes that replace the implicit invocation process, allowing arbitrary processes to be inserted in the middle of the resource invocation process. This choice leads to very flexible systems [33] [34].
    The basic notion is the interaction of one very simple loop, called the "Coordination Manager", and a very simple planner, called the "Study Manager".
    The default Coordination Manager (CM) is responsible for keeping the system going. It has only three repeated steps, after an initial FC = Find Context step, as shown in Figure 1.
    To "Find Context" means to establish a context for problem study, possibly by requesting a selection from a user, but more often getting it explicitly or implicitly from the system invocation. It is our placeholder for conversions from that part of the system's invocation environment that
is necessary for the system to represent to whatever internal context structures are used by the system.
    To "Pose Problem" means to get a problem to study from the problem poser (a user or the system), which includes a problem name and some problem data, and to convert it into whatever kind of problem structure is used by the system (we expect this is mainly by parsing of some kind).
    To "Study Problem" means to use an SM and the Wrappings to study the given problem in the given context, and to "Assimilate Results" means to use the result to affect the current context, which may mean to tell the poser what happened. Each step is a problem posed to the system by the CM, which then uses the default SM to manage the system's response to the problem. The first problem, "Find Context", is posed by the CM in the initial context of "no context yet", or in some default context determined by the invocation style of the program.
    The main purpose of the default CM is cycling through the other three problems, which are posed by the CM in the context found by the first step. This way of providing context and tasking for the SM is familiar from many interactive programming environments: the "Find Context" part is usually left implicit, and the rest is exactly analogous to LISP's "read-eval-print" loop, though with very different processing at each step, mediated by one of the SMs. In this sense, this CM is a kind of "heartbeat" that keeps the system moving.
    If the Coordination Manager is the basic cyclic program heartbeat, then the Study Manager is a planner that organizes the resource applications. The CM and SM interact as shown schematically in Figure 1.

[Figure 1 shows the CM loop (an initial Find Context step, then a repeating cycle of Pose Problem, Study Problem, and Assimilate Results) and the SM steps that implement Study Problem (Match Resources, Resolve Resources, Select Resource, Adapt Resource, Advise Poser, Apply Resource, and Assess Results); Apply Resource is the step that invokes the resource to do whatever it does.]
                    Figure 1. CM and SM Steps

    We have divided the "Study Problem" process into three main steps: "Interpret Problem", which means to find a resource to apply to the problem; "Apply Resource", which means to apply the resource to the problem in the current context; and "Assess Results", which means to evaluate the result of applying the resource, and possibly posing new problems. We further subdivide problem interpretation into five steps, which organize it into a sequence of basic steps that we believe represent a fundamental part of problem study and solution. These are implemented in the default Study Manager (SM).
    To "Match Resources" is to find a set of resources that might apply to the current problem in the current context. It is intended to allow a superficial first pass through a possibly large collection of Wrapping Knowledge Bases.
    To "Resolve Resources" is to eliminate those that do not apply. It is intended to allow negotiations between the posed problem and each Wrapping of the resource to determine whether or not it can be applied, and make some initial bindings of formal parameters of resources that still apply.
    To "Select Resource" is simply to make a choice of which of the remaining candidate resources (if any) to use.
    To "Adapt Resource" is to set it up for the current problem and problem context, including finishing all required bindings.
    To "Advise Poser" is to tell the problem poser (who could be a user or another part of the system) what is about to happen, i.e., what resource was chosen and how it was set up to be applied.
    To "Apply Resource" is to use the resource for its information service, which either does something, presents something, or makes some information or service available.
    To "Assess Results" is to determine whether the application succeeded or failed, and to help decide what to do next.
    Finally, we insist that every step in the above sequences is actually a posed problem, and is treated in exactly the same way as any other, which makes these sequences "meta"-recursive [1]. That means that if we have any knowledge at all that a different planner may be more appropriate for the context and application at hand, we can use it (after defining the appropriate context conditions), either to replace the default SM when it is applicable, or to replace individual steps of the SM, according to that context (which can be selected at run time).
    Of course, we also have to have something to replace or supersede. We have therefore provided default resources for each of the CM and SM steps, to be used when no other is selected to supersede it (as the above SM is the default resource for the problem "Study Problem"). A simple complication occurs with the default among many possible resources for the "Select Resource" problem: we want to allow other resources to be used, so we insist that the default resource (which otherwise might just pick the first resource on the list) not pick itself if there is another choice when it is addressing the "Select Resource" problem.
    In addition, since the resources that read the WKB are selected in context as is any other, the WKB can be heterogeneous, with context determining which reader is used for which format of Knowledge Base. This helps greatly for implementing improvements to programs, since the new and
old formats can exist simultaneously, until the old format is no longer needed.
    We have used these algorithms many times to explain and implement autonomous and reflective agents and systems [28] [29], and shown that they provide the appropriate level of manageable flexibility and auditable integration. The advantage in flexibility this approach provides over other activity loops that have been proposed is that the SM and CM steps are "meta"-steps, with posed problems for the activities, allowing one further level of abstraction and indirection when it is useful. There are a number of other activity loops that we have seen described in various places [5], especially the popular MAPE-K loop of autonomic computing [21] [24] [3] [40], and we have shown that our CM / SM meta-recursive interaction subsumes all of them. The meta-interpretation style [1] of Wrappings [27] can of course be applied to any of them to make them much more flexible.
    We have implemented several different kinds of CMs in addition to the simple default CM defined above. There are CMs that short cut the reflection by calling the default step resources directly, and fully recursive versions that have extra levels of problem posing. Some of them are described in other papers in the references.
    We have also used different SMs, beyond the default one that tries only one resource: one SM tries all applicable resources and returns with the first success, another tries them all and evaluates them to return the best success, and one collects all successes and summarizes. There are also different kinds of SM steps. The Match and Resolve resources that read XML WKBs are different from the ones that read text-only WKBs. A different Match or Resolve might invoke a more sophisticated planner if there are no matches. A different Select might choose all compatible resources, then negotiate among them. Different versions of Apply, beyond the default function call, might send a request message, or invoke an interpreter or other process. Another one might simply add the resource to a configuration, instead of invoking it.
    Wrapping-based systems support run-time decisions about which resources to apply in the current context, both at the application level (the resources that perform the task at hand) and at the meta-level (the resources that are used to select and organize the application level resources). This flexibility does come with a cost, but there are also mechanisms based on partial evaluation [13] [20] [27] [41] [15] for removing any decisions that will be made the same way every time, thus leaving the costs where the variabilities need to be.

3. Self-Modeling Systems

    Our approach is related to the architectures of robots and autonomous vehicles [2] [37], but we add some features not computationally feasible at the beginning. Our systems are Self-Adaptive [6] [9] [23], which means that they can observe their own behavior, reason about it, and use the results to adjust the behavior.
    We are especially enamored of Self-Modeling Systems [28] [29], which have models of their own behavior, derived from original specifications of intent (from the designers), as modified by observation of actual behavior, because (in this author's paraphrase):
        No plan ... extends with any certainty beyond the
        first contact with ... [reality] (reference [39], p. 92)
These models are interpreted to produce the system's behavior. That is to say, the behavior, sometimes including the interpreter itself, is the interpretation of the models (this is not as hard as it seems [42] [28] [31]).
    The reason for self-modeling systems is to retain as much flexibility in the operational system as possible, and to allow the system to depart wildly from its original design specifications (under carefully controlled or appropriately identified situations, of course). Additionally, it allows the system to examine itself, looking for anomalies:
        O wad some Power the giftie gie us
        To see oursels as ithers see us! [10]
    This architectural approach also supports the processes that mitigate the effects of the "Get Stuck" Theorems, which essentially say that any software-intensive system that makes models of its environment and behavior will eventually need to reorganize its knowledge structures. To that end, we defined several mitigation processes in [26] [32], and this paper is a description of a notional architecture in which to implement the processes.

4. Architecture

    In this Section, we describe an architectural context for the mitigation processes, using a Wrapping infrastructure to provide the flexibility of operations that we want. In the next Section, we describe the processes in a little more detail and show how they might interact.
    A notional picture of the architecture under consideration is in Figure 2.
    The top part of the picture is the usual Wrapping CM / SM loop, accessing the WKB in context to apply resources to posed problems, possibly also adjusting the context (and allowing the context to be changed by the system environment). This is the standard behavior of a Wrapping-based system.
    The other processes in the main system, that is, the ones in the application domain, are invoked by the SM, and build and maintain the domain knowledge bases that are the focus of the mitigation effort (though, of course, the same mitigation processes apply equally well to the Wrapping Knowledge Base, due to reflection).
    The mitigation processes BM, KnRef, and DKM collect information from the choices made in the SM and adjust the WKB, and they collect information from the domain knowledge bases and adjust them also.

5. Description of Models Used

    There are three classes of models (processes): the infrastructure models, such as the CM, SM, and other PMs;
the application models that perform the system's objective activity, and the mitigation models that sit behind that activity and keep it running.

[Figure 2 shows the CM / SM loop at the top, reading the WKB and the context; the domain processes produce the system behavior, using the domain knowledge bases and resources; the mitigation processes KnRef, DKM, and BM sit below, adjusting both the WKB and the domain knowledge bases.]
                    Figure 2. Notional Architecture

5.1. Infrastructure Models

    The infrastructure has been described before, in Section 2. It contains the default PMs, which are the various flavors of SM (the default is a straight-line plan; others are try all until success, try all and combine, and others). These can be fully recursive or not.
    It contains the various flavors of CM used in the system, such as the default simple loop and the Mud CM for distributed components, and these can be recursive or not. It contains the WKB and the context knowledge base, and the reflective nature of Wrappings allows both of them to be heterogeneous, since the knowledge base readers are also resources, selected according to posed problems in the usual way.

5.2. Fences Application Models

    These are the models that do the fences application work. Each fence is responsible for one class of interfaces (often just one interface).
   •    instrumentation: record summaries of data types and volume for all (or just representative) data crossing the interface in either direction; in certain cases, the content matters also;
   •    movement: for mobile fences, decisions about when and where to move (perhaps to track down a problem or just randomly spot check); this process clearly needs a map of system interfaces;
   •    examination: build models of the data crossing the interface (in both directions), infer actual protocol with performance measurements; in the simplest case, just measuring close approaches to acceptability boundaries is enough of a model;
   •    retention: store these models in a system domain knowledge base;
   •    communication: announce current status and observed behavior, trends and variability distributions; interact with other fences to discover and isolate problems; announce to the system that there is an issue in certain places (so that the context can be changed to avoid them);
   •    cooperation: interact with other fences to discover and isolate problems; apply a requested data throttle at an interface;
   •    escalation: when a problem gets dangerous or large or widespread enough (criteria supplied by the designers), ask for help.
    Each of these processes has relatively simple inputs, and only the examination process has any difficult algorithms. Most of the difficulty lies in the criteria for data extraction, modeling, and escalation.

5.3. Mitigation Models

    These are the ones that do the mitigations [31] [26]:
   •    Behavior Mining
   •    Model Deficiency Analysis
   •    Knowledge Refactoring
   •    Dynamic Knowledge Management
   •    Constructive Forgetting
   •    Continual Contemplation
We describe the intent of each of these processes and how they might interact.

5.3.1. Behavior Mining. This process examines every decision in context, recording the following data:
   •    problem with parameters,
   •    relevant context,
   •    resource application selected and constructed.
The relevant context is the set of context conditions that were examined and succeeded for the selection. In fancier cases, this will also include resources not selected for this problem, and why (according to the context conditions that eliminated them).
    In general, problems are stratified into layers, based on their resolution (in time, space, or concept). Strictly speaking, this should be called semiotic content, but that discussion leads beyond the scope of this paper.
    The BM process also examines changes to the domain knowledge base, looking for infelicities, which can be statistical or even esthetic (unbalanced trees, different amounts of detail in different areas, unique or very low frequency
to the esthetic criteria in the BM process: if we consider
references, which often results from specification errors). In    any knowledge representation mechanism as a graph with
this case, the models are “plausibility” models, since there      labeled nodes and directed labeled edges, then nodes with
is no available basis to declare them correct or incorrect.       an excessive number of edges may be too general, and nodes
    The BM process then feeds its results to the MDA              with too few edges may be too specific.
process for resolution.                                               This process is also sensitive to the access patterns
                                                                  of the knowledge elements. We want elements that are
5.3.2. Model Deficiency Analysis. This process attempts           accessed very often to have shorter access paths, which can
to determine where models have gone wrong (this is a              distort any nice a priori ontology (the goal in this case is
retrospective analysis, not predictive). The simplest form of     not readability; it is efficiency; other semantics-preserving
this begins with a behavioral assertion about model effects       refactorings may be needed for readability).
that has been violated.                                               This process is intended to be transparent to the users
    We expect the assertions to be provided initially by          of the knowledge bases, in the sense that their access mech-
the designers, since models are expected to provide some          anisms remain exactly the same; only the resulting search
information service, and the model creators decide what that      performance is affected.
is. After all, there was a purpose for creating the model in
the first place, and these assertions define the designers’       5.3.4. Dynamic Knowledge Management. This process
expectations for it.                                              organizes the knowledge for quick reasoning; it subsumes
    Every behavioral assertion involves some of the vari-         Constructive Forgetting (a mitigation from [31] [26]). We
ables within the model (or some performance parameters).          called that one out explicitly because it is not normally
In the most intrusive case, every change to any of those          acceptable to throw knowledge away, but we have shown
variables causes the assertion to be (it is possible to reduce    that it is inevitable.
this a little bit if if can be proven that the change cannot          The idea here is that there are different frequencies of
make an assertion go fro true to false).                          access for different parts of the knowledge base, and re-
    When an assertion fails, the hard part begins: the assign-    arranging the knowledge base can take that into account.
ment of blame, that is, how can the system decide where               The simplest version of this is to pushd the Least
the failure is and who did it. This is especially difficult       Recently Used (LRU) knowledge elements off to longer
for assertions in the usual kinds of first-order logic, since     access paths (this is much the same process as defining
mathematically there is no such culprit.                          Huffman codes [19] [36] [14]). This measure of recency
    However, the assertions are not the only information          can be weighted by importance of consequences and gath-
available to the MDA process. It also has access to the           ered by relevant context (big context change implies much
component behavior models constructed by the BM process,          knowledge re-ordering). There are also other relevant factors
and sometimes it can use those to decide which part of an         that will be different for each different application.
assertion is less likely to be wrong. This assessment can
often be done by maintaining a reliability “reputation” for       5.3.5. Continual Contemplation. This process is the gen-
each of the assertions, each of its components and each of        eral term for all the background concurrent examination
the processes that produce the variables tha occur in those       processes that examine everything: some examine as-is mod-
components. The reputation of an assertion component is           els (from Behavior Mining), and compare with as-designed
enhanced by its success frequency (tempered by a measure          models (from system definition). These functions are de-
of how well the input data to each variable producer fits the     scribed earlier in this Section.
input assumptions).
    In addition, the MDA process gets warnings from the
BM process about models that may be incorrect due to              6. Conclusions and Prospects
statistical or esthetic considerations. It tries to decide when
a strange structure is an error and when it is just strange.          We have argued that the flexibility of self-modeling
Of course, it can’t really do that, mainly for undecideability    systems make them well-suited to handle remote, complex,
reasons, but it can discover certain kinds of problems and        and / or hostile environments, but that flexibility comes with
announce the others to the system monitors to try to get          the obvious cost in performance, and a less obvious cost
help. If no help is forthcoming, then the issue is simply         in organization. How much of this infrastructure machinery
recorded as a problem that the MDA process cannot solve,          is used in any given application is an application-specific
and if enough instances of those occur, it escalates the issue    engineering decision that depends on the expected level of
to a deficiency warning.                                          hazard and the required level of performance.
                                                                      We have shown that the Wrapping infrastructure sup-
5.3.3. Knowledge Refactoring. This process is partly              ports a very flexible interweaving of domain resources and
housekeeping: re-arranging knowledge for efficiencies: it         infrastructure resources, and can now include processes that
can be applied to the context conditions for the same             mitigate the effects of the “Get Stuck” Theorems, which
problem (e.g., to get the shortest average time for decision,     limit the lifetime of any autonomous system that builds and
given the time distribution of conditions). It is also related    maintains models.
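    The reliability “reputation” bookkeeping of Section 5.3.2 can be grounded with a minimal sketch. The names and the particular update rule (an exponentially weighted success frequency, with blame assigned to the lowest-scoring component) are our own illustrative assumptions; the paper does not prescribe an implementation:

```python
# Hypothetical sketch of MDA reputation tracking (Section 5.3.2).
# Each assertion component keeps a reliability score that rises with
# successful checks and falls when its assertion is violated.

class Reputation:
    """Exponentially weighted success frequency for one assertion component."""

    def __init__(self, decay: float = 0.9):
        self.decay = decay   # weight given to history vs. the newest check
        self.score = 0.5     # neutral prior: no evidence either way

    def record(self, success: bool) -> None:
        # Blend each new observation into the running score, so old
        # evidence fades and recent behavior dominates.
        outcome = 1.0 if success else 0.0
        self.score = self.decay * self.score + (1 - self.decay) * outcome


def assign_blame(components: dict) -> str:
    """When an assertion fails, suspect the component with the worst reputation."""
    return min(components, key=lambda name: components[name].score)


if __name__ == "__main__":
    comps = {"sensor_model": Reputation(), "threshold_rule": Reputation()}
    for _ in range(20):
        comps["sensor_model"].record(True)   # consistently succeeds
    for ok in [True, False, False, True, False]:
        comps["threshold_rule"].record(ok)   # frequently violated
    print(assign_blame(comps))               # the least reliable component
```

The decay constant and the neutral prior are tunable assumptions; any monotone estimate of success frequency, tempered as the paper suggests by how well the inputs fit their assumptions, would serve the same role.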
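    Similarly, the Huffman-code analogy in Section 5.3.4 [19] can be made concrete: treating access counts as symbol frequencies, a Huffman-style merge assigns each knowledge element a path length, so frequently accessed elements end up on shorter access paths. This is a minimal sketch under our own naming, not the system’s actual mechanism:

```python
# Illustrative sketch of Dynamic Knowledge Management re-ordering
# (Section 5.3.4): rarely used knowledge elements are pushed deeper,
# analogously to Huffman code construction.

import heapq
from itertools import count

def access_path_lengths(access_counts: dict) -> dict:
    """Build a Huffman-style tree over access frequencies and return the
    resulting path length (tree depth) for each knowledge element."""
    tie = count()  # tie-breaker so heapq never compares the dict payloads
    heap = [(freq, next(tie), {name: 0}) for name, freq in access_counts.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # a single element sits at the root
        return {name: 0 for name in access_counts}
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every element in them one level deeper.
        merged = {name: depth + 1 for name, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

if __name__ == "__main__":
    counts = {"hot_rule": 90, "warm_rule": 8, "cold_rule": 2}
    depths = access_path_lengths(counts)
    # The most frequently accessed element ends up on the shortest path.
    assert depths["hot_rule"] <= depths["cold_rule"]
    print(depths)
```

As the paper notes, raw recency or frequency would in practice be weighted by importance of consequences and by context, so the counts fed to such a procedure are themselves application-specific.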
    One of the most exciting prospects is that the system has enough knowledge about its own behavior that it can explain what it is doing, or what it is about to do, and why, and most particularly, what it is not doing and why (the SM manages the selection, so it has the data to explain what it does not select). This requires the system to reason about the situation it is in [4], so that it can describe it.
    A key advance in the state of the reasoning processes would be to provide tools for them to reason about incomplete and inconsistent information [7] [8] [12], since that describes essentially all of the system’s knowledge. Similarly, various kinds of advanced learning methods [11] [38] [18] [17] could be applied in the model building processes, to avoid needing to specify a model type or structure in advance. These could be of great use to the model building processes in these systems.
    However, learning methods such as XCS [43] [44] have little place here, since the behaviors in these systems are not well modeled by MDPs (Markov Decision Processes) or even POMDPs (Partially Observable MDPs) in most cases, and these methods are not model-free; they assume a state transition model that can be described as a POMDP.
    We also described mobile fences, moving around in the system or network, with the responsibility to explore, detect, decide, and act or escalate. If these fences are also knowledgeable about more of the system behavioral expectations, then they should be able to detect certain software errors, in addition to external anomalies.
    We can also imagine some physical existence for “touch points” (like the little blue police / watchman boxes for periodic checking in), so the fence can monitor and examine its own progress.
    We think that these systems can be made much safer than they are now, but that requires an engineering-judgment-based choice of how much protective infrastructure to include.

References

[1]  Harold Abelson, Gerald Sussman, with Julie Sussman, The Structure and Interpretation of Computer Programs, Bradford Books, now MIT (1985)
[2]  James S. Albus, Alexander M. Meystel, Engineering of Mind: An Introduction to the Science of Intelligent Systems, Wiley (2001)
[3]  Paolo Arcaini, Elvinia Riccobene, Patrizia Scandurra, “Modeling and Analyzing MAPE-K Feedback Loops for Self-Adaptation”, Proceedings SEAMS 2015: The 2015 IEEE/ACM 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 18-19 May 2015, Florence, Italy (2015)
[4]  Jon Barwise, The Situation in Logic, CSLI Lecture Notes No. 17, Center for the Study of Language and Information, Stanford U. (1989)
[5]  Kirstie L. Bellman, Christopher Landauer, Phyllis R. Nelson, “Managing Variable and Cooperative Time Behavior”, Proceedings SORT 2010: The First IEEE Workshop on Self-Organizing Real-Time Systems, 05 May 2010, part of ISORC 2010: The 13th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing, 05-06 May 2010, Carmona, Spain (2010)
[6]  Kirstie L. Bellman, Christopher Landauer, Phyllis Nelson, Nelly Bencomo, Sebastian Götz, Peter Lewis and Lukas Esterle, “Self-modeling and Self-awareness”, Chapter 9, pp. 279-304 in [22]
[7]  Leopoldo E. Bertossi, Anthony Hunter, Torsten Schaub (eds.), Inconsistency Tolerance, Springer Lecture Notes in Computer Science, Volume 3300, Springer Verlag (2004)
[8]  Jean-Yves Beziau, Walter Carnielli and Dov Gabbay (eds.), Handbook of Paraconsistency, King’s College (2007)
[9]  Robert Birke, Javier Cámara, Lydia Y. Chen, Lukas Esterle, Kurt Geihs, Erol Gelenbe, Holger Giese, Anders Robertsson and Xiaoyun Zhu, “Self-aware Computing Systems: Open Challenges and Future Research Directions”, Chapter 26, pp. 709-722 in [22]
[10] Robert Burns, “To a Louse” (1768); Robert Burns in Your Pocket, Waverley Press (2009); along with many other collections and web sites
[11] Jaime G. Carbonell, “Learning by Analogy: Formulating and Generating Plans from Past Experience”, pp. 137-161 in [38]
[12] Walter A. Carnielli, M. E. Coniglio and J. Marcos, “Logics of Formal Inconsistency”, pp. 15-107 in [16]
[13] C. Consel, O. Danvy, “Tutorial Notes on Partial Evaluation”, pp. 493-501 in Proceedings PoPL 1993: The 20th ACM Symposium on Principles of Programming Languages, 10-13 January 1993, Charleston, SC (January 1993)
[14] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, Introduction to Algorithms, MIT Press (1990); Second Edition, McGraw-Hill (2001), Section 16.3, pp. 385-392
[15] Marcus Denker, Orla Greevy, Michele Lanza, “Higher Abstractions for Dynamic Analysis”, pp. 32-38 in Proceedings PCODA’2006: the 2nd International Workshop on Program Comprehension through Dynamic Analysis, Technical report 2006-11 (2006)
[16] D. Gabbay, F. Guenthner (eds.), Handbook of Philosophical Logic, vol. 14, Reidel (2007)
[17] Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, MIT (2016)
[18] Trevor Hastie, Robert Tibshirani, Jerome Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer (2009), 2nd ed. Springer (2016)
[19] David A. Huffman, “A Method for the Construction of Minimum-Redundancy Codes”, Proceedings of the IRE, v.40, no.9, pp. 1098-1101 (1952)
[20] N. D. Jones, “Partial Evaluation”, Computing Surveys, Volume 28, No. 3 (September 1996)
[21] J. Kephart, “Feedback on feedback in autonomic computing systems”, in Proceedings FC 2012: the 7th International Workshop on Feedback Computing, San Jose, California (2012)
[22] Samuel Kounev, Jeffrey O. Kephart, Aleksandar Milenkoski, Xiaoyun Zhu (eds.), Self-Aware Computing Systems, Springer (2017)
[23] Samuel Kounev, Peter Lewis, Kirstie L. Bellman, Nelly Bencomo, Javier Cámara, Ada Diaconescu, Lukas Esterle, Kurt Geihs, Holger Giese, Sebastian Götz, Paola Inverardi, Jeffrey O. Kephart and Andrea Zisman, “The Notion of Self-aware Computing”, Chapter 1, pp. 3-16 in [22]
[24] Philippe Lalanda, Julie A. McCann, and Ada Diaconescu, Autonomic Computing: Principles, Design and Implementation, Undergraduate Topics in Computer Science Series, Springer (2013)
[25] Christopher Landauer, “Infrastructure for Studying Infrastructure”, Proceedings ESOS 2013: Workshop on Embedded Self-Organizing Systems, 25 June 2013, San Jose, California; part of 2013 USENIX Federated Conference Week, 24-28 June 2013, San Jose, California (2013)
[26] Christopher Landauer, “Mitigating the Inevitable Failure of Knowledge Representation”, Proceedings M@RT@ICAC 2017: The 2nd International Workshop on Models@run.time for Self-aware Computing Systems, Part of ICAC 2017: The 14th International Conference on Autonomic Computing, 17-21 July 2017, Columbus, Ohio (2017)
[27] Christopher Landauer, Kirstie L. Bellman, “Generic Programming, Partial Evaluation, and a New Programming Paradigm”, Chapter 8, pp. 108-154 in Gene McGuire (ed.), Software Process Improvement, Idea Group Publishing (1999)
[28] Christopher Landauer, Kirstie L. Bellman, “Self-Modeling Systems”, pp. 238-256 in R. Laddaga, H. Shrobe (eds.), “Self-Adaptive Software”, Springer Lecture Notes in Computer Science, vol. 2614 (2002)
[29] Christopher Landauer, Kirstie L. Bellman, “Managing Self-Modeling Systems”, in R. Laddaga, H. Shrobe (eds.), Proceedings Third International Workshop on Self-Adaptive Software, 09-11 June 2003, Arlington, VA (2003)
[30] Christopher Landauer, Kirstie L. Bellman, “Model-Based Cooperative System Engineering and Integration”, Proceedings SiSSy 2016: 3rd Workshop on Self-Improving System Integration, 19 July 2016, part of ICAC 2016: 13th IEEE International Conference on Autonomic Computing, 19-22 July 2016, Wuerzburg, Germany (2016)
[31] Christopher Landauer, Kirstie L. Bellman, “Self-Modeling Systems Need Models at Run Time”, Proceedings M@RT 2016: the 11th International Workshop on Models@run.time, 04 October 2016, Part of ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems, 02-07 October 2016, Palais du Grand Large, Saint Malo, Brittany, France (2016)
[32] Christopher Landauer, Kirstie L. Bellman, “An Architecture for Self-Awareness Experiments”, Proceedings SeAC 2017: 2nd Workshop on Self-Aware Computing, Part of ICAC 2017: The 14th International Conference on Autonomic Computing, 17-21 July 2017, Columbus, Ohio (2017)
[33] Christopher Landauer, Kirstie L. Bellman, Phyllis R. Nelson, “Wrapping Tutorial: How to Build Self-Modeling Systems”, Proceedings SASO 2012: The 6th IEEE Intern. Conf. on Self-Adaptive and Self-Organizing Systems, 10-14 Sep 2012, Lyon, France (2012)
[34] Christopher Landauer, Kirstie L. Bellman, Phyllis R. Nelson, “Wrapping Tutorial: How to Build Self-Modeling Systems”, Proceedings CogSIMA 2013: 2013 IEEE Intern. Inter-Disciplinary Conf. Cognitive Methods for Situation Awareness and Decision Support, 25-28 February 2013, San Diego, California (2013)
[35] Christopher Landauer, Kirstie L. Bellman, Phyllis R. Nelson, “Modeling Spaces for Real-Time Embedded Systems”, Proceedings SORT 2013: The Fourth IEEE Workshop on Self-Organizing Real-Time Systems, 20 June 2013, part of ISORC 2013: The 16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing, 19-21 June 2013, Paderborn, Germany (2013)
[36] Jan Van Leeuwen, “On the construction of Huffman trees”, pp. 382-410 in Proceedings ICALP 1976: the Third International Colloquium on Automata, Languages and Programming, 20-23 July 1976, Edinburgh (1976)
[37] Alexander M. Meystel, James S. Albus, Intelligent Systems: Architecture, Design, and Control, Wiley (2002)
[38] Ryszard S. Michalski, Jaime G. Carbonell, Tom M. Mitchell (eds.), Machine Learning: An Artificial Intelligence Approach, Tioga Press, Palo Alto, CA (1983)
[39] Helmuth Karl Bernhard Graf von Moltke, On Strategy (in German), translated in Daniel J. Hughes and Harry Bell, Moltke on the Art of War: Selected Writings, Presidio Press (1993); paperback Presidio Press (1995)
[40] E. Rutten, “Feedback Control as MAPE-K loop in Autonomic Computing”, Research Report RR-8827, INRIA Sophia Antipolis - Méditerranée, INRIA Grenoble - Rhône-Alpes (10 Dec 2015)
[41] Gregory T. Sullivan, “Dynamic Partial Evaluation”, Proceedings PADO II: Second Symposium on Programs as Data Objects, 21-23 May 2001, Aarhus, Denmark (2001)
[42] Ken Thompson, “Reflections on Trusting Trust”, Comm. of the ACM, vol.27, no.8, pp. 761-763 (Aug 1984), http://dl.acm.org/citation.cfm?id=358210 (availability last checked 03 Apr 2017); see also the “back door” entry of “The Jargon File”, widely available on the Web, and other comments findable by searching for “back door Ken Thompson moby hack” (availability last checked 03 Apr 2017)
[43] Stewart W. Wilson, “Classifier Fitness Based on Accuracy”, Evolutionary Computation, v.3, no.2, pp. 149-175 (1995)
[44] Stewart W. Wilson, “Generalization in the XCS Classifier System”, pp. 665-674 in John R. Koza et al. (eds.), Proceedings GP 1998: the Third Annual Conference on Genetic Programming, 22-25 July 1998, University of Wisconsin, Madison, Wisconsin (1998)