<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Christopher Landauer Topcy House Consulting</institution>
          ,
          <addr-line>Thousand Oaks, California</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper describes the architecture of a Wrapping-Based Self-Modeling System that includes all of the models and processes we have mentioned in our previous papers on the subject of methods for mitigating the effects of the Get Stuck theorems. To make the description more concrete, we have selected a particular application domain example: “active system integrity fences”, which protect and defend a complex computing system or network (called the host system) from all enemies, foreign and domestic.</p>
      </abstract>
      <kwd-group>
        <kwd>Self-Aware Systems</kwd>
        <kwd>Self-Adaptive Systems</kwd>
        <kwd>Self-Modeling Systems</kwd>
        <kwd>Get Stuck Theorems</kwd>
        <kwd>Behavior Mining</kwd>
        <kwd>Dynamic Knowledge Management</kwd>
        <kwd>Wrapping Integration Infrastructure</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction to the Problem</title>
      <p>
        This paper is a further continuation of and companion
to previous papers on Self-Modeling Systems [
        <xref ref-type="bibr" rid="ref24">31</xref>
        ] [
        <xref ref-type="bibr" rid="ref19">26</xref>
        ],
concerning new and extended methods for mitigating the
effects of the “Get Stuck” Theorems (see [
        <xref ref-type="bibr" rid="ref24">31</xref>
        ] for a more
recent explanation and [
        <xref ref-type="bibr" rid="ref19">26</xref>
        ] for a discussion of issues). These
theorems basically say that the system needs to be able to
rearrange its own knowledge structures for compactness and
efficiency, if it expects to survive for long periods of time
in a demanding environment [
        <xref ref-type="bibr" rid="ref23">30</xref>
        ] [
        <xref ref-type="bibr" rid="ref24">31</xref>
        ]. The mechanism for
doing that must include comparison of design-time
expectation models with run-time observations of behavior.
Self-Modeling Systems will do these comparisons themselves,
as much as is feasible. Strictly speaking, though, this is a
way to decide on adaptations, not an adaptation itself;
the adaptation processes we use fall out of the Wrapping
infrastructure.
      </p>
      <sec id="sec-1-1">
        <title>1.1. Example: Active System Integrity Fences</title>
        <p>In this Subsection, we describe our application example:
“active system integrity fences”, which are modules charged
with a single simple (if difficult) goal: explore system
behavior and look for anomalies.</p>
        <p>We use the term “fences” intentionally, to emphasize a
distinction in purpose: guards keep things out, wardens keep
things in, watchers and monitors do neither, and fences do
both.</p>
        <p>We consider complex engineered systems that are
managed or controlled by software, but that have essential
hardware components driven by the software. These
systems may be distributed, and operate in environments that
are too large, too remote, too hazardous, or too rapid for
direct human operation. They therefore need a great deal of
autonomy, and even local adaptivity.</p>
        <p>These systems are not entirely software. Hardware in
systems is usually accompanied by very specific operating
conditions, provided by the manufacturer. Complex
hardware often has a large and diverse set of commands that
it interprets, and part of the role of the active fences here
is to guarantee that the command streams do not drive the
hardware outside its operating envelope without a certain
level of authorized over-ride (there are frequently multiple
levels of envelopes for any hardware component, ranging
from “not recommended” to “this will break it”).
        <p>The purpose of an active system integrity fence is to
protect and defend a complex computing system or network
(called the host system) from all enemies, foreign and
domestic.</p>
        <p>An active integrity fence sits on an interface between two
system components (or at the common entry interface of one
component), with an explicit notion of the characteristics
expected in the traffic (in both directions, both temporal
behavior and semantics), derived from expectations provided
at design time and observations collected at run time.</p>
        <p>In the simplest case, it can throttle the data volume to an
acceptable level (by silently ignoring or pointedly rejecting
input), but it can also use constraints on the contents of data
to make similar decisions (e.g., for routing, acceptance, and
even explicit rejection or tacit dropping of data).</p>
        <p>The criteria are a combination of “type” constraints on
the content characteristics of the data and associated
“allowed data volumes” (more complex examples have explicit
state-machine protocol identifiers and constraints on their
allowed transition volume).</p>
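        <p>As an illustration only (the field names, limits, and decisions below are our assumptions, not part of the paper), such criteria reduce to a small check combining a type constraint with an allowed-volume budget:</p>

```python
# Sketch of an active fence's acceptance criteria (illustrative; the
# field names and limits are assumptions, not from the paper).

def make_fence(allowed_types, max_volume_per_window):
    """Return a check(record_type, record_size, window_volume) function."""
    def check(record_type, record_size, window_volume):
        # "Type" constraint on the content characteristics of the data.
        if record_type not in allowed_types:
            return "reject"        # pointedly reject unexpected content
        # "Allowed data volume" constraint: throttle when over budget.
        if window_volume + record_size > max_volume_per_window:
            return "drop"          # silently ignore to throttle volume
        return "accept"
    return check

fence = make_fence(allowed_types={"telemetry", "command"},
                   max_volume_per_window=1000)
print(fence("telemetry", 100, 500))   # accept
print(fence("shellcode", 10, 0))      # reject
print(fence("command", 600, 500))     # drop
```

        <p>More elaborate fences would replace the type set with the state-machine protocol identifiers mentioned above.</p>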
        <p>The main purpose is to identify unexpected
consequences and prevent them from damaging system
performance: for example, even after a system password
compromise, the notion that certain external entities can access the
system is a failure that should be rejected.</p>
        <p>What are the likely dangerous unexpected
consequences?</p>
        <list list-type="bullet">
          <list-item><p>system leaks (provides data content it should not)</p></list-item>
          <list-item><p>system overloads</p></list-item>
          <list-item><p>system thrashes</p></list-item>
          <list-item><p>system forgets (loses data)</p></list-item>
          <list-item><p>system breaks (still runs, but gives wrong answers)</p></list-item>
          <list-item><p>system stops</p></list-item>
        </list>
        <p>Available computational resources dictate whether the
fence is continuous (checking whenever anything in the
interface changes) or sporadic (governed by an occasional,
periodic, or other event-based activation rule).</p>
        <p>Our focus here is on the computational mechanisms that
enable this kind of protection.</p>
        <p>We can also consider mobile fences, moving around in
the system or network, either transferring from one machine
to another, or just changing focus on different interfaces
(which clearly needs a map of the interfaces to define the
space of movement).</p>
      </sec>
      <sec id="sec-1-2">
        <title>1.2. Structure of Rest of Paper</title>
        <p>The structure of the rest of the paper is as follows. In
Section 2, we provide some background on Wrappings, to
make the paper more self-contained.</p>
        <p>In Section 3, we provide some comments on why we
consider self-modeling systems, and how they relate to
the self-* world. We also briefly introduce the “Get Stuck”
theorems, which motivate this mitigation approach.</p>
        <p>In Section 4, we provide a notional system architecture,
based on Wrappings, that is sufficiently flexible to allow
the “behind-the-scenes” knowledge management that is
performed by our mitigation processes, and within which the
mitigation models interact.</p>
        <p>In Section 5, we provide a description of the models
used in our application and in the mitigation processes, and
describe the mitigation processes in more detail.</p>
        <p>Finally, in Section 6, we present our conclusions and
prospects for further advances.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Background on Wrapping</title>
      <p>
        We provide a short description of Wrappings in this
Section, since there are many other more detailed
descriptions elsewhere [
        <xref ref-type="bibr" rid="ref20">27</xref>
        ] [
        <xref ref-type="bibr" rid="ref18">25</xref>
        ] [6], and especially the tutorials
[33] [
        <xref ref-type="bibr" rid="ref26">34</xref>
        ]. The Wrapping integration infrastructure is our
approach to run-time flexibility, with its run-time
context-aware decision processes and computational resources. The
basic idea is that Wrappings are Knowledge-Based
interfaces to the uses of computational resources in context,
and they are interpreted by processes that are themselves
resources.
      </p>
      <p>
        The basic idea starts with the “Problem Posing”
interpretation of programs [
        <xref ref-type="bibr" rid="ref20">27</xref>
        ], which replaces explicit
invocation of computational resources with an implicit request to
address a problem.
      </p>
      <p>Thus, programs interpreted in this style do not “call
functions”, “issue commands”, or “send messages”; they
“pose problems” (these are information service requests).
Program fragments are not written as “functions”,
“modules”, or “methods” that do things; they are written as
“resources” that can be “applied” to problems (these are
information service providers).</p>
      <p>Because we separate the problems from the applicable
resources, we can use more flexible mechanisms for
connecting them than simply using the same name.</p>
      <p>We have shown that this approach leads to some
interesting flexibilities, when combined with the “meta-reasoning”
approach of Wrappings [5], including such properties as
software reuse without source code modification, delaying
language semantics to run-time, and system upgrades by
incremental migration instead of version based replacement.</p>
      <p>We specifically want to make the mapping from
problems to resources explicit, because implicit mechanisms are
hard to study, so for Wrappings we use a Knowledge Base.</p>
      <p>The Wrapping integration infrastructure is defined by
its two complementary aspects, the Wrapping Knowledge
Bases and the Problem Managers.</p>
      <p>The Wrapping Knowledge Bases (WKBs) contain the
Wrappings that map problems to resources in context. They
define the entire set of problems that the system knows how
to treat (there are usually also default problems that catch the
ones otherwise not recognized). The mappings are problem-,
problem parameter-, and context-dependent.</p>
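      <p>A minimal illustration of such a mapping, with hypothetical problem names, context conditions, and resources (real WKBs are far richer, and are read by resources that are themselves Wrapped):</p>

```python
# Minimal sketch of a Wrapping Knowledge Base: an explicit, studiable
# mapping from (problem, context) to candidate resources. Problem names,
# context conditions, and resource names here are hypothetical.

WKB = [
    # (problem name, context predicate, resource name)
    ("display", lambda ctx: ctx.get("ui") == "text",    "render_ascii"),
    ("display", lambda ctx: ctx.get("ui") == "graphic", "render_plot"),
    ("default", lambda ctx: True,                       "report_unknown"),
]

def match_resources(problem, context):
    """First pass: all resources whose Wrapping fits problem and context."""
    hits = [r for (p, cond, r) in WKB if p == problem and cond(context)]
    # Default problems catch the ones otherwise not recognized.
    if not hits:
        hits = [r for (p, cond, r) in WKB if p == "default" and cond(context)]
    return hits

print(match_resources("display", {"ui": "text"}))   # ['render_ascii']
print(match_resources("simulate", {"ui": "text"}))  # ['report_unknown']
```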
      <p>
        The Problem Managers (PMs) are the programs that
read WKBs and select and apply resources to problems. The
meta-recursion follows because the PMs are also resources,
and are Wrapped in exactly the same way as other resources,
and are therefore available for the same flexible integration
as any resources. These systems therefore have no privileged
resource; anything can be replaced. Default PMs are
provided with any Wrapping implementation, but the defaults
can be superseded in the same way as any other resource.
These are the processes that replace the implicit invocation
process, allowing arbitrary processes to be inserted in the
middle of the resource invocation process. This choice leads
to very flexible systems [33] [
        <xref ref-type="bibr" rid="ref26">34</xref>
        ].
      </p>
      <p>The basic notion is the interaction of one very simple
loop, called the “Coordination Manager”, and a very simple
planner, called the “Study Manager”.</p>
      <p>The default Coordination Manager (CM) is responsible
for keeping the system going. It has only three repeated
steps, after an initial FC = Find Context step as shown in
Figure 1.</p>
      <p>To “Find Context” means to establish a context for
problem study, possibly by requesting a selection from a
user, but more often getting it explicitly or implicitly from
the system invocation. It is our placeholder for conversions
from that part of the system’s invocation environment that
is necessary for the system to represent into whatever internal
context structures are used by the system.</p>
      <p>[Figure 1: the CM steps (Find Context, then the cycle of Pose
Problem, Study Problem, Assimilate Results) interacting with the
SM steps (Match Resources, Resolve Resources, Select Resource,
Adapt Resource, Advise Poser, Apply Resource, Assess Results);
the Apply Resource step invokes the resource to do whatever it
does.]</p>
      <sec id="sec-2-10">
        <p>To “Pose Problem” means to get a problem to study from
the problem poser (a user or the system), which includes a
problem name and some problem data, and to convert it into
whatever kind of problem structure is used by the system
(we expect this is mainly by parsing of some kind).</p>
        <p>To “Study Problem” means to use an SM and the
Wrappings to study the given problem in the given context, and
to “Assimilate Results” means to use the result to affect
the current context, which may mean to tell the poser what
happened. Each step is a problem posed to the system by
the CM, which then uses the default SM to manage the
system’s response to the problem. The first problem, “Find
Context”, is posed by the CM in the initial context of “no
context yet”, or in some default context determined by the
invocation style of the program.</p>
        <p>The main purpose of the default CM is cycling through
the other three problems, which are posed by the CM in
the context found by the first step. This way of providing
context and tasking for the SM is familiar from many
interactive programming environments: the “Find context”
part is usually left implicit, and the rest is exactly analogous
to LISP’s “read-eval-print” loop, though with very different
processing at each step, mediated by one of the SMs. In
this sense, this CM is a kind of “heartbeat” that keeps the
system moving.</p>
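        <p>The default CM “heartbeat” might be sketched as follows, with pose() standing in for the whole problem-study machinery; all names here are illustrative assumptions, not the actual implementation:</p>

```python
# Sketch of the default Coordination Manager: an initial Find Context
# step, then a cycle of three posed problems. pose() is a stand-in for
# the full Wrapping problem-study machinery; names are illustrative.

def coordination_manager(pose, keep_going):
    # FC: establish the initial context (posed in "no context yet").
    context = pose("find context", None, None)
    # The heartbeat: cycle the remaining three problems in that context.
    while keep_going():
        problem = pose("pose problem", None, context)
        result = pose("study problem", problem, context)
        context = pose("assimilate results", result, context)
    return context

def demo_pose(problem, data, context):
    # Stand-in for the Wrapping machinery: each step just records itself.
    if context is None:
        return {"trace": [problem]}
    context["trace"].append(problem)
    return context

turns = iter([True, False])   # run the cycle exactly once
final = coordination_manager(demo_pose, lambda: next(turns))
print(final["trace"])
# ['find context', 'pose problem', 'study problem', 'assimilate results']
```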
        <p>If the Coordination Manager is the basic cyclic program
heartbeat, then the Study Manager is a planner that organizes
the resource applications. The CM and SM interact as shown
schematically in Figure 1.</p>
        <p>We have divided the “Study Problem” process into three
main steps: “Interpret Problem”, which means to find a
resource to apply to the problem; “Apply Resource”, which
means to apply the resource to the problem in the current
context; and “Assess Results”, which means to evaluate the
result of applying the resource, and possibly posing new
problems. We further subdivide problem interpretation into
five steps, which organize it into a sequence of basic steps
that we believe represent a fundamental part of problem
study and solution. These are implemented in the default
Study Manager (SM).</p>
        <p>To “Match Resources” is to find a set of resources that
might apply to the current problem in the current context. It
is intended to allow a superficial first pass through a possibly
large collection of Wrapping Knowledge Bases.</p>
        <p>To “Resolve Resources” is to eliminate those that do not
apply. It is intended to allow negotiations between the posed
problem and each Wrapping of the resource to determine
whether or not it can be applied, and make some initial
bindings of formal parameters of resources that still apply.</p>
        <p>To “Select Resource” is simply to make a choice of
which of the remaining candidate resources (if any) to use.</p>
        <p>To “Adapt Resource” is to set it up for the current
problem and problem context, including finishing all required
bindings.</p>
        <p>To “Advise Poser” is to tell the problem poser (who
could be a user or another part of the system) what is about
to happen, i.e., what resource was chosen and how it was
set up to be applied.</p>
        <p>To “Apply Resource” is to use the resource for its
information service, which either does something, presents
something, or makes some information or service available.</p>
        <p>To “Assess Results” is to determine whether the
application succeeded or failed, and to help decide what to do
next.</p>
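        <p>A minimal sketch of the default SM as a straight-line plan over these steps; the Wrapping class and its methods below are hypothetical stand-ins for real Wrapping Knowledge Base entries:</p>

```python
# Sketch of the default Study Manager pipeline. In a real Wrapping
# system every step is itself a posed problem; here each is a plain
# call, and the Wrapping class is a hypothetical stand-in.

class Wrapping:
    def __init__(self, name, problems, fn):
        self.name, self.problems, self.fn = name, problems, fn
    def matches(self, problem, context):
        return problem in self.problems          # superficial first pass
    def resolves(self, problem):
        return True                              # negotiation placeholder
    def adapt(self, problem, context):
        return lambda: self.fn(problem)          # bind formal parameters

def study_manager(problem, context, wkb):
    candidates = [w for w in wkb if w.matches(problem, context)]  # Match
    candidates = [w for w in candidates if w.resolves(problem)]   # Resolve
    if not candidates:
        return None                              # no applicable resource
    wrapping = candidates[0]                     # Select (default: first)
    call = wrapping.adapt(problem, context)      # Adapt
    print("about to apply", wrapping.name)       # Advise Poser
    result = call()                              # Apply Resource
    return ("success" if result is not None else "failure", result)  # Assess

wkb = [Wrapping("echo", {"greet"}, lambda p: p.upper())]
print(study_manager("greet", {}, wkb))   # ('success', 'GREET')
```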
        <p>Finally, we insist that every step in the above sequences
is actually a posed problem, and is treated in exactly the
same way as any other, which makes these sequences
“meta”-recursive [1]. That means that if we have any
knowledge at all that a different planner may be more appropriate
for the context and application at hand, we can use it (after
defining the appropriate context conditions), either to replace
the default SM when it is applicable, or to replace individual
steps of the SM, according to that context (which can be
selected at run time).</p>
        <p>Of course, we also have to have something to replace
or supersede. We have therefore provided default resources
for each of the CM and SM steps, to be used when no
other is selected to supersede it (as the above SM is the
default resource for the problem “Study Problem”). A simple
complication occurs with the default among many possible
resources for the “Select Resource” problem: we want to
allow other resources to be used, so we insist that the default
resource (which otherwise might just pick the first resource
on the list) not pick itself if there is another choice when it
is addressing the “Select Resource” problem.</p>
        <p>In addition, since the resources that read the WKB are
selected in context as is any other, the WKB can be
heterogeneous, with context determining which reader is used
for which format of Knowledge Base. This helps greatly for
implementing improvements to programs, since the new and
old formats can exist simultaneously, until the old format is
no longer needed.</p>
        <p>
          We have used these algorithms many times to explain
and implement autonomous and reflective agents and
systems [
          <xref ref-type="bibr" rid="ref21">28</xref>
          ] [
          <xref ref-type="bibr" rid="ref22">29</xref>
          ], and shown that they provide the appropriate
level of manageable flexibility and auditable integration.
The advantage in flexibility this approach provides over
other activity loops that have been proposed is that the SM
and CM steps are “meta”-steps, with posed problems for
the activities, allowing one further level of abstraction and
indirection when it is useful. There are a number of other
activity loops that we have seen described in various places
[5], especially the popular MAPE-K loop of autonomic
computing [
          <xref ref-type="bibr" rid="ref14">21</xref>
          ] [
          <xref ref-type="bibr" rid="ref17">24</xref>
          ] [3] [
          <xref ref-type="bibr" rid="ref33">40</xref>
          ], and we have shown that our
CM / SM meta-recursive interaction subsumes all of them.
The meta-interpretation style [1] of Wrappings [
          <xref ref-type="bibr" rid="ref20">27</xref>
          ] can of
course be applied to any of them to make them much more
flexible.
        </p>
        <p>We have implemented several different kinds of CMs
in addition to the simple default CM defined above. There
are CMs that short cut the reflection by calling the default
step resources directly, and fully recursive versions that have
extra levels of problem posing. Some of them are described
in other papers in the references.</p>
        <p>We have also used different SMs, beyond the default
one that tries only one resource: one SM tries all applicable
resources and returns with the first success, another tries
them all and evaluates them to return the best success,
and one collects all successes and summarizes. There are
also different kinds of SM steps. The Match and Resolve
resources that read XML WKBs are different from the ones
that read text only WKBs. A different Match or Resolve
might invoke a more sophisticated planner if there are no
matches. A different Select might choose all compatible
resources, then negotiate among them. Different versions of
apply, beyond the default function call, might send a request
message, or invoke an interpreter or other process. Another
one might simply add the resource to a configuration, instead
of invoking it.</p>
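        <p>The three alternative SMs just described differ only in how they traverse the candidate list. A schematic comparison, with candidates modeled as callables that return None on failure (an assumption of this sketch):</p>

```python
# Sketch of three alternative Study Manager policies over the same
# candidate resources. Each candidate is modeled as a zero-argument
# callable returning a result, or None for failure (our assumption).

def try_until_success(candidates):
    for apply in candidates:
        result = apply()
        if result is not None:
            return result          # return with the first success
    return None

def try_all_return_best(candidates, score):
    results = [r for r in (apply() for apply in candidates) if r is not None]
    return max(results, key=score, default=None)   # best success

def collect_all_successes(candidates):
    return [r for r in (apply() for apply in candidates) if r is not None]

cands = [lambda: None, lambda: 3, lambda: 7]
print(try_until_success(cands))                       # 3
print(try_all_return_best(cands, score=lambda r: r))  # 7
print(collect_all_successes(cands))                   # [3, 7]
```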
        <p>
          Wrapping-based systems support run-time decisions
about which resources to apply in the current context, both
at the application level (the resources that perform the
task at hand) and at the meta-level (the resources that are
used to select and organize the application level resources).
This flexibility does come with a cost, but there are also
mechanisms based on partial evaluation [
          <xref ref-type="bibr" rid="ref6">13</xref>
          ] [20] [
          <xref ref-type="bibr" rid="ref20">27</xref>
          ] [41]
[15] for removing any decisions that will be made the same
way every time, thus leaving the costs where the variabilities
need to be.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Self-Modeling Systems</title>
      <p>
        Our approach is related to the architectures of robots
and autonomous vehicles [2] [
        <xref ref-type="bibr" rid="ref30">37</xref>
        ], but we add some features
not computationally feasible at the beginning. Our systems
are Self-Adaptive [6] [9] [
        <xref ref-type="bibr" rid="ref16">23</xref>
        ], which means that they can
observe their own behavior, reason about it, and use the
results to adjust the behavior.
      </p>
      <p>
        We are especially enamored of Self-Modeling Systems
[
        <xref ref-type="bibr" rid="ref21">28</xref>
        ] [
        <xref ref-type="bibr" rid="ref22">29</xref>
        ], which have models of their own behavior, derived
from original specifications of intent (from the designers),
as modified by observation of actual behavior, because (in
this author’s paraphrase):
      </p>
      <p>
        No plan ... extends with any certainty beyond the
first contact with ... [reality] ([39], p. 92).
      </p>
      <p>
        These models are interpreted to produce the system’s
behavior. That is to say, the behavior, sometimes including
the interpreter itself, is the interpretation of the models (this
is not as hard as it seems [
        <xref ref-type="bibr" rid="ref28">42</xref>
        ] [
        <xref ref-type="bibr" rid="ref21">28</xref>
        ] [
        <xref ref-type="bibr" rid="ref24">31</xref>
        ]).
      </p>
      <p>The reason for self-modeling systems is to retain as
much flexibility in the operational system as possible, and to
allow the system to depart wildly from its original design
specifications (under carefully controlled or appropriately
identified situations, of course). Additionally, it allows the
system to examine itself, looking for anomalies:</p>
      <p>O wad some Power the giftie gie us / To see oursels
as ithers see us! [10]</p>
      <p>
        This architectural approach also supports the processes
that mitigate the effects of the “Get Stuck” Theorems, which
essentially say that any software-intensive system that makes
models of its environment and behavior will eventually need
to reorganize its knowledge structures. To that end, we
defined several mitigation processes in [
        <xref ref-type="bibr" rid="ref19">26</xref>
        ] [
        <xref ref-type="bibr" rid="ref25">32</xref>
        ], and this
paper is a description of a notional architecture in which to
implement the processes.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Architecture</title>
      <p>In this Section, we describe an architectural context for
the mitigation processes, using a Wrapping infrastructure to
provide the flexibility of operations that we want. In the next
Section, we describe the processes in a little more detail and
show how they might interact.</p>
      <p>A notional picture of the architecture under
consideration is in Figure 2.</p>
      <p>The top part of the picture is the usual Wrapping CM /
SM loop, accessing the WKB in context to apply resources
to posed problems, possibly also adjusting the context (and
allowing the context to be changed by the system
environment). This is the standard behavior of a Wrapping-based
system.</p>
      <p>The other processes in the main system, that is, the
ones in the application domain, are invoked by the SM, and
build and maintain the domain knowledge bases that are the
focus of the mitigation effort (though, of course, the same
mitigation processes apply equally well to the Wrapping
Knowledge Base, due to reflection).</p>
      <p>The mitigation processes BM, KnRef, and DKM collect
information from the choices made in the SM and adjust
the WKB, and they collect information from the domain
knowledge bases and adjust them also.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Description of Models Used</title>
      <p>There are three classes of models (processes): the
infrastructure models, such as the CM, SM, and other PMs;
the application models that perform the system’s objective
activity; and the mitigation models that sit behind that
activity and keep it running.</p>
      <p>[Figure 2: the Wrapping CM / SM loop with the WKB and
context, the domain processes and domain knowledge bases, and
the mitigation processes BM, KnRef, and DKM acting on the
behavior and resources.]</p>
      <sec id="sec-5-1">
        <title>5.1. Infrastructure Models</title>
        <p>The infrastructure has been described before, in
Section 2. It contains the default PMs, which are the various
flavors of SM (the default is a straight-line plan; others try
all resources until success, try all and combine, and so on).
These can be fully recursive or not.</p>
        <p>It contains the various flavors of CM used in the system,
such as the default simple loop and the Mud CM for
distributed components, and these can be recursive or not.
It contains the WKB and the context knowledge base, and
the reflective nature of Wrappings allows both of them to
be heterogeneous, since the knowledge base readers are also
resources, selected according to posed problems in the usual
way.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Fences Application Models</title>
        <p>These are the models that do the fences application
work. Each fence is responsible for one class of interfaces
(often just one interface).</p>
        <list list-type="bullet">
          <list-item><p>instrumentation: record summaries of data types and
volume for all data (or just a representative sample) crossing the
interface in either direction; in certain cases, the
content matters also;</p></list-item>
          <list-item><p>movement: for mobile fences, decisions about when
and where to move (perhaps to track down a problem
or just randomly spot check); this process clearly
needs a map of system interfaces;</p></list-item>
          <list-item><p>examination: build models of the data crossing the
interface (in both directions), and infer the actual protocol
with performance measurements; in the simplest
case, just measuring close approaches to
acceptability boundaries is enough of a model;</p></list-item>
          <list-item><p>retention: store these models in a system domain
knowledge base;</p></list-item>
          <list-item><p>communication: announce current status and
observed behavior, trends, and variability distributions;
announce to the system that there is an
issue in certain places (so that the context can be
changed to avoid them);</p></list-item>
          <list-item><p>cooperation: interact with other fences to discover
and isolate problems; apply a requested data throttle
at an interface;</p></list-item>
          <list-item><p>escalation: when a problem gets dangerous or large
or widespread enough (criteria supplied by the
designers), ask for help.</p></list-item>
        </list>
        <p>Each of these processes has relatively simple inputs, and
only the examination process has any difficult algorithms.
Most of the difficulty lies in the criteria for data extraction,
modeling, and escalation.</p>
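        <p>The instrumentation process, for example, amounts to keeping running summaries per interface and direction; this sketch invents all of its names and is not from the paper:</p>

```python
# Sketch of fence instrumentation: running summaries of data types and
# volume crossing an interface, in each direction. Names are invented.

from collections import Counter, defaultdict

class InterfaceSummary:
    def __init__(self):
        self.type_counts = defaultdict(Counter)  # direction -> type -> count
        self.volume = Counter()                  # direction -> total size

    def observe(self, direction, record_type, size):
        self.type_counts[direction][record_type] += 1
        self.volume[direction] += size

summary = InterfaceSummary()
summary.observe("in", "telemetry", 120)
summary.observe("in", "telemetry", 80)
summary.observe("out", "command", 40)
print(summary.volume["in"], summary.type_counts["in"]["telemetry"])  # 200 2
```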
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Mitigation Models</title>
        <p>
          These are the ones that do the mitigations [
          <xref ref-type="bibr" rid="ref24">31</xref>
          ] [
          <xref ref-type="bibr" rid="ref19">26</xref>
          ]:
        </p>
        <list list-type="bullet">
          <list-item><p>Behavior Mining</p></list-item>
          <list-item><p>Model Deficiency Analysis</p></list-item>
          <list-item><p>Knowledge Refactoring</p></list-item>
          <list-item><p>Dynamic Knowledge Management</p></list-item>
          <list-item><p>Constructive Forgetting</p></list-item>
          <list-item><p>Continual Contemplation</p></list-item>
        </list>
        <p>We describe the intent of each of these processes and how
they might interact.</p>
        <p>5.3.1. Behavior Mining. This process examines every
decision in context, recording the following data: the
problem with its parameters, the relevant context, and the
resource application selected and constructed.</p>
        <p>The relevant context is the set of context conditions that
were examined and succeeded for the selection. In fancier
cases, this will also include resources not selected for this
problem, and why (according to the context conditions that
eliminated them).</p>
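        <p>Such a decision record might be pictured as follows; the field names are our own illustration, not the paper’s format:</p>

```python
# Sketch of a single Behavior Mining record: the decision, the context
# conditions that were examined and succeeded, and (in fancier cases)
# the rejected alternatives. All field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    problem: str
    parameters: dict
    relevant_context: list        # conditions examined and succeeded
    resource_selected: str
    # fancier case: resource -> context condition that eliminated it
    rejected: dict = field(default_factory=dict)

rec = DecisionRecord(
    problem="display",
    parameters={"target": "status"},
    relevant_context=["ui == text"],
    resource_selected="render_ascii",
    rejected={"render_plot": "condition 'ui == graphic' failed"},
)
print(rec.resource_selected)   # render_ascii
```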
        <p>In general, problems are stratified into layers, based
on their resolution (in time, space, or concept). Strictly
speaking, this should be called semiotic content, but that
discussion leads beyond the scope of this paper.</p>
        <p>The BM process also examines changes to the domain
knowledge base, looking for infelicities, which can be
statistical or even esthetic (unbalanced trees, different amounts
of detail in different areas, and unique or very low frequency
references, which often result from specification errors). In
this case, the models are “plausibility” models, since there
is no available basis to declare them correct or incorrect.</p>
        <p>The BM process then feeds its results to the MDA
process for resolution.</p>
      </sec>
      <sec id="sec-5-4">
        <title>5.3.2. Model Deficiency Analysis. This process attempts</title>
        <p>to determine where models have gone wrong (this is a
retrospective analysis, not predictive). The simplest form of
this begins with a behavioral assertion about model effects
that has been violated.</p>
        <p>We expect the assertions to be provided initially by
the designers, since models are expected to provide some
information service, and the model creators decide what that
is. After all, there was a purpose for creating the model in
the first place, and these assertions define the designers’
expectations for it.</p>
        <p>Every behavioral assertion involves some of the
variables within the model (or some performance parameters).
In the most intrusive case, every change to any of those
variables causes the assertion to be re-evaluated (it is possible
to reduce this a little if it can be proven that the change
cannot make an assertion go from true to false).</p>
        <p>When an assertion fails, the hard part begins: the
assignment of blame, that is, how can the system decide where
the failure is and who did it. This is especially difficult
for assertions in the usual kinds of first-order logic, since
mathematically there is no such culprit.</p>
        <p>However, the assertions are not the only information
available to the MDA process. It also has access to the
component behavior models constructed by the BM process,
and sometimes it can use those to decide which part of an
assertion is less likely to be wrong. This assessment can
often be done by maintaining a reliability “reputation” for
each of the assertions, each of its components and each of
the processes that produce the variables that occur in those
components. The reputation of an assertion component is
enhanced by its success frequency (tempered by a measure
of how well the input data to each variable producer fits the
input assumptions).</p>
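        <p>One plausible form of this reputation bookkeeping, for the blame assignment; the weighting scheme here is our own assumption:</p>

```python
# Sketch of the reliability "reputation" used for blame assignment:
# each assertion component's reputation is its success frequency,
# tempered by how well its inputs fit the input assumptions.
# The weighting scheme is our own illustrative assumption.

class Reputation:
    def __init__(self):
        self.successes = 0.0
        self.trials = 0

    def record(self, succeeded, input_fit=1.0):
        # input_fit in [0, 1]: how well the input data fit the assumptions
        self.trials += 1
        if succeeded:
            self.successes += input_fit

    def value(self):
        return self.successes / self.trials if self.trials else 0.5  # prior

def least_reliable(components):
    """Blame the named component with the lowest reputation."""
    return min(components, key=lambda pair: pair[1].value())[0]

a, b = Reputation(), Reputation()
for _ in range(9):
    a.record(True)
a.record(False)                       # a succeeds 9 times out of 10
b.record(True); b.record(False); b.record(False)   # b succeeds 1 of 3
print(least_reliable([("sensor_model", a), ("clock_model", b)]))  # clock_model
```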
        <p>In addition, the MDA process gets warnings from the
BM process about models that may be incorrect due to
statistical or esthetic considerations. It tries to decide when
a strange structure is an error and when it is just strange.
Of course, it can’t really do that, mainly for undecideability
reasons, but it can discover certain kinds of problems and
announce the others to the system monitors to try to get
help. If no help is forthcoming, then the issue is simply
recorded as a problem that the MDA process cannot solve,
and if enough instances of those occur, it escalates the issue
to a deficiency warning.</p>
        <p>5.3.3. Knowledge Refactoring. This process is partly
housekeeping: re-arranging knowledge for efficiency. It
can be applied to the context conditions for the same
problem (e.g., to get the shortest average time for decision,
given the time distribution of conditions). It is also related
to the esthetic criteria in the BM process: if we consider
any knowledge representation mechanism as a graph with
labeled nodes and directed labeled edges, then nodes with
an excessive number of edges may be too general, and nodes
with too few edges may be too specific.</p>
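        <p>The node-degree criterion can be made concrete with a small sketch; the thresholds, node names, and edge labels below are hypothetical choices for illustration, not values used by the system:</p>

```python
# Flag nodes in a labeled knowledge graph whose total degree suggests
# they are too general (many edges) or too specific (very few edges).
from collections import defaultdict

def flag_degree_outliers(edges, too_general=5, too_specific=2):
    """edges: iterable of (source, edge_label, target) triples.
    Returns (possibly-too-general nodes, possibly-too-specific nodes)."""
    degree = defaultdict(int)
    for src, _label, dst in edges:
        degree[src] += 1
        degree[dst] += 1
    general = {n for n, d in degree.items() if d > too_general}
    specific = {n for n, d in degree.items() if d < too_specific}
    return general, specific

# A hub node linked to everything looks too general; nodes touched by
# only one edge look too specific.
edges = [("thing", "isa", t) for t in ("cat", "dog", "rock", "idea", "star", "law")]
edges.append(("cat", "eats", "fish"))
general, specific = flag_degree_outliers(edges)
print(general)  # {'thing'}
```

        <p>In practice the thresholds would themselves be tuned per application, since what counts as "excessive" depends on the representation in use.</p>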
        <p>This process is also sensitive to the access patterns
of the knowledge elements. We want elements that are
accessed very often to have shorter access paths, which can
distort any nice a priori ontology (the goal in this case is
not readability; it is efficiency; other semantics-preserving
refactorings may be needed for readability).</p>
        <p>This process is intended to be transparent to the users
of the knowledge bases, in the sense that their access
mechanisms remain exactly the same; only the resulting search
performance is affected.</p>
      </sec>
      <sec id="sec-5-5">
        <title>5.3.4. Dynamic Knowledge Management</title>
        <p>
          This process organizes the knowledge for quick reasoning; it subsumes
Constructive Forgetting (a mitigation from [
          <xref ref-type="bibr" rid="ref24">31</xref>
          ] [
          <xref ref-type="bibr" rid="ref19">26</xref>
          ]). We
called that one out explicitly because it is not normally
acceptable to throw knowledge away, but we have shown
that it is inevitable.
        </p>
        <p>The idea here is that there are different frequencies of
access for different parts of the knowledge base, and
rearranging the knowledge base can take that into account.</p>
        <p>
          The simplest version of this is to push the Least
Recently Used (LRU) knowledge elements off to longer
access paths (this is much the same process as defining
Huffman codes [19] [
          <xref ref-type="bibr" rid="ref29">36</xref>
          ] [
          <xref ref-type="bibr" rid="ref7">14</xref>
          ]). This measure of recency
can be weighted by importance of consequences and
gathered by relevant context (big context change implies much
knowledge re-ordering). There are also other relevant factors
that will be different for each application.</p>
        <p>5.3.5. Continual Contemplation. This process is the
general term for all of the concurrent background examination
processes: some examine as-is models (from Behavior
Mining) and compare them with as-designed models (from
the system definition). These functions are described earlier
in this Section.
        </p>
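        <p>The Huffman-code analogy can be sketched directly: treating access counts as symbol frequencies produces a tree in which heavily used knowledge elements get short access paths and rarely used ones drift deeper. The element names and counts below are hypothetical:</p>

```python
# Build a Huffman tree over access counts and report each element's
# depth, i.e. the length of its access path.
import heapq

def huffman_path_lengths(access_counts):
    """access_counts: {element: access frequency} -> {element: path length}."""
    if len(access_counts) == 1:
        return {next(iter(access_counts)): 1}
    # (count, tiebreak, members) entries; tiebreak keeps comparisons on ints
    heap = [(c, i, [name]) for i, (name, c) in enumerate(access_counts.items())]
    heapq.heapify(heap)
    depth = dict.fromkeys(access_counts, 0)
    tiebreak = len(heap)
    while len(heap) > 1:
        c1, _, g1 = heapq.heappop(heap)
        c2, _, g2 = heapq.heappop(heap)
        for name in g1 + g2:
            depth[name] += 1          # merged subtrees sink one level
        tiebreak += 1
        heapq.heappush(heap, (c1 + c2, tiebreak, g1 + g2))
    return depth

counts = {"hot_rule": 50, "warm_rule": 20, "cold_fact": 5, "stale_fact": 1}
paths = huffman_path_lengths(counts)
print(paths["hot_rule"], paths["stale_fact"])  # 1 3
```

        <p>Weighting the counts by importance of consequences, or resetting them on a big context change, fits directly into this scheme by adjusting the frequencies before the tree is rebuilt.</p>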
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions and Prospects</title>
      <p>We have argued that the flexibility of self-modeling
systems makes them well-suited to handling remote, complex,
and / or hostile environments, but that flexibility comes with
an obvious cost in performance, and a less obvious cost
in organization. How much of this infrastructure machinery
is used in any given application is an application-specific
engineering decision that depends on the expected level of
hazard and the required level of performance.</p>
      <p>We have shown that the Wrapping infrastructure
supports a very flexible interweaving of domain resources and
infrastructure resources, and can now include processes that
mitigate the effects of the “Get Stuck” Theorems, which
limit the lifetime of any autonomous system that builds and
maintains models.</p>
      <p>One of the most exciting prospects is that the system
has enough knowledge about its own behavior that it can
explain what it is doing, or what it is about to do, and why,
and most particularly, what it is not doing and why (the SM
manages the selection, so it has the data to explain what it
does not select). This requires the system to reason about
the situation it is in [4], so it can describe it.</p>
      <p>
        A key advance in the state of the reasoning processes
would be to provide tools for them to reason about
incomplete and inconsistent information [7] [8] [12], since that
describes essentially all of the system’s knowledge.
Similarly, various kinds of advanced learning methods [11] [
        <xref ref-type="bibr" rid="ref31">38</xref>
        ]
[
        <xref ref-type="bibr" rid="ref11">18</xref>
        ] [
        <xref ref-type="bibr" rid="ref10">17</xref>
          ] could be of great use in the model building processes,
since they avoid the need to specify a model type or
structure in advance.
      </p>
      <p>
        However, learning methods such as XCS [
        <xref ref-type="bibr" rid="ref36">43</xref>
        ] [
        <xref ref-type="bibr" rid="ref37">44</xref>
        ] have
little place here, since the behaviors in these systems
are not well modeled by MDP (Markov Decision Processes)
or even POMDP (Partially Observable MDP) in most cases,
and these methods are not model-free; they assume a state
transition model that can be described as a POMDP.
      </p>
      <p>We also described mobile fences, moving around in the
system or network, with the responsibility to explore, detect,
decide and act or escalate. If these fences are also
knowledgeable about more of the system behavioral expectations,
then they should be able to detect certain software errors,
in addition to external anomalies.</p>
      <p>We can also imagine some physical existence for “touch
points” (like the little blue police / watchman boxes for
periodic checking in), so the fence can monitor and examine
its own progress.</p>
      <p>We think that these systems can be made much safer
than they are now, but that requires an engineering
judgment based choice of how much protective infrastructure to
include.</p>
      <p>
        [6] Kirstie L. Bellman, Christopher Landauer, Phyllis Nelson, Nelly
Bencomo, Sebastian Götz, Peter Lewis and Lukas Esterle,
“Self-modeling and Self-awareness”, Chapter 9, pp. 279-304 in [
        <xref ref-type="bibr" rid="ref15">22</xref>
        ]
      </p>
      <p>[7] Leopoldo E. Bertossi, Anthony Hunter, Torsten Schaub (eds.),
Inconsistency Tolerance, Springer Lecture Notes in Computer Science,
Volume 3300, Springer Verlag (2004)</p>
      <p>[8] Jean-Yves Beziau, Walter Carnielli and Dov Gabbay (eds.),
Handbook of Paraconsistency, King’s College (2007)</p>
      <p>
        [9] Robert Birke, Javier Cámara, Lydia Y. Chen, Lukas Esterle, Kurt
Geihs, Erol Gelenbe, Holger Giese, Anders Robertsson and Xiaoyun
Zhu, “Self-aware Computing Systems: Open Challenges and Future
Research Directions”, Chapter 26, pp. 709-722 in [
        <xref ref-type="bibr" rid="ref15">22</xref>
        ]
      </p>
      <p>[10] Robert Burns, “To a Louse” (1786); Robert Burns in Your Pocket,
Waverley Press (2009); along with many other collections and web
sites</p>
      <p>
        [11] Jaime G. Carbonell, “Learning by Analogy: Formulating and
Generalizing Plans from Past Experience”, pp. 137-161 in [
        <xref ref-type="bibr" rid="ref31">38</xref>
        ]
      </p>
      <p>[12] Walter A. Carnielli, M. E. Coniglio and J. Marcos, “Logics of Formal
Inconsistency”, pp. 15-107 in [16]</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>Harold</given-names>
            <surname>Abelson</surname>
          </string-name>
          ,
          <article-title>Gerald Sussman, with Julie Sussman, The Structure and Interpretation of Computer Programs</article-title>
          , Bradford Books,
          <string-name>
            <surname>now MIT</surname>
          </string-name>
          (
          <year>1985</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>James S. Albus</surname>
          </string-name>
          ,
          <string-name>
            <surname>Alexander M. Meystel</surname>
          </string-name>
          , Engineering of Mind:
          <article-title>An Introduction to the Science of Intelligent Systems</article-title>
          , Wiley (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>Paolo</given-names>
            <surname>Arcaini</surname>
          </string-name>
          , Elvinia Riccobene, Patrizia Scandurra, “
          <article-title>Modeling and Analyzing MAPE-K Feedback Loops for Self-Adaptation”</article-title>
          ,
          <source>Proceedings SEAMS</source>
          <year>2015</year>
          :
          <article-title>The 2015</article-title>
          <source>IEEE/ACM 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems</source>
          ,
          <volume>18</volume>
          -
          <fpage>19</fpage>
          May
          <year>2015</year>
          , Florence, Italy (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>Jon</given-names>
            <surname>Barwise</surname>
          </string-name>
          ,
          <source>The Situation in Logic, CSLI Lecture Notes No. 17</source>
          ,
          <source>Center for the Study of Language and Information</source>
          , Stanford U. (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Dr. Kirstie L. Bellman</surname>
          </string-name>
          , Dr. Christopher Landauer, Dr. Phyllis R. Nelson, “
          <article-title>Managing Variable and Cooperative Time Behavior”</article-title>
          ,
          <source>Proceedings SORT 2010: The First IEEE Workshop on Self-Organizing Real-Time Systems</source>
          , 05 May, part of ISORC 2010:
          <article-title>The 13th</article-title>
          IEEE International Symposium on Object/component/service-oriented
          <source>Realtime distributed Computing</source>
          ,
          <volume>05</volume>
          -
          <fpage>06</fpage>
          May
          <year>2010</year>
          , Carmona, Spain (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Consel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Danvy</surname>
          </string-name>
          , “Tutorial Notes on Partial Evaluation”, p.
          <fpage>493</fpage>
          -
          <lpage>501</lpage>
          <source>in Proceedings PoPL 1993: The 20th ACM Symposium on Principles of Programming Languages</source>
          ,
          <fpage>10</fpage>
          -
          <lpage>13</lpage>
          January 1993,
          <article-title>Charleston</article-title>
          ,
          <string-name>
            <surname>SC</surname>
          </string-name>
          (
          <year>January 1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Thomas</surname>
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Cormen</surname>
          </string-name>
          , Charles E. Leiserson, Ronald L.
          <string-name>
            <surname>Rivest</surname>
          </string-name>
          , and Clifford Stein, Introduction to Algorithms, MIT Press (
          <year>1990</year>
          )
          <article-title>Second Edition</article-title>
          ,
          <string-name>
            <surname>McGraw-Hill</surname>
          </string-name>
          (
          <year>2001</year>
          ),
          <source>Section 16.3</source>
          , pp.
          <fpage>385</fpage>
          -
          <lpage>392</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>Marcus</given-names>
            <surname>Denker</surname>
          </string-name>
          , Orla Greevy, Michele Lanza, “
          <article-title>Higher Abstractions for Dynamic Analysis”</article-title>
          , pp.
          <fpage>32</fpage>
          -
          <lpage>38</lpage>
          <source>in Proceedings PCODA'2006: the 2nd International Workshop on Program Comprehension through Dynamic Analysis</source>
          ,
          <source>Technical report 2006-11</source>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>D.</given-names>
            <surname>Gabbay</surname>
          </string-name>
          , F. Guenthner (eds.),
          <source>Handbook of Philosophical Logic</source>
          , vol.
          <volume>14</volume>
          ,
          <string-name>
            <surname>Reidel</surname>
          </string-name>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Ian</surname>
            <given-names>Goodfellow</given-names>
          </string-name>
          , Yoshua Bengio, Aaron Courville, Deep Learning, MIT (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Trevor</surname>
            <given-names>Hastie</given-names>
          </string-name>
          , Robert Tibshirani, Jerome Friedman,
          <source>The Elements of Statistical Learning: Data Mining, Inference, and Prediction</source>
          , Springer (
          <year>2009</year>
          ), 2nd ed. Springer (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>David A.</given-names>
            <surname>Huffman</surname>
          </string-name>
          , “
          <article-title>A Method for the Construction of Minimum-Redundancy Codes”</article-title>
          ,
          <source>Proceedings of the IRE</source>
          , v.
          <volume>40</volume>
          , no.
          <issue>9</issue>
          , p.
          <fpage>1098</fpage>
          -
          <lpage>1101</lpage>
          (
          <year>1952</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>N. D.</given-names>
            <surname>Jones</surname>
          </string-name>
          , “Partial Evaluation”,
          <string-name>
            <surname>Computing</surname>
            <given-names>Surveys</given-names>
          </string-name>
          , Volume
          <volume>28</volume>
          , No.
          <volume>3</volume>
          (
          <year>September 1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kephart</surname>
          </string-name>
          , “
          <article-title>Feedback on feedback in autonomic computing systems”</article-title>
          ,
          <source>in Proceedings FC 2012: the 7th International Workshop on Feedback Computing</source>
          , San Jose, California (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Samuel</surname>
            <given-names>Kounev</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jeffrey O. Kephart</surname>
          </string-name>
          , Aleksandar Milenkoski, Xiaoyun Zhu (eds.),
          <source>Self-Aware Computing Systems</source>
          , Springer (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Samuel</surname>
            <given-names>Kounev</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Lewis</surname>
          </string-name>
          , Kirstie L. Bellman, Nelly Bencomo, Javier Cámara, Ada Diaconescu, Lukas Esterle, Kurt Geihs, Holger Giese, Sebastian Götz, Paola Inverardi, Jeffrey O.
          <article-title>Kephart and Andrea Zisman, “The Notion of Self-aware Computing”, Chapter 1</article-title>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>16</lpage>
          in [22]
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Philippe</surname>
            <given-names>Lalanda</given-names>
          </string-name>
          ,
          <article-title>Julie A</article-title>
          .
          <string-name>
            <surname>McCann</surname>
            ,
            <given-names>and Ada</given-names>
          </string-name>
          <string-name>
            <surname>Diaconescu</surname>
          </string-name>
          ,
          <source>Autonomic Computing: Principles, Design and Implementation</source>
          , Undergraduate Topics in Computer Science Series, Springer (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Christopher</surname>
            <given-names>Landauer</given-names>
          </string-name>
          , “
          <article-title>Infrastructure for Studying Infrastructure”</article-title>
          ,
          <source>Proceedings ESOS 2013: Workshop on Embedded Self-Organizing Systems, 25 June</source>
          <year>2013</year>
          , San Jose, California; part of 2013
          <source>USENIX Federated Conference Week</source>
          ,
          <fpage>24</fpage>
          -
          <lpage>28</lpage>
          June 2013, San Jose, California (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Christopher</surname>
            <given-names>Landauer</given-names>
          </string-name>
          , “
          <article-title>Mitigating the Inevitable Failure of Knowledge Representation”</article-title>
          ,
          <source>Proceedings M@RT@ICAC 2017: The 2nd International Workshop on Models@run.time for Self-aware Computing Systems, Part of ICAC2017: The 14th International Conference on Autonomic Computing</source>
          ,
          <fpage>17</fpage>
          -
          <issue>21</issue>
          <year>July 2017</year>
          , Columbus, Ohio (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Christopher</surname>
            <given-names>Landauer</given-names>
          </string-name>
          , Kirstie L. Bellman, “
          <article-title>Generic Programming, Partial Evaluation, and a New Programming Paradigm”, Chapter 8</article-title>
          , pp.
          <fpage>108</fpage>
          -
          <lpage>154</lpage>
          in Gene McGuire (ed.),
          <source>Software Process Improvement</source>
          , Idea Group Publishing (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Christopher</surname>
            <given-names>Landauer</given-names>
          </string-name>
          , Kirstie L. Bellman, “
          <string-name>
            <surname>Self-Modeling</surname>
            <given-names>Systems</given-names>
          </string-name>
          ”, pp.
          <fpage>238</fpage>
          -
          <lpage>256</lpage>
          in R. Laddaga, H. Shrobe (eds.), “
          <string-name>
            <surname>Self-Adaptive Software</surname>
          </string-name>
          ”,
          <source>Springer Lecture Notes in Computer Science</source>
          , vol.
          <volume>2614</volume>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Christopher</surname>
            <given-names>Landauer</given-names>
          </string-name>
          , Kirstie L. Bellman, “
          <article-title>Managing Self-Modeling Systems”</article-title>
          , in R. Laddaga, H. Shrobe (eds.),
          <source>Proceedings Third International Workshop on Self-Adaptive Software</source>
          ,
          <fpage>09</fpage>
          -
          <lpage>11</lpage>
          Jun 2003, Arlington,
          <string-name>
            <surname>VA</surname>
          </string-name>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Christopher</surname>
            <given-names>Landauer</given-names>
          </string-name>
          , Kirstie L. Bellman, “
          <article-title>Model-Based Cooperative System Engineering and Integration”</article-title>
          ,
          <source>Proceedings SiSSy 2016: 3rd Workshop on Self-Improving System Integration, 19 July</source>
          <year>2016</year>
          ,
          <source>part of ICAC2016: 13th IEEE International Conference on Autonomic Computing</source>
          ,
          <fpage>19</fpage>
          -
          <issue>22</issue>
          <year>July 2016</year>
          , Wuerzburg, Germany (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [31]
          <string-name>
            <surname>Christopher</surname>
            <given-names>Landauer</given-names>
          </string-name>
          , Kirstie L. Bellman, “
          <article-title>Self-Modeling Systems Need Models at Run Time”</article-title>
          ,
          <source>Proceedings M@RT 2016: The 11th International Workshop on Models@run.time, 04 October</source>
          <year>2016</year>
          ,
          <source>Part of ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems</source>
          .
          <volume>02</volume>
          -
          <issue>07</issue>
          <year>October 2016</year>
          ,
          <article-title>Palais du Grand Large</article-title>
          , Saint Malo, Brittany, France (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [32]
          <string-name>
            <surname>Christopher</surname>
            <given-names>Landauer</given-names>
          </string-name>
          , Kirstie L. Bellman, “
          <article-title>An Architecture for Self-Awareness Experiments”</article-title>
          ,
          <source>Proceedings SeAC 2017: 2nd Workshop on Self-Aware Computing, Part of ICAC2017: The 14th International Conference on Autonomic Computing</source>
          ,
          <fpage>17</fpage>
          -
          <issue>21</issue>
          <year>July 2017</year>
          , Columbus, Ohio (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [34]
          <string-name>
            <surname>Dr</surname>
          </string-name>
          . Christopher Landauer, Dr. Kirstie L. Bellman, Dr. Phyllis R. Nelson, “Wrapping Tutorial:
          <article-title>How to Build Self-Modeling Systems”</article-title>
          ,
          <source>Proceedings SASO 2012: The 6th IEEE Intern. Conf. on Self-Adaptive and Self-Organizing Systems</source>
          ,
          <volume>10</volume>
          -
          <fpage>14</fpage>
          Sep 2012, Lyon, France (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <given-names>Dr. Christopher</given-names>
            <surname>Landauer</surname>
          </string-name>
          , Dr. Kirstie L. Bellman, Dr. Phyllis R. Nelson, “Wrapping Tutorial:
          <article-title>How to Build Self-Modeling Systems”</article-title>
          ,
          <source>Proceedings CogSIMA</source>
          <year>2013</year>
          : 2013
          <string-name>
            <given-names>IEEE</given-names>
            <surname>Intern. Inter-Disciplinary Conf</surname>
          </string-name>
          .
          <source>Cognitive Methods for Situation Awareness and Decision Support</source>
          ,
          <fpage>25</fpage>
          -28
          <source>February</source>
          <year>2013</year>
          , San Diego, California (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [42]
          <string-name>
            <surname>Dr</surname>
          </string-name>
          . Christopher Landauer, Dr. Kirstie L. Bellman, Dr. Phyllis R. Nelson, “
          <article-title>Modeling Spaces for Real-Time Embedded Systems”</article-title>
          ,
          <source>Proceedings SORT 2013: The Fourth IEEE Workshop on Self-Organizing Real-Time Systems, 20 June</source>
          <year>2013</year>
          , part of ISORC 2013:
          <article-title>The 16th IEEE International Symposium on Object / component / service-oriented Real-time distributed</article-title>
          <string-name>
            <surname>Computing</surname>
          </string-name>
          ,
          <volume>19</volume>
          -
          <fpage>21</fpage>
          Jun 2013, Paderborn, Germany (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [36]
          <string-name>
            <surname>Jan</surname>
            <given-names>Van Leeuwen</given-names>
          </string-name>
          , “
          <article-title>On the construction of Huffman trees”</article-title>
          , p.
          <fpage>382</fpage>
          -
          <lpage>410</lpage>
          <source>in Proceedings ICALP 1976: the Third International Colloquium on Automata, Languages and Programming</source>
          ,
          <volume>20</volume>
          -
          <issue>23</issue>
          <year>July 1976</year>
          ,
          <string-name>
            <surname>Edinburgh</surname>
          </string-name>
          (
          <year>1976</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [37]
          <string-name>
            <surname>Alexander</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Meystel</surname>
          </string-name>
          , James S. Albus,
          <source>Intelligent Systems: Architecture</source>
          , Design, and Control, Wiley (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [38]
          <string-name>
            <surname>Ryszard</surname>
            <given-names>S</given-names>
          </string-name>
          . Michalski, Jaime G. Carbonell, Tom M. Mitchell (eds.),
          <source>Machine Learning: An Artificial Intelligence Approach</source>
          , Tioga Press, Palo Alto, CA (
          <year>1983</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <string-name>
            <given-names>Helmuth</given-names>
            <surname>Karl Bernhard Graf von Moltke</surname>
          </string-name>
          ,
          <article-title>On Strategy (in German), translated in Daniel J. Hughes and Harry Bell, Moltke on the Art of War: Selected Writings</article-title>
          , Presidio Press (
          <year>1993</year>
          ); paperback Presidio Press (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>E.</given-names>
            <surname>Rutten</surname>
          </string-name>
          , “
          <article-title>Feedback Control as MAPE-K loop in Autonomic Computing”</article-title>
          ,
          <source>Research Report RR-8827</source>
          ,
          <string-name>
            <given-names>INRIA</given-names>
            <surname>Sophia</surname>
          </string-name>
          Antipolis - Méditerranée, INRIA Grenoble - Rhône-Alpes (
          <issue>10</issue>
          <year>Dec 2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          <string-name>
            <surname>Gregory</surname>
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Sullivan</surname>
          </string-name>
          , “
          <article-title>Dynamic Partial Evaluation”</article-title>
          ,
          <source>Proceedings PADO II: Second Symposium on Programs as Data Objects</source>
          ,
          <fpage>21</fpage>
          -
          <lpage>23</lpage>
          May
          <year>2001</year>
          , Aarhus, Denmark (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          <string-name>
            <given-names>Ken</given-names>
            <surname>Thompson</surname>
          </string-name>
          , “Reflections on Trusting Trust”,
          <source>Comm. of the ACM</source>
          , vol.
          <volume>27</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>761</fpage>
          -
          <lpage>763</lpage>
          (
          <year>Aug 1984</year>
          ), http://dl.acm.org/citation.cfm?id=358210 (availability last checked 03 Apr 2017); see also the “back door” entry of “The Jargon File”, widely available on the Web, and other comments findable by searching for “back door Ken Thompson moby hack” (availability last checked 03 Apr 2017)
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [43]
          <string-name>
            <surname>Stewart</surname>
            <given-names>W</given-names>
          </string-name>
          . Wilson, “
          <source>Classifier Fitness Based on Accuracy”</source>
          , Evolutionary Computation, v.
          <volume>3</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>149</fpage>
          -
          <lpage>175</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [44]
          <string-name>
            <surname>Stewart</surname>
            <given-names>W</given-names>
          </string-name>
          . Wilson, “
          <article-title>Generalization in the XCS Classifier System”</article-title>
          , pp.
          <fpage>665</fpage>
          -
          <lpage>674</lpage>
          in John R. Koza et al. (eds.),
          <source>Proceedings GP 1998: the Third Annual Conference on Genetic Programming</source>
          ,
          <fpage>22</fpage>
          -
          <issue>25</issue>
          <year>July 1998</year>
          , University of Wisconsin, Madison, Wisconsin (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>