                                    What I Want in a (Computational) Partner

                                                               Christopher Landauer
                                        Topcy House Consulting, Thousand Oaks, California, 91362
                                                         topcycal@gmail.com



Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


Abstract

This is a position paper about some stringent conditions that we should expect any computational partners to satisfy before we are willing to trust them with non-trivial tasks, and the methods we expect to use to build systems that satisfy them.

We have implemented many of the methods we describe to demonstrate some of these properties before, but we have not collected them into a single implementation where they can reinforce or interfere with each other. This paper is a preliminary description of a design of such an implementation, which we are in the process of defining and constructing.


Introduction

The structure of the paper is as follows: it will start with some context and a list of relevant tasks and locations for which computer assistance could be extremely helpful.

Then comes the first main part of the paper: the set of characteristics and responsibilities that should be expected. These include predictability (we can know what it is likely to do), interpretability (we can follow what it is doing), and explainability (we can understand why it did whatever it did), guarantees of behavior, graceful degradation, reputation and trust management, and continual improvement.

The second main part of the paper is about the “enablers”, that is, the methods and approaches that we think will lead to systems that can satisfy all of these expectations to the extent required (there is, of course, an application-dependent engineering decision about how much of each of these properties is needed). The chief enablers are Computational Reflection, Model-Based Operation, and Speculative Simulation, each of which will be explained in that part of the paper.

The paper follows this description with a list of problems, that is, major difficulties with these application areas that we think every computing system will have to address if it is to be used in that environment. We are sure that this list is far from complete, and will likely never be complete, but we are expecting that it is complete enough to proceed.

Finally, the paper closes with a few notes about our conclusions and prospects. In the interests of space, it does not discuss the integration of models and components that is well-described in earlier papers on Self-Awareness and Self-Modeling Landauer and Bellman (1999) Landauer and Bellman (2002) Landauer (2013).

We are not at all suggesting that this is the only way to address these issues, or even that these are the only important issues; only that we think that all of these issues are important and must be addressed, and that this approach has gained enough coherence and completion, with supporting computational methods, to attempt an implementation.

There is a large literature in multiple communities that addresses some of these problems: organic computing Würtz (2008) Müller-Schloer et al. (2011), which is largely about the system qualities needed to enable collections of largely autonomous systems to be effective in complex real-world environments; interwoven and self-integrating systems Bellman et al. (2021), which are concerned with the difficult and dynamic boundaries between cooperating systems, and how much a system needs to know about its own behaviors and capabilities to integrate effectively into a team; and even explainable artificial intelligence, though that is currently largely limited to explaining a few classes of learning algorithms.


Context

The context of this work is a software-managed system (also called constructed complex system, cyber-physical system, technical system) in a difficult environment: complex, dynamic, remote, hazardous, malicious or even actively hostile, or all of the above. These systems will go places we cannot (or should not) go, or provide physical or computational assistance when we do.


Relevant Tasks and Locations

The kinds of tasks we are expecting to be relevant are quite varied:

• search and rescue (remember also that dogs and other animals may be helping);

• medical maintenance, prevention and intervention, emergencies;
• IED (Improvised Explosive Devices) and other bombs, also biological, chemical, or radiation threat detection;

• shelter / road / bridge / space station construction;

• scientific exploration; and

• cooperative / competitive games.

We are not so much interested in autonomous vehicles, not because that area is not important or prominent today, but because we are waiting for it to have a purpose beyond taking people where they want to go (unless that counts as a competitive game in a hostile environment).

The potential locations in which these activities might take place are also varied:

• inside a building or other structure, perhaps collapsed and / or unstable;

• outside buildings in an urban area;

• in a remote wilderness area (forest and jungle, hill and canyon, ice and rocks and tundra);

• airborne, surface maritime, or undersea;

• active conflict zone; and

• off-planet, either on another planet or in space.

Some of these tasks are obviously more relevant to some of these locations than others, and we do not expect any system to manage all (or even more than one or two) of them the way we generally expect humans to be able to do.


Characteristics and Responsibilities

The first main point in this paper is our current list of the desired and / or expected characteristics and responsibilities of such a system:

• predictability and interpretability and explainability;

• verification and validation;

• flexibility vs assurance;

• reputation and trust management;

• continual improvement; and

• power assist.

To our mind, some of the most important properties of a system are its predictability (we can know for our own planning what it is likely to do), interpretability (we can recognize and follow what it is doing), and explainability (we can understand why it did whatever it did and not something else). These properties allow us to build a viable mental model of what that system is, which we can (indeed, must) use for our own planning. An important aspect of these properties is the notion of an “audit trail”, that is, a description of what was and was not done and why (essential elements of the decision process that made the choice and the information used to do so), with the implications of making alternative choices (this is much more than the usual kind of audit trail that records only what was done). It is also important to be able to know what can be expected in different situations, before those situations occur.
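As a purely illustrative sketch (ours, not something specified in this paper), one concrete shape for such an audit trail is a record per decision that keeps what was chosen, the information used, the alternatives that were not chosen and why, and the expected implications of each; all of the field names below are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class AuditEntry:
        """One decision in the richer audit trail: what was and was not done, and why."""
        timestamp: datetime
        decision: str                 # the action that was chosen
        rationale: str                # the essential reason it was chosen
        inputs_used: dict             # information consulted by the decision process
        rejected: dict = field(default_factory=dict)      # alternative -> why it was not chosen
        implications: dict = field(default_factory=dict)  # option -> predicted consequence

    # Example: record a navigation choice, including the road not taken.
    entry = AuditEntry(
        timestamp=datetime.now(),
        decision="reroute around the collapsed corridor",
        rationale="corridor marked unstable by the structural model",
        inputs_used={"structural_model_version": 17, "sensor_sweep": "lidar 0312"},
        rejected={"continue on planned route": "predicted collapse risk above threshold"},
        implications={"reroute": "adds four minutes", "continue": "possible loss of platform"},
    )

Such entries can then be summarized at whatever level of detail an explanation needs.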
These explanations will necessarily involve multiple levels of detail, not only the usual multiple time and space scopes (range of consideration) and scales (resolution detail), but also multiplicity and granularity in the semantic domain detail (which could be a superficial summary or more detailed descriptive terminology; each is useful for different purposes).

We expect these systems to undergo verification and validation (see any Software Engineering book for these terms) as usual at design / test / deployment time, but also continually at run time (not just to satisfy requirements, which is verification, but also to satisfy expectations, which is validation).

A perennial trade-off in system design is flexibility vs. assurance. The issue is to balance the flexibility of adaptive behavior with the assurance of (semi-)predictable behavior. We are strongly on the side of emphasizing flexibility, with explicit processes that limit it when appropriate. This requires behavioral constraint management, to account for guarantees of behavior, both safety and liveness (these terms and related ones are defined in Alpern and Schneider (1986)).

If the environment stays within the specified constraints, then the system will behave as advertised (this is called an “assumption-guarantee” model in Kwiatkowska et al. (2010) and Chilton et al. (2013); this model has been studied for quite some time).
We want the system to exhibit graceful degradation, which means that if the environment does not stay within the expected constraints, then the system will only gradually lose capability (up to a certain catastrophic level of pernicious behavior). In this sense, we want the system to be robust and resilient. A system is robust if it can maintain function in the face of disruptive efforts or effects. A system is resilient if it can restore function in the aftermath of a disruption. These definitions are of course dependent on the meanings of “maintain” and “restore” (and “disruption”), but those meanings can be turned into graded measurements, both for how much a system is disrupted and how much and how quickly and how well it recovers. They are also clearly task- and location-specific.
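As one illustration of how those meanings could become graded measurements (a sketch of ours, not a definition from this paper), suppose system function is logged as a fraction of its baseline over time; then robustness can be read off as the worst fraction retained during a disruption, and resilience as how quickly and how completely that fraction is restored. The scalar performance measure and the recovery threshold are assumptions.

    def robustness(trace, baseline=1.0):
        """Graded robustness: the worst fraction of baseline function retained during a disruption."""
        return min(trace) / baseline

    def resilience(trace, baseline=1.0, recovered=0.95):
        """Graded resilience: steps from the worst point until function is back above the
        recovery threshold (None if it never recovers), and the final fraction restored."""
        worst = trace.index(min(trace))
        steps = next((i - worst for i in range(worst, len(trace))
                      if trace[i] >= recovered * baseline), None)
        return steps, trace[-1] / baseline

    # Example: a disruption drops function to 40% of baseline; recovery reaches 97% four steps later.
    trace = [1.0, 0.8, 0.4, 0.55, 0.7, 0.9, 0.97]
    print(robustness(trace))    # 0.4       -> how much function was maintained
    print(resilience(trace))    # (4, 0.97) -> how quickly and how well it was restored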
Another area where design engineering often fails is in the notion of “appropriate efficiency”. We know that efficiency is the enemy of robustness, and we are expecting this system to be extremely robust. We are willing to spend a little efficiency to gain that robustness.
Reputation and trust management are also essential. Not only do we want the system to do the right thing, it must be trusted that it does the right thing, that it will do the right thing under increasingly varied and difficult circumstances, and that it will tell us when it cannot, hopefully before that happens. The system should provide deficiency and degradation announcements, so we know what we can expect. There also needs to be a method for new role negotiation, based on a system’s current or projected degraded or enhanced capability. The mechanisms and processes of establishing and maintaining trust are well-studied in the literature, but we are also expecting the system to have notions of trust of its own behavior, something like a self-assessment of reliability (e.g., if it tries to perform a certain action, will that action occur?).
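A minimal sketch of that kind of self-assessment (ours; trust models in the literature are far richer) is to track, per action, how often an attempted action actually occurred, and to report the smoothed success rate as the system’s confidence in its own behavior.

    from collections import defaultdict

    class ReliabilitySelfAssessment:
        """Tracks, per action, how often attempting the action actually made it occur."""

        def __init__(self):
            self.attempts = defaultdict(int)
            self.successes = defaultdict(int)

        def record(self, action, occurred):
            self.attempts[action] += 1
            self.successes[action] += int(occurred)

        def confidence(self, action):
            # Laplace-smoothed success rate: a neutral 0.5 when there is no history yet.
            return (self.successes[action] + 1) / (self.attempts[action] + 2)

    # The system could include these figures in its deficiency and degradation announcements.
    self_trust = ReliabilitySelfAssessment()
    self_trust.record("close gripper", occurred=True)
    self_trust.record("close gripper", occurred=False)
    print(self_trust.confidence("close gripper"))   # 0.5 after one success and one failure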
There are some other aspects of the system that are not as important to us at first, but that will become important if we want to use these systems.

There should be a way for human operators to over-ride almost anything, with varying levels of justification effort required. Of course, that can become problematic in time- and life-critical decisions. There should be a kind of “power assist” mode (as in power steering and power brakes in automobiles), where the system lets its operators drive it under certain circumstances, with little or no decision making (just providing low level support to operator selected actions). This becomes even more useful if we instrument everything and analyze it later, to suggest operating improvements.

Finally, we expect the system to analyze itself for continual improvement along several paths. It will be improving current behaviors by streamlining behavior combinations: if decisions are made the same way every time, they can be compiled out of the code until the relevant part of the environment changes (this process is called “partial evaluation” Jones (1996); it can be very useful in conjunction with a process that watches for relevant environmental changes).
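The sketch below is only a crude stand-in for that streamlining (memoizing a decision on the relevant part of the environment, rather than true partial evaluation in the sense of Jones (1996)), but it shows the shape of the idea: a decision that keeps coming out the same way is computed once and reused until a watcher notices that the relevant part of the environment has changed. The function and key names are ours.

    def specialize(decide, relevant_keys):
        """Reuse a decision until the relevant part of the environment changes
        (a simple stand-in for partial evaluation plus an environment watcher)."""
        cache = {"snapshot": None, "result": None}

        def specialized(env):
            snapshot = tuple(env.get(k) for k in relevant_keys)
            if snapshot != cache["snapshot"]:     # the watched part of the environment moved
                cache["snapshot"] = snapshot
                cache["result"] = decide(env)     # recompute only then
            return cache["result"]

        return specialized

    # Example: a route choice that depends only on which corridors are blocked.
    choose_route = specialize(
        lambda env: "corridor B" if env["blocked"] else "corridor A",
        relevant_keys=["blocked"],
    )
    print(choose_route({"blocked": False, "battery": 71}))   # computed: corridor A
    print(choose_route({"blocked": False, "battery": 64}))   # reused: battery is not watched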
We do not by any means think that these are all the important properties, but we think they are enough to make system behavior more amenable to difficult applications.


Enablers

The second main point of this paper is that we believe that all of this is feasible: there is a set of enablers that we believe can supply these aforementioned properties. They have played a prominent role in recent work in Self-Aware and Self-Adaptive Systems (see Lewis et al. (2016), Bellman et al. (2017), Kounev et al. (2017), Bellman et al. (2021), Bellman et al. (2020)), and we are using several results and approaches from that area in this design.

Among the most important ones we have used are Computational Reflection, Model-Based Operation, and Speculative Simulation (there are others, but these are the three we wish to discuss here).

A Computationally Reflective system has access to all of its own internal computation and decision processes, can reason about its capabilities and behavior, and can change that behavior when and as appropriate (see Kiczales et al. (1991) Buschmann (1996) for a description of reflection, or Landauer and Bellman (1999) for a description of our approach). In addition to whatever external expectations there are, it engages in a process we call “continual contemplation”, examining its own activity for anomalies and potential improvements.

For our purposes, the easiest way to do this is through models (this is “Model-Based Operation”). As we use the term, “model-based operation” is more than model-based design or engineering (as in Schmidt (2006)), which makes extensive use of modeling during system design and development. All system knowledge and processes are maintained as models, which can be examined as part of the decision processes, and exercised or interpreted to produce the system’s behaviors. That way, when the system changes the models, it changes its own behavior. In some applications requiring extreme flexibility, even the model interpreter can be one of the models, so the very notation in which the system is written can change.
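A minimal sketch of that idea (ours, and far simpler than the Wrappings infrastructure mentioned below): behavior is kept as a model, a data structure the system can inspect and edit, and an interpreter produces behavior from it, so editing the model is editing the behavior. All names are hypothetical.

    # The behavior model is data: condition -> name of the action to take.
    behavior_model = {
        "obstacle_ahead": "stop_and_scan",
        "path_clear": "advance",
    }

    actions = {
        "stop_and_scan": lambda: "stopping and scanning",
        "advance": lambda: "advancing",
        "retreat": lambda: "retreating",
    }

    def interpret(model, condition):
        """Produce behavior by interpreting the model, which can itself be examined and changed."""
        return actions[model[condition]]()

    print(interpret(behavior_model, "obstacle_ahead"))   # stopping and scanning

    # Because behavior is produced from the model, changing the model changes the behavior:
    behavior_model["obstacle_ahead"] = "retreat"
    print(interpret(behavior_model, "obstacle_ahead"))   # retreating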
In that sense, we call these systems “self-modeling” (as described in Landauer and Bellman (2002), Landauer et al. (2013)). While there are many ways such a system might be implemented, we have shown the efficacy of Wrappings as one way to do these things. Wrappings are described in Landauer and Bellman (1999), Landauer (2013), and in many other papers. We will not describe them here for lack of space, except to say that they provide a Knowledge-based integration infrastructure that is extremely flexible and expressive.

For us, modeling is pervasive throughout the lifetime of the system (we have written about this issue in Landauer and Bellman (2015a), Landauer and Bellman (2015b), Landauer and Bellman (2016a), for example). In fact, we expect the system itself to build models, as a way of coping with the vagaries and hazards of its environment, by retaining essential properties of it for analysis and planning, as well as perspective views of how those models change in time.

That puts model construction at the center of our considerations. The system will perform, as part of its continual contemplation, what we call “Behavior Mining”, an examination of the event and action history, for the purpose of discovering persistent structures or event patterns that can be used for system improvement. This includes history maintenance and management, so the system has access to the activity. Machine Learning techniques can be valuable here, but they are just one of the possible approaches (e.g., grammatical and event pattern inference), and in any case, they do not usually address data in the form of partially ordered sets of multiple-resolution descriptions of events.
This process entails a continual identification of common structures and behaviors, based on internal activity indicators. Our Wrapping integration infrastructure (see Landauer and Bellman (1999), Landauer and Bellman (2002), Landauer (2013)) facilitates such access, encapsulation of commonly co-occurring activities, and bottom-up evolution of empirical system structures (the structures change in response to behavior changes).
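One very simple form that Behavior Mining could take (a sketch of ours, much simpler than the partially ordered, multiple-resolution histories mentioned above) is to count recurring contiguous action sequences in the event history; sequences that keep recurring are candidates for encapsulation as a single unit.

    from collections import Counter

    def frequent_sequences(history, length=3, min_count=2):
        """Find contiguous action sequences that recur in the event history."""
        counts = Counter(tuple(history[i:i + length])
                         for i in range(len(history) - length + 1))
        return {seq: n for seq, n in counts.items() if n >= min_count}

    history = ["scan", "plan", "move", "scan", "plan", "move", "report",
               "scan", "plan", "move"]
    print(frequent_sequences(history))
    # {('scan', 'plan', 'move'): 3} -> a persistent structure worth encapsulating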
Another significant model process is Model Deficiency Analysis, based on the discovery of anomalies, such as noticing unusual or unexpected behavior (e.g., “that’s peculiar”), the discovery of novelty, including the exploitation of side effects, and other model evaluations (e.g., for resource cost-effectiveness or reliability).
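As an illustration only (not a mechanism from this paper), the simplest form of a “that’s peculiar” check compares an observation against what the current model predicts and flags large surprises for Model Deficiency Analysis; the z-score test and the threshold are assumptions.

    def peculiar(observed, predicted_mean, predicted_std, threshold=3.0):
        """Flag an observation that falls too far outside what the current model predicts."""
        if predicted_std == 0:
            return observed != predicted_mean
        return abs(observed - predicted_mean) / predicted_std > threshold

    # Example: the power model predicts 12.0 W with a spread of 0.5 W, but 17.2 W is observed.
    if peculiar(observed=17.2, predicted_mean=12.0, predicted_std=0.5):
        print("that's peculiar -> queue this episode for model deficiency analysis")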
To manage all of this complexity of knowledge embodied in models, we use processes we have called Dynamic Knowledge Management (see Landauer (2017) and Landauer and Bellman (2016b)), including knowledge refactoring and constructive forgetting.

The third enabler, Speculative Simulation, is a way for the system to try decisions out before committing to them, or just explore the space of possibilities (much like the “play” described in Bellman (2013)). This kind of analysis includes actions and adaptations, and clearly requires models of the effects of system choices. Most of the “what-if” scenario descriptions that we have seen are for risk management, usually from a business standpoint, but sometimes for engineering design. We have not seen any good ones for opportunity exploitation, that is, how to recognize that a certain process or resource could save time or improve accuracy, but this ability is clearly useful for systems in complex environments.
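A minimal sketch of trying decisions out before committing to them (ours; the effect model and the scoring function are placeholders) rolls each candidate action forward through a model of its effects and ranks the predicted outcomes, which can serve risk management and opportunity exploitation alike.

    def speculate(state, candidates, effect_model, score, horizon=5):
        """Try each candidate decision against a model of its effects before committing."""
        ranked = []
        for action in candidates:
            predicted = state
            for _ in range(horizon):                  # roll the model forward
                predicted = effect_model(predicted, action)
            ranked.append((score(predicted), action))
        return sorted(ranked, reverse=True)

    # Toy example: battery-limited exploration; the score rewards coverage but not a dead battery.
    effect_model = lambda s, a: {"covered": s["covered"] + (2 if a == "explore" else 0),
                                 "battery": s["battery"] - (3 if a == "explore" else 1)}
    score = lambda s: s["covered"] if s["battery"] > 0 else -100
    print(speculate({"covered": 0, "battery": 20}, ["explore", "hold"], effect_model, score))
    # [(10, 'explore'), (0, 'hold')] -> exploring looks both safe and more valuable here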
This is one of the hard parts, but also the most exciting for us. We are expecting the system to act as an experimental scientist, exploring and attempting to explain its environment. To make this effective, the system needs methods for hypothesis generation, experimental design, and experiment evaluation.

The most important and difficult questions to be answered here are:

• How does the system decide it needs to do an experiment? When it doesn’t know something.

• How does it decide that it needs to know something it doesn’t know? There are missing steps in an analysis or explanation.

• How does it know something is missing? There are processes for completing analyses or explanations that can identify that something is missing (it is still quite hard to determine what exactly is missing).

In some sense, this system is being constructed to explore approaches and potential answers to these questions.


Problems

There is of course a myriad of potential problems. The unfortunate part is that most of them cannot be overcome, only mitigated, and some not even that. On the other hand, it is our contention that almost all these same problems apply no matter what kind of system is constructed and deployed.

Bad models. When you live by your models, you die by your models: it is known that computer programs are easier to subvert when they are formally proven than otherwise (don’t attack the object being protected, attack the protector by side-stepping the formal model). The only thing we can do here is call for help. This is the only one of these problems that is caused by our emphasis on models, but we prefer the model-based approach anyway for its ability to be analyzed.

Lack of data. There are many things we can try to do: go get more, find workarounds (some workarounds can be planned in advance, in anticipation of certain kinds and levels of data unavailability). We can have the system make best guesses, using some kind of hazard-risk-consequence map, with a corresponding sensitivity analysis over potential decisions (which ones have the worst consequences, which ones can the system afford to treat in its current state).

Hardware failures are foreseeable from years of reliability studies, but specific instances are largely unpredictable. They are related to the second most difficult category, unforeseen circumstances and consequences, about which there are only a few things we can do, none of which are guaranteed to work at all (we have only the barest minimum of available responses to this problem Landauer (2019)). We can design the system with numerous and varied backups (alternative ways to carry out some tasks) and failsafes (consistent levels of reduced functionality), which may provide enough time for a problem to be addressed or even solved.

And of course, the most difficult of all: reliability of humans and other partners. We hope their training and knowledge suffices, as they hope ours does.

There are others, of course, but these are at least among the most pernicious and persistent ones.


Conclusions and Prospects

We hope this note contains enough description to explain why we think that an implementation can be constructed, what we intend its basic structure to be, and how we expect the system to satisfy the original expectations. We know that it does not explain how all of these properties will be achieved, because in many cases, the answer is not yet known. We expect that this kind of system architecture will allow us to study these (and other) hard questions. We think that systems with these capabilities could be acceptable as computational partners.


References

Alpern, B. and Schneider, F. B. (1986). Recognizing safety and liveness. Distributed Computing, 2:117–126.
Bellman, K., Botev, J., Diaconescu, A., Esterle, L., Gruhl, C., Landauer, C., Lewis, P. R., Nelson, P. R., Pournaras, E., Stein, A., and Tomforde, S. (2021). Self-improving system integration: Mastering continuous change. FGCS: Future Generation Computing Systems, Special Issue on SISSY, 117:29–46.

Bellman, K., Dutt, N., Esterle, L., Herkersdorf, A., Jantsch, A., Landauer, C., Lewis, P. R., Platzner, M., TaheriNejad, N., and Tammemäe, K. (2020). Self-aware cyber-physical systems. ACM Transactions on Cyber-Physical Systems (TCPS).

Bellman, K. L. (2013). SoS behaviors: Self-reflection and a version of structured “playing” may be critical for the verification and validation of complex systems of systems. In CSD&M 2013: The Fourth International Conference on Complex Systems Design & Management.

Bellman, K. L., Landauer, C., Nelson, P., Bencomo, N., Götz, S., Lewis, P., and Esterle, L. (2017). Self-modeling and self-awareness. In Kounev, S., Kephart, J. O., Milenkoski, A., and Zhu, X., editors, Self-Aware Computing Systems, chapter 9, pages 279–304. Springer.

Buschmann, F. (1996). Reflection. In Vlissides, J. M., Coplien, J. O., and Kerth, N. L., editors, Pattern Languages of Program Design 2, chapter 17, pages 271–294. Addison-Wesley.

Chilton, C., Jonsson, B., and Kwiatkowska, M. (2013). Assume-guarantee reasoning for safe component behaviours. In Păsăreanu, C. S. and Salaün, G., editors, FACS 2012: Formal Aspects of Component Software, volume 7684 of Lecture Notes in Computer Science. Springer.

Jones, N. D. (1996). Partial evaluation. Computing Surveys, 28(3).

Kiczales, G., des Rivières, J., and Bobrow, D. G. (1991). The Art of the Meta-Object Protocol. MIT Press.

Kounev, S., Lewis, P., Bellman, K. L., Bencomo, N., Cámara, J., Diaconescu, A., Esterle, L., Geihs, K., Giese, H., Götz, S., Inverardi, P., Kephart, J. O., and Zisman, A. (2017). The notion of self-aware computing. In Kounev, S., Kephart, J. O., Milenkoski, A., and Zhu, X., editors, Self-Aware Computing Systems, chapter 1, pages 3–16. Springer.

Kwiatkowska, M., Norman, G., Parker, D., and Qu, H. (2010). Assume-guarantee verification for probabilistic systems. In Esparza, J. and Majumdar, R., editors, TACAS 2010: Tools and Algorithms for the Construction and Analysis of Systems, volume 6015 of Lecture Notes in Computer Science. Springer.

Landauer, C. (2013). Infrastructure for studying infrastructure. In Proceedings ESOS 2013: Workshop on Embedded Self-Organizing Systems, San Jose, California.

Landauer, C. (2017). Mitigating the inevitable failure of knowledge representation. In Proceedings 2nd M@RT: The 2nd International Workshop on Models@run.time for Self-aware Computing Systems, Columbus, Ohio.

Landauer, C. (2019). What do I do now? Nobody told me how to do this. In Proceedings M@RT 2019: The 14th International Workshop on Models@run.time, Munich, Germany.

Landauer, C. and Bellman, K. L. (1999). Generic programming, partial evaluation, and a new programming paradigm. In McGuire, G., editor, Software Process Improvement, pages 108–154. Idea Group Publishing.

Landauer, C. and Bellman, K. L. (2002). Self-modeling systems. In Laddaga, R. and Shrobe, H., editors, Self-Adaptive Software, volume 2614 of Lecture Notes in Computer Science, pages 238–256. Springer.

Landauer, C. and Bellman, K. L. (2015a). Automatic model assessment for situation awareness. In Proceedings CogSIMA 2015: The 2015 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support, Orlando, Florida.

Landauer, C. and Bellman, K. L. (2015b). System development at run time. In Proceedings M@RT 2015: The 10th International Workshop on Models@Run-Time, Ottawa, Canada.

Landauer, C. and Bellman, K. L. (2016a). Model-based cooperative system engineering and integration. In Proceedings SiSSy 2016: The 3rd Workshop on Self-Improving System Integration, Würzburg, Germany.

Landauer, C. and Bellman, K. L. (2016b). Self-modeling systems need models at run time. In Proceedings M@RT 2016: The 11th International Workshop on Models@run.time, Palais du Grand Large, Saint Malo, Brittany, France.

Landauer, C., Bellman, K. L., and Nelson, P. R. (2013). Self-modeling for adaptive situation awareness. In Proceedings of CogSIMA 2013: The 2013 IEEE International Inter-Disciplinary Conference on Cognitive Methods for Situation Awareness and Decision Support, San Diego, California.

Lewis, P. R., Platzner, M., Rinner, B., Tørresen, J., and Yao, X. (2016). Self-Aware Computing Systems: An Engineering Approach. Springer, 1st edition.

Müller-Schloer, C., Schmeck, H., and Ungerer, T., editors (2011). Organic Computing - A Paradigm Shift for Complex Systems. Springer.

Schmidt, D. C. (2006). Model-driven engineering: Introduction to the special issue. IEEE Computer, 39:25–31.

Würtz, R. P., editor (2008). Organic Computing. Springer.