<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Model-based Diagnosis Patterns for Model Checking</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vincent LEILDE</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vincent RIBAUD</string-name>
          <email>ribaud@univ-brest.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Philippe DHAUSSY</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lab-STICC</institution>
          ,
          <addr-line>team MOCS, ENSTA Bretagne,rue François Verny,Brest</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lab-STICC, team MOCS, UBO</institution>
          ,
          <addr-line>avenue Le Gorgeu,Brest</addr-line>
        </aff>
      </contrib-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Context and challenges</title>
      <p>
        Model checking is a technique used to verify that a system's design satisfies
its requirements. Given a model of the design and the system's requirements
formulated as formal properties, the model can be checked [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]; if a
property is violated, the model checker provides the user with counterexamples,
i.e. execution sequences (traces) leading to an unexpected situation. The
engineer then analyzes the cause of the problem (the diagnosis activity),
corrects the models or properties, and carries out further verification
endeavors. A verification process may include many verification endeavors,
gathering successively refined models and properties, which should be recorded
in a dedicated form; Ruys [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] rightly
states that this is a challenge. Diagnosing the cause of a failed property is
also a challenging task. Under the assumption that the formal properties are
valid, and without neglecting the difficult problem of judging whether the
formalized problem statement (model, properties) is an adequate description of
the actual verification problem [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], we reduce the scope
here to modeling errors. Model-based diagnosis (MBD) is a promising approach
for diagnosing modeling errors; it consists in the interaction of observation
and prediction [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], where observation indicates what the device is actually doing and
prediction indicates what it is supposed to do. “The interesting event is any
difference between these two, a difference termed a discrepancy [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].” MBD presumes that “if the model
is correct, all the discrepancies between observation and prediction arise from
defects of the device [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].” Diagnosis thus consists in identifying the faulty components
responsible for the observed failure. When we apply this approach to model
checking, the design is the system under study, and we need a correct model of
the design in order to apply model-based reasoning. To obtain such a model, the
diagnostician can be assisted by methods like Case-Based Reasoning (CBR), which
consists in “solving a new problem by remembering a similar situation and by
reusing information and knowledge of that situation [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].” Unfortunately, these diagnostic methods and techniques are only
applicable if significant features of the cases are identified and formalized.
In conclusion, dealing with multiple data and diagnosing faults are challenges
that require the verification information to be well defined and managed over
time; to this end, we propose to define patterns that facilitate the
formalization and sharing of this information among engineers. We illustrate
the idea through a verification scenario on a sample case, with a focus on
diagnosis artifacts.
      </p>
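      <p>
        As a minimal sketch of the observation/prediction interaction (in Python, with invented signal names; this is an illustration of the MBD idea, not an implementation from the literature), a discrepancy is simply any point where the two disagree:
      </p>
      <p>
```python
# Minimal MBD sketch: a discrepancy is any signal whose predicted and
# observed values differ. The signal names are purely illustrative.
def discrepancies(predicted, observed):
    """Return the set of signals on which prediction and observation disagree."""
    return {s for s in predicted if predicted[s] != observed.get(s)}

# The model predicts both outputs low; the observed device shows out2 high.
predicted = {"out1": 0, "out2": 0}
observed = {"out1": 0, "out2": 1}
print(discrepancies(predicted, observed))  # -> {'out2'}
```
      </p>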
    </sec>
    <sec id="sec-2">
      <title>Diagnoses using patterns</title>
      <sec id="sec-2-1">
        <title>Using patterns in model checking</title>
        <p>
          The seminal book on design patterns defines design patterns as "descriptions of
communicating objects and classes that are customized to solve a general design
problem in a particular context" [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Derived from C. Alexander’s work [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], design
patterns describe object-oriented designs, but the idea can easily be
generalized to the context of model checking, as illustrated by Dwyer et al. [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], who provide a form of
pattern for specifying formal properties. Verification process tools heavily
depend on the structure and content of their cases for searching, versioning,
diagnosing, or adapting models; case descriptions share the goals of patterns,
as they must embody a consensus about a topic shared by a community.
Relationships between cases, such as uses or extends, also have to be expressed
in the patterns. In the model-checking community, benchmarks, i.e. collections
of models, are used to assess tool performance. Benchmark repositories organize
models, properties, verification runs and results. We used the structure of the
BEEM benchmark [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] as a basis for describing cases. In the rest of the article, the
names of potential pattern constituents are formatted in italics.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Bob and Alice share a yard</title>
        <p>
          Let us take a case borrowed from Lamport [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], with a viewpoint related to model checking. Alice and Bob share a
yard in an exclusive manner because their pets cannot be in the yard together.
Lamport's solution uses two threads sharing only two Boolean variables (flags),
each of which can be written by one thread and read by the other. For
verification purposes, the system based on the mutual exclusion algorithm
should be translated into a model describing how the system behaves, with
clearly defined components and functions (required for a good diagnosis, i.e.
finding the component/function that is the source of the fault). Lamport's
algorithm and a corresponding model in a concurrent automata-like language are
given in Fig. 1.
        </p>
        <p>Alice:
while (true) {
    flagAlice = up;
    while (flagBob == up) skip;
    catInYard;
    flagAlice = down;
}

Bob:
while (true) {
    flagBob = up;
    while (flagAlice == up) {
        flagBob = down;
        while (flagAlice == up) skip;
        flagBob = up;
    }
    dogInYard;
    flagBob = down;
}</p>
        <p>Fig. 1. Lamport's mutual exclusion algorithm</p>
        <p>
          To prove the model correct, a verification by model checking is
conducted against the following properties: Mutual Exclusion (P1), Bob and
Alice must not be together in the critical section (CS); Deadlock Freedom (P2),
it never happens that Alice or Bob are trying to enter their CS but neither
ever succeeds; Lockout Freedom (P3), on a fair path, if Alice or Bob tries to
access the CS, the attempt terminates; Fairness (P4), it is always possible for
Alice or Bob to access the CS. We reach a valid model through several
verification endeavors, until all properties are satisfied. A verification
endeavor is composed of different versions of behavioral models coupled with
properties and run results. Models are expressed as different implementations
(such as in Fig. 2) and properties are checked on these implementations during
verification runs.
        </p>
        <p>bool flagAlice, flagBob;
#define csa alice@CS
#define csb bob@CS
#define mtx !(csa &amp;&amp; csb)
#define ecsa alice@E_CS
#define ecsb bob@E_CS
#define lcsa alice@L_CS
#define lcsb bob@L_CS

active proctype alice(){
do
:: flagAlice=true;
E_CS: printf("Entering CS\n");
   do
   :: !flagBob -&gt; break
   :: else -&gt; skip;
   od;
CS: printf("CS\n");
L_CS: atomic{ flagAlice=false; printf("Leaving CS\n"); }
od;
}

active proctype bob(){
do
:: flagBob=true;
E_CS: printf("Entering CS\n");
   do
   :: flagAlice -&gt; flagBob=false;
      do
      :: !flagAlice -&gt; atomic{ flagBob=true; break; }
      :: else -&gt; skip;
      od
   :: !flagAlice -&gt; break
   od;
CS: printf("CS\n");
L_CS: atomic{ flagBob=false; printf("Leaving CS\n"); }
od;
}</p>
        <p>Fig. 2. An implementation of the model in SPIN's Promela language</p>
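        <p>
          The exhaustive exploration a model checker performs on such a model can be sketched in a few lines. The following is a hypothetical Python re-encoding of the two-flag model (not the SPIN model itself; the step granularity and state encoding are our own simplification), which enumerates every interleaving of Alice and Bob and checks mutual exclusion (property P1) over all reachable states:
        </p>
        <p>
```python
from collections import deque

# A global state is (pc_alice, pc_bob, flagAlice, flagBob).
# Alice's pc: 0 = raise flag, 1 = wait for Bob's flag, 2 = in CS.
# Bob's pc:   0 = raise flag, 1 = test Alice's flag, 2 = deferred (flag down),
#             3 = in CS.
A_CS, B_CS = 2, 3

def alice_steps(s):
    pa, pb, fa, fb = s
    if pa == 0:      # flagAlice = up
        yield (1, pb, True, fb)
    elif pa == 1:    # busy-wait until flagBob is down, then enter CS
        yield ((A_CS, pb, fa, fb) if not fb else (1, pb, fa, fb))
    else:            # leave CS: flagAlice = down
        yield (0, pb, False, fb)

def bob_steps(s):
    pa, pb, fa, fb = s
    if pb == 0:      # flagBob = up
        yield (pa, 1, fa, True)
    elif pb == 1:    # if Alice's flag is up, defer (lower own flag); else enter CS
        yield ((pa, 2, fa, False) if fa else (pa, B_CS, fa, fb))
    elif pb == 2:    # wait for flagAlice down, then re-raise flagBob and retest
        yield ((pa, 1, fa, True) if not fa else (pa, 2, fa, fb))
    else:            # leave CS: flagBob = down
        yield (pa, 0, fa, False)

def reachable():
    """Breadth-first exploration of every interleaving of the two processes."""
    init = (0, 0, False, False)
    seen, todo = {init}, deque([init])
    while todo:
        s = todo.popleft()
        for nxt in list(alice_steps(s)) + list(bob_steps(s)):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

# Mutual exclusion (P1): no reachable state has both processes in their CS.
assert all(not (pa == A_CS and pb == B_CS) for (pa, pb, _, _) in reachable())
```
        </p>
        <p>
          At this step granularity the assertion holds, matching the errors: 0 result reported for P1; liveness properties such as P4 would additionally require cycle detection, which SPIN provides and this sketch does not.
        </p>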
        <sec id="sec-2-2-1">
          <title>Run result</title>
          <p>Run results (Fig. 3): P1, P2 and P3 each report errors: 0; P4 reports an acceptance cycle, errors: 1.</p>
          <p>
            Verification runs are performed in a model checker under a given configuration; they report the status of each property (see Fig. 3), and the traces they produce trigger diagnosis, corrections, and new verification cycles. The SPIN model checker [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ] is used here.
          </p>
        </sec>
        <sec id="sec-2-2-2">
          <title>Implementation</title>
          <p>P1:
never {// ![]mtx
T0_init:
do:: atomic { (! ((mtx))) -&gt; assert(!(! ((mtx)))) }
:: (1) -&gt; goto T0_init
od;
accept_all: skip}</p>
          <p>P2: automatically checked by SPIN.</p>
        </sec>
        <sec id="sec-2-2-3">
          <title>Property</title>
          <p>P3 (for Alice; the results are the same for Bob):
never {// !([]&lt;&gt;csa-&gt;[](ecsa-&gt;&lt;&gt;csa))
T0_init:
do:: (! ((csa)) &amp;&amp; (ecsa)) -&gt; goto T0_S69
:: (1) -&gt; goto T0_init
od;
T0_S69:
do:: (! ((csa))) -&gt; goto T0_S69
od;}</p>
          <p>P4 (for Bob; no errors found for Alice):
never { // !([]&lt;&gt;csb)
T0_init:
do:: (! ((csb))) -&gt; goto accept_S4
:: (1) -&gt; goto T0_init
od;
accept_S4:
do:: (! ((csb))) -&gt; goto accept_S4
od;}</p>
          <p>Fig. 3. Properties implementation and verification in SPIN</p>
          <p>
            Each property is expressed as a never claim and checked successively by SPIN. Almost all properties are verified (errors: 0), except P4 for Bob, the fairness property, which is falsified: the algorithm does not give Bob a “fair” chance to try to enter his CS. We are thus faced with an unexpected situation, and one has to find the origin of the fault. Following an MBD approach, a model that predicts how the system behaves can be used to find the component or function that causes the problem. Even though this model is not available in model checking, a CBR approach can fortunately be applied to retrieve a similar case. A CBR system gathers previous cases (which we call Problem Cases) as a basis to reason on a new case (called the Sample Case), and helps to isolate the faulty element that leads to the discrepancies between observation (Sample Case) and prediction (Problem Case). The CBR process is conducted in four steps [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]: retrieve the most similar cases from the case base, reuse these cases to solve the new problem, revise the proposed solution, and retain the new problem and solution in the case base for later use.
          </p>
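          <p>
            The retrieve step can be sketched as a nearest-neighbor search over case features. The Python below is a hypothetical illustration; the case base, the feature names, and the use of Jaccard similarity are all our own assumptions for the example:
          </p>
          <p>
```python
# CBR "retrieve" sketch: each case is a set of descriptive features, and the
# most similar problem case maximizes Jaccard similarity to the sample case.
def jaccard(a, b):
    return len(a.intersection(b)) / len(a.union(b))

# Invented case base with invented feature names.
case_base = {
    "Mutual Exclusion": {"shared-resource", "two-processes", "safety", "liveness"},
    "Producer-Consumer": {"shared-buffer", "two-processes", "safety"},
    "Leader Election": {"ring-topology", "n-processes", "liveness"},
}

def retrieve(sample_features):
    """Return the name of the most similar problem case in the case base."""
    return max(case_base, key=lambda name: jaccard(case_base[name], sample_features))

print(retrieve({"shared-resource", "two-processes", "safety", "liveness"}))
# -> Mutual Exclusion
```
          </p>
          <p>
            In practice the hard part is choosing the features, which is precisely why the paper argues that case information must be formalized through patterns.
          </p>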
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>Reuse the problem case and solution</title>
        <p>
          Once the sample case has been assessed, the most similar problem case
is searched for in the case base. Our case addresses the classical problem case
of Mutual Exclusion, as it supports almost the same properties. A problem case
is a sample case that has been reified in the case base. It provides a
specification in the form of a structure and a set of abstract properties that
prescribe the system's behavior. A specification addresses the problem space
(not the solution space) and is by definition independent from any
implementation language; implementation languages rather belong to solution
implementations. A problem case provides a structure; for Mutual Exclusion, we
state that each process's program may be written as follows (see Fig. 4):
        </p>
        <p>initial declaration;
repeat forever
    noncritical section;
    entry;
    critical section;
    exit;
end repeat</p>
        <p>Fig. 4. Mutual Exclusion Structure</p>
        <p>
          The entry statement represents all the operation executions between a
noncritical section (NCS) and the subsequent critical section (CS); the exit
statement represents all the operations between a CS and the subsequent NCS;
the initial declaration describes the initial values of the variables. Abstract
properties may be expressed in a temporal logic such as LTL (see Fig. 5), which
is based upon the propositional calculus, where formulas are composed of atomic
propositions and operators [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], by contrast to assertions, which impose control points in the
system and cannot express all properties (deadlock, starvation). The abstract
properties are inspired by Lamport's work: Mutual Exclusion (P1), for any pair
of distinct processes i and j, no pair of operation executions CSi and CSj are
concurrent, i.e. □¬(CSi ˄ CSj); Deadlock Freedom (P2), a deadlock happens if
the system reaches a state in which some processes are trying to enter their CS
but no process ever succeeds; Lockout Freedom (P3), if a process on a fair path
tries to execute its CS, then it eventually succeeds, i.e. for each i,
□◊CSi→□(E_CSi→◊CSi); Fairness (P4), it should be possible for each process to
access the CS, i.e. (□◊CSi) ˄ (□◊CSj).
        </p>
        <p>Fig. 5. LTL operators in SPIN syntax: and (˄) is written &amp;&amp;; implies (→) is written -&gt;.</p>
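        <p>
          For intuition about the temporal operators, the always operator (□) of P1 can be evaluated over a finite trace. The sketch below is hypothetical Python with an invented trace, where each entry records which processes are currently in their CS (a genuine LTL check works over infinite behaviors; this only illustrates the safety case):
        </p>
        <p>
```python
# Evaluating the "always" operator of P1 on a finite trace.
# Each trace entry is the set of processes currently in their critical section.
def always(pred, trace):
    return all(pred(state) for state in trace)

# Invented trace: Alice and Bob alternate, never overlapping.
trace = [set(), {"alice"}, set(), {"bob"}, set()]

# P1: never both in the critical section at once.
p1 = lambda in_cs: not ("alice" in in_cs and "bob" in in_cs)

print(always(p1, trace))  # -> True
```
        </p>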
        <p>The structure and properties of the problem case have to be bound to the
sample case, including the difficult task of identifying and binding parts of
the model to equivalent parts of the problem case structure. Parts of the
implementation model may be annotated (E_CS, CS, L_CS) in order to apply the
properties according to the problem structure. The problem case is used to
predict the expected behavior, and thus fairness, by comparing the observed
models of Alice and Bob and deducing which component is not fair. To this
intent, a problem case should provide a set of potential causes of and
solutions to a failed property; for instance, since fairness property P4 is
violated by Bob, this yields the cause "Bob is too kind with Alice", and a
solution has to be found for this cause that will update the processes
accordingly.</p>
        <p>
          Thus a problem case may propose a set of solutions; for mutual
exclusion, many solutions with different properties exist, and in our example a
possible solution is Peterson’s algorithm [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], given in Fig. 6. The engineer then applies the adequate solution
to his or her faulty model; to do so, he or she must understand precisely how
to bind the solution to the sample case, by either copying or adaptation [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] techniques. Copying is the trivial type of reuse, where the
solution is directly transferred to the case as its solution, while adaptation
is relevant when differences have to be taken into account; it consists in
either reusing the past case solution (transformational reuse) or reusing the
past method that constructed the solution (derivational reuse). When a solution
is applied to the sample case, a new verification must be performed. If the
properties are still not satisfied, the diagnosis of the cause becomes clearer,
and revised solutions can be applied again. Finally, the new solution is
retained in the case base, together with the sample case reified as a problem
case. The most relevant features of the new problem case have to be made
explicit to facilitate further identification.
        </p>
        <p>bool flag[2]; flag[0] = false; flag[1] = false; int turn;

P0: flag[0] = true; turn = 1;
while (flag[1] &amp;&amp; turn == 1) {
    // entry
}
// critical section
// exit
flag[0] = false;

P1: flag[1] = true; turn = 0;
while (flag[0] &amp;&amp; turn == 0) {
    // entry
}
// critical section
// exit
flag[1] = false;</p>
        <p>Fig. 6. Peterson's algorithm</p>
        <p>
          Model checking relies on a large collection of heterogeneous
artifacts, and diagnosing faults is generally a tedious task that would benefit
from a knowledge-base system collecting and providing access to well-formalized
verification artifacts. In the formal methods community, patterns are mostly
understood in reference to the work of Dwyer et al. [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], and there is currently no common agreement on a formalization or
organization of the artifacts of the whole verification process, such as
models, traces or verification sessions. Hence this paper raises the question
of the unmet needs for such patterns and their possible relationships, with a
particular focus on the patterns required by diagnosis techniques, such as the
problem case or solution in CBR.
        </p>
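        <p>
          The reused solution of Fig. 6 can itself be cross-checked in the same explicit-state style. The sketch below is a hypothetical Python encoding of Peterson's algorithm (one atomic step per statement; the state layout is our own choice) that enumerates all interleavings and asserts mutual exclusion:
        </p>
        <p>
```python
from collections import deque

# A global state is (pc0, pc1, flag0, flag1, turn).
# Per-process pc: 0 = set flag, 1 = set turn, 2 = busy-wait test, 3 = in CS.
CS = 3

def moves(state):
    """Yield every successor state of one atomic step by either process."""
    pc0, pc1, f0, f1, turn = state
    # steps of P0
    if pc0 == 0:   yield (1, pc1, True, f1, turn)        # flag[0] = true
    elif pc0 == 1: yield (2, pc1, f0, f1, 1)             # turn = 1
    elif pc0 == 2: yield ((2, pc1, f0, f1, turn) if (f1 and turn == 1)
                          else (CS, pc1, f0, f1, turn))  # busy-wait test
    else:          yield (0, pc1, False, f1, turn)       # exit: flag[0] = false
    # steps of P1 (symmetric)
    if pc1 == 0:   yield (pc0, 1, f0, True, turn)        # flag[1] = true
    elif pc1 == 1: yield (pc0, 2, f0, f1, 0)             # turn = 0
    elif pc1 == 2: yield ((pc0, 2, f0, f1, turn) if (f0 and turn == 0)
                          else (pc0, CS, f0, f1, turn))
    else:          yield (pc0, 0, f0, False, turn)       # exit: flag[1] = false

def reachable_states():
    """Breadth-first exploration of all interleavings from the initial state."""
    init = (0, 0, False, False, 0)
    seen, todo = {init}, deque([init])
    while todo:
        for nxt in moves(todo.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

# Mutual exclusion: P0 and P1 are never both in their critical section.
assert all(not (pc0 == CS and pc1 == CS)
           for (pc0, pc1, f0, f1, turn) in reachable_states())
```
        </p>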
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Ben-Ari</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2008</year>
          )
          <article-title>Principles of the Spin Model Checker</article-title>
          . London: Springer.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Ruys</surname>
            ,
            <given-names>T. C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Brinksma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>Managing the verification trajectory</article-title>
          .
          <source>International journal on software tools for technology transfer</source>
          ,
          <volume>4</volume>
          (
          <issue>2</issue>
          ),
          <fpage>246</fpage>
          -
          <lpage>259</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Baier</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Katoen</surname>
            ,
            <given-names>J. P.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Principles of model checking</article-title>
          (Vol.
          <volume>26202649</volume>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>82</lpage>
          ). Cambridge: MIT press.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>1984</year>
          ).
          <article-title>Diagnostic reasoning based on structure and behavior</article-title>
          .
          <source>Artificial intelligence</source>
          ,
          <volume>24</volume>
          (
          <issue>1</issue>
          ),
          <fpage>347</fpage>
          -
          <lpage>410</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Hamscher</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          (
          <year>1988</year>
          ).
          <article-title>Model-based reasoning: Troubleshooting</article-title>
          .
          <source>In Exploring Artificial Intelligence: Survey Talks from the National Conferences in Artificial Intelligence</source>
          (pp.
          <fpage>297</fpage>
          -
          <lpage>346</lpage>
          ). Morgan-Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Aamodt</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Plaza</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>1994</year>
          ).
          <article-title>Case-based reasoning: foundational issues, methodological variations, and system approaches</article-title>
          .
          <source>AI Commun</source>
          .
          <volume>7</volume>
          ,
          <issue>1</issue>
          (March
          <year>1994</year>
          ),
          <fpage>39</fpage>
          -
          <lpage>59</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Gamma</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , Richard Helm, Ralph Johnson, and John Vlissides (
          <year>1995</year>
          ).
          <article-title>Design Patterns: Elements of Reusable Object-Oriented Software</article-title>
          .
          Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Alexander</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ishikawa</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Silverstein</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>1977</year>
          ).
          <article-title>A Pattern Language: Towns, Buildings, Construction</article-title>
          . New York: Oxford University Press.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Dwyer</surname>
            ,
            <given-names>M. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Avrunin</surname>
            ,
            <given-names>G. S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Corbett</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>Patterns in property specifications for finite-state verification</article-title>
          .
          <source>In Proceedings of the 1999 International Conference on Software Engineering</source>
          . IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Pelánek</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>BEEM: Benchmarks for explicit model checkers</article-title>
          .
          <source>In Model Checking Software</source>
          (pp.
          <fpage>263</fpage>
          -
          <lpage>267</lpage>
          ). Springer Berlin Heidelberg.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Lamport</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>1985</year>
          ).
          <article-title>Solved problems, unsolved problems and non-problems in concurrency</article-title>
          .
          <source>ACM SIGOPS Operating Systems Review</source>
          ,
          <volume>19</volume>
          (
          <issue>4</issue>
          ),
          <fpage>34</fpage>
          -
          <lpage>44</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Peterson</surname>
            ,
            <given-names>G. L.</given-names>
          </string-name>
          (
          <year>1981</year>
          ).
          <article-title>Myths about the mutual exclusion problem</article-title>
          .
          <source>Information Processing Letters</source>
          ,
          <volume>12</volume>
          (
          <issue>3</issue>
          ),
          <fpage>115</fpage>
          -
          <lpage>116</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>