<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Program Proceedings</string-name>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2011</year>
      </pub-date>
      <fpage>15</fpage>
      <lpage>56</lpage>
    </article-meta>
  </front>
  <body>
    <sec id="sec-15">
      <title>Program Chairs</title>
      <p>Malay K. Ganai, NEC Labs America, USA</p>
      <p>Armin Biere, Johannes Kepler University, Austria</p>
    </sec>
    <sec id="sec-16">
      <title>Technical Program Committee</title>
      <p>Clark W. Barrett, New York University, USA</p>
      <p>Armin Biere, Johannes Kepler University, Austria</p>
      <p>Alessandro Cimatti, Fondazione Bruno Kessler, Italy</p>
      <p>Cindy Eisner, IBM Haifa Research Lab, Israel</p>
      <p>Malay K. Ganai, NEC Labs America, USA</p>
      <p>Ganesh Gopalakrishnan, University of Utah, USA</p>
      <p>Daniel Kroening, Oxford University, UK</p>
      <p>Robert P. Kurshan, Cadence Design Systems, USA</p>
      <p>Ken McMillan, Microsoft Research, USA</p>
      <p>Chao Wang, Virginia Tech, USA</p>
    </sec>
    <sec id="sec-17">
      <title>Local Arrangement Chairs</title>
      <p>Sandip Ray, UT Austin, USA</p>
      <p>Nadia Papakonstantinou, NEC Labs America, USA</p>
      <p>Preface</p>
      <p>The first DIFTS (Design and Implementation of Formal Tools and Systems) workshop was held in
Austin, Texas on November 3, 2011, co-located with the Formal Methods in Computer-Aided Design
(FMCAD) conference. The workshop emphasized insightful experiences in tool and system design,
and provided a forum for sharing challenges and highly original solutions with groundbreaking
results. It took a broad view of the formal-tools area, soliciting contributions from domains
including decision procedures, verification, testing, validation, diagnosis, debugging, and
synthesis. It offered a forum for discussing the engineering aspects of tools and the design
decisions required to put them to practical use, and for a pragmatic view of practicing formal
methods.</p>
      <p>The workshop received 9 original submissions, of which 4 were accepted in the tool category
and 2 in the system category. There were also two invited talks: one by Andreas Kuehlmann,
Sr. VP of R&amp;D at Coverity, on “The pain of making research tools into software products”, and
another by Chris Morrison, Chief Architect at Real Intent Inc., on “From putty to product: what
it takes to bring a verification tool to market”.</p>
      <p>First of all we thank FMCAD’s steering committee for their approval of the workshop. We also
thank Anna Slobodova, David Rager, and Sandip Ray for their help in local arrangements and
Nadia Papakonstantinou for her help in web updates. We sincerely thank the program committee
members and sub reviewers for selecting the papers and providing candid review feedbacks to the
authors. Last but not least, we thank all the authors for contributing to the workshop and to all the
participants of the workshop.</p>
      <sec id="sec-17-1">
        <title>Malay K. Ganai and Armin Biere</title>
      </sec>
      <sec id="sec-17-2">
        <title>Program Chairs</title>
        <p>The pain of making research tools into software products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1</p>
        <p>Andreas Kuehlmann</p>
        <p>From Putty to Product: What it takes to bring a Verification Tool to Market . . . . . . . . . . . . . 2</p>
        <p>Chris Morrison</p>
        <p>An Application of Formal Methods to Cognitive Radios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3</p>
        <p>Konstantine Arkoudas, Ritu Chadha and Jason Chiang</p>
        <p>Data Structure Choices for On-the-Fly Model Checking of Real-Time Systems . . . . . . . . . . . . 13</p>
        <p>Peter Fontana and Rance Cleaveland</p>
        <p>metaSMT: Focus on Your Application not on Solver Integration . . . . . . . . . . . . . . . . . . . . . . . . . . 22</p>
        <p>Finn Haedicke, Stefan Frehse, Goerschwin Fey, Daniel Grosse and Rolf Drechsler</p>
        <p>A Study of Sweeping Algorithms in the Context of Model Checking . . . . . . . . . . . . . . . . . . . . . . 30</p>
        <p>Zyad Hassan, Yan Zhang and Fabio Somenzi</p>
        <p>Enhancing ABC for stabilization verification of SystemVerilog/VHDL models . . . . . . . . . . . . . 38</p>
        <p>Jiang Long, Sayak Ray, Baruch Sterin, Alan Mishchenko and Robert K. Brayton</p>
        <p>On Incremental Satisfiability and Bounded Model Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46</p>
        <p>Siert Wieringa</p>
        <p>The Pain of Making Research Tools into Software Products</p>
        <sec id="sec-17-2-1">
          <title>Andreas Kuehlmann</title>
          <p>Sr. VP of R&amp;D at Coverity</p>
          <p>President of IEEE Council on EDA (CEDA), and an IEEE Fellow</p>
          <p>Abstract: Building a research implementation of a new algorithm and validating it on a small
set of benchmarks is a key component of pushing the state of the art in design and verification
tools. Moreover, if the algorithm works, it is often also the most exciting phase of the long
journey toward having a new idea used by many people. The less exciting, and often more painful,
part is the productization of the research into a form that can be applied by hundreds or
thousands of users on a daily basis. In this talk we discuss some of the challenges that
developers encounter in a product development environment, from prototyping to code maintenance.
The presentation covers practical aspects of developing software in larger teams, from product
planning to deployment and support. We also elaborate on some of the opportunities for new
research to help the practical deployment and maintenance of software products.</p>
          <p>From Putty to Product: What it takes to bring a Verification Tool to Market</p>
        </sec>
        <sec id="sec-17-2-2">
          <title>Dr. Chris Morrison</title>
          <p>Chief Architect, Real Intent Inc.</p>
          <p>Abstract: For a new company interested in developing a formal verification product, the
temptation would be to believe that the majority of the time-to-market would be spent on
developing the core formal technology. As a corollary, if that core technology could be bought
off-the-shelf, the time-to-market would be greatly reduced. While that belief may have been true
for the first generation of formal verification products, it is, unfortunately, not credible today.
There are many "significant items" in state-of-the-art products that now consume considerable
resources. The input language has changed from BLIF to SystemVerilog '09 and VHDL '08.
Designs that the formal tools had to consider used to be single-clock and simple reset designs.
Now they have 10+ clocks, derived and gated clocks, complex resets and multiple functional
modes. The output is no longer a simple text file, but a hyperlinked report tying together VCD
waveforms, source RTL, and state machine and schematic viewers. Manual control with many
switches is no longer acceptable. A nearly automatic flow with very few switches is demanded.
The tool must deliver very high throughput for thousands of checks on multimillion-gate designs,
as well as an acceptable user experience on high-latency checks. User productivity is the
combination of tool run time, piloting convenience, and the debug experience. The infrastructure
needed to achieve these goals (which was originally simple and relatively quick to implement) is
now very complex, involving an intricate flow through a variety of substantial software packages
and multiple databases. With the goal of reaching a broad class of customers, current and future
verification products must be automated, easy to use, and able to reliably produce actionable
results. Delivering on that requires attention to every part of the product, not just the formal
verification technology.</p>
          <p>An Application of Formal Methods to Cognitive Radios</p>
          <p>Konstantine Arkoudas and Ritu Chadha and Jason Chiang</p>
          <p>Telcordia Research</p>
          <p>One Telcordia Drive</p>
          <p>Piscataway, NJ 08854</p>
          <p>{konstantine, chadha, chiang}@research.telcordia.com</p>
          <p>We discuss the design and implementation of a formal policy
system regulating dynamic spectrum access (DSA) for
cognitive radios. DSA policies are represented and manipulated
in a proof framework based on first-order logic with
arithmetic and algebraic data types. Various algebraic operations
combining such policies can be easily implemented in such
a framework. Reasoning about transmission requests is
formulated as a satisfiability-modulo-theories (SMT) problem.</p>
          <p>Efficient SMT solvers are used to answer the corresponding
queries and also to analyze the policies themselves. Further,
we show how to reason about transmission requests in an
optimal way by modeling the problem as an SMT instance of
weighted Max-SAT, and we demonstrate that this problem
too can be efficiently solved by cutting-edge SMT solvers.</p>
          <p>We also show that additional optimal operations on
transmission requests can be cast as classic optimization
problems, and to that end we give an algorithm for minimizing
integer-valued objective functions with a small number of
calls to an oracle SMT solver. We present experimental
results on the performance of our system, and compare it to
previous work in the field.</p>
          <p>INTRODUCTION</p>
          <p>One of the world’s most prized physical resources is the
electromagnetic spectrum, and particularly its radio frequency
(RF) portion, stretching roughly from 10 kHz to 300 GHz.</p>
          <p>The RF spectrum is so valuable that its allocation is strictly
regulated by world governments, and these days even small
parts of it can be sold for billions of dollars in spectrum
auctions.</p>
          <p>The research reported in this paper was performed in connection
with contract number W15P7T-08-C-P213 with the U. S. Army
Communications Electronics Research and Development
Engineering Center (CERDEC). The views and conclusions contained in this
document are those of the authors and should not be interpreted as
presenting the official policies or position, either expressed or
implied, of the U. S. Army CERDEC, or the U. S. Government unless
so designated by other authorized documents. Citation of
manufacturers or trade names does not constitute an official endorsement
or approval of the use thereof. The U. S. Government is
authorized to reproduce and distribute reprints for Government purposes
notwithstanding any copyright notation herein.</p>
          <p>
            As wireless devices continue to proliferate, demand for
access to RF spectrum is becoming increasingly pressing,
and increasingly difficult to achieve. After all, the (usable)
spectrum is finite, while the demand for it continues to
increase without bounds. However, it has been recognized [
            <xref ref-type="bibr" rid="ref13 ref14 ref29 ref30">13,
14</xref>
            ] that problems of spectrum scarcity and overuse arise not
so much from physical shortage of frequencies, but mostly
as a result of the centralized and rigid way in which spectrum
allocation has been managed. Spectrum has been allocated
in a static way: bands of the RF range are statically assigned
to licensed users on a long-term basis, who then have
exclusive rights to the corresponding region. Examples here
include the 824–849 MHz, 1.85–1.91 GHz, and 1.930–1.99
GHz frequency bands, which are reserved for FCC-licensed
cellular and personal communications services (PCS). Other
parts of the RF spectrum, by contrast, are unlicensed, and
anyone can transmit in those frequencies as long as their
power does not exceed the regulatory maximum.
Consequently, large portions of the RF range remain underutilized
for long periods of time, while other parts—such as the ISM
bands—are overutilized.
          </p>
          <p>
            Observations like these provided the impetus behind the
NeXt Generation (XG) Communications program, a
technology development program sponsored by DARPA [
            <xref ref-type="bibr" rid="ref25 ref9">9</xref>
            ], which
proposed opportunistic spectrum access as a means of
coping with spectrum scarcity. The underlying approach has
come to be known as dynamic spectrum access (DSA), and
has evolved into a general methodology for dealing with
spectrum scarcity [
            <xref ref-type="bibr" rid="ref33">17</xref>
            ]. The main idea is that a DSA
network has two classes of users: primary and secondary.
Primary users are licensed to use a particular spectrum band
and always have full access to that band whenever they need
it. Secondary users are allowed to use such spectrum
portions but only as long as their use does not interfere with the
operating needs of the primary users. The secondary users
are typically cognitive radios [
            <xref ref-type="bibr" rid="ref20 ref4">4</xref>
            ] that can dynamically adjust
their operating characteristics (such as waveform, power, etc.).
          </p>
          <p>Secondary users sense the spectrum for available
transmission opportunities, determine the presence of primary users,
and attempt to use the spectrum in a way that interferes as
little as possible with the activities of the primary users.</p>
          <p>
            Policies are used for specifying how secondary radio nodes
are allowed to behave. Policies consist of declarative
statements that dictate what secondary radios may or may not
do, without prescribing how they might do it [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ]. Different
policies may be applicable in different regions of the world.
          </p>
          <p>Even in one and the same region there may be multiple
policies in place, reflecting different constraints imposed by
different regulatory bodies. Policy systems for cognitive radios
should be able to load new policies on the fly.</p>
          <p>There are two key questions in designing such a policy
system, one representational and the other computational:
How should we represent policies, and how can we make
efficient use of them? Policies regulating DSA must be able to
express rules such as the following: “Allow a transmission if
its frequency is in a certain range and the power does not
exceed a certain limit, or if the transmission is an emergency.”
Thus, for representation purposes, we need at least the full
power of propositional logic, as well as numbers (both
integers and reals), and numeric comparison relations. It would
also be helpful to have algebraic datatypes at our disposal,
so that we could, for example, represent the mode of a
transmission as one of a finite number of distinct symbolic
values, such as everyDay or emergency. A certain amount of
quantification would also be convenient, so that we could,
e.g., quantify over all possible transmissions. We must also
be able to introduce arbitrary transmission parameters, so a
strong type system, preferably with subtyping, should be
available. Hence, first-order logic with arithmetic, and
perhaps additional features such as algebraic datatypes and
subsorting, would appear to be a natural representation choice,
provided that we can make efficient use of it, a point that we
will address shortly.</p>
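          <p>As a minimal sketch of these representational requirements (written in Python rather than the Athena encoding used below, with illustrative field names), the quoted rule combines boolean structure, numeric comparisons, and a symbolic mode type:</p>

```python
# Illustrative sketch only: a DSA-style allow rule over a transmission record,
# mirroring "allow if frequency in range and power under limit, or emergency".
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    EVERY_DAY = "everyDay"
    EMERGENCY = "emergency"

@dataclass
class Transmission:
    frequency: int   # assumed MHz
    power: float     # assumed dBm
    mode: Mode
    hours: int       # hour of day, 0-23

def allow(t: Transmission) -> bool:
    # Allow if frequency is in band, power is under the limit, the mode is
    # everyDay and it is morning -- or if the transmission is an emergency.
    in_band = 5000 <= t.frequency <= 5700
    return (in_band and t.power < 30.0 and t.mode is Mode.EVERY_DAY
            and t.hours <= 11) or t.mode is Mode.EMERGENCY
```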
          <p>So much for representation. What are the main
computational problems that a policy system must solve? The main
problem is this: determine whether an arbitrary transmission
request should be allowed or denied, given the current set of
policies. In fact, if this were all there was to using policies
for spectrum access, things would be relatively simple. But
we usually want the policy engine to do more. For example,
when a transmission request is not permissible, a simple
“denied” answer is not very useful. The policy system should
also tell the requesting radio why the request was denied,
and more importantly, what it would take for its request to
be allowed. As a simple example, the system might tell the
radio: “Your request cannot be allowed in its present form,
but it will be allowed if you only change your transmission
power to such-and-such level.” Further, in such cases the
policy system will usually have many choices as to which
parts of the transmission request to modify in order to make
it permissible. We then want it to make an optimal choice,
i.e., a choice that satisfies the original request to the greatest
possible extent.</p>
          <p>
            In this paper we present an implementation of a policy
system in Athena, an interactive proof system for
polymorphic multi-sorted first-order logic with equality, algebraic
data types, subsorting, and a higher-order functional
programming language [
            <xref ref-type="bibr" rid="ref19 ref3">3</xref>
            ]. Athena’s rich logic, sort system,
and programming language significantly facilitated the
representation and manipulation of policies. Another
innovation of our paper is the use of SMT solvers [
            <xref ref-type="bibr" rid="ref15 ref31">15</xref>
            ] for
reasoning with and about policies. The problem of
determining whether a transmission request is permissible by a given
policy can be couched as a satisfiability problem, namely,
the problem of determining whether the request is consistent
with the policy. So SAT solving would seem to be a
natural fit for this domain. However, policy rules in this domain
also make heavy use of equality and numeric relations, not to
mention symbolic values such as transmission modes. Thus,
viewed as an abstract syntax tree, the body of a policy rule
has an arbitrarily complicated boolean structure internally,
with relational atoms at the leaves, whose semantics come
from standard theories such as those of arithmetic,
equality, and so on. That is what makes SMT solving an ideal
paradigm for this problem. We will show that the problem
of making optimal adjustments to transmission parameters
can also be formulated quite naturally in this setting as a
weighted Max-SAT problem, many instances of which can
be efficiently solved despite the problem’s high theoretical
complexity. Indeed, competitive SMT solvers such as CVC3
[
            <xref ref-type="bibr" rid="ref23 ref7">7</xref>
            ] and Yices [
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ] are highly optimized programs that can
readily decide extremely large formulas. As a quick
comparison, another recently built policy system for spectrum
access, Bresap, which uses BDDs as its underlying
technology, cannot handle policies with more than 20 atomic
expressions [1, p. 83]. By contrast, our system can easily
handle policies with many thousands of atomic expressions.
          </p>
          <p>The remainder of this paper is structured as follows. The
next section describes the representation of spectrum-access
policies in Athena.1 Section 3 discusses the integration of
SMT solvers with Athena, and shows how to reduce the
problem of reasoning with spectrum-access policies to SMT
solving. In section 4 we model the problem of making
optimal changes to transmission requests as a Max-SAT
problem, and we describe how our system models and solves
such problems. We also present an optimizing algorithm
that uses a version of binary search in order to minimize
integer-valued objective functions with very few calls to an
oracle SMT solver (at most log(n) such calls, where n is
the maximum value that can be attained by the objective
function). This algorithm can be used to further optimize
transmissions, e.g., by minimizing the distance between the
values desired by the cognitive radio and the values allowed
by the policy. Section 5 presents results evaluating the
performance of our policy engine. Finally, section 6 discusses
related work and concludes.</p>
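          <p>The binary-search minimization mentioned above can be sketched in Python, with a hypothetical monotone feasible(k) oracle standing in for the SMT query “is there a model whose objective value is at most k?” (an assumption of this sketch, not the authors’ Athena code):</p>

```python
# Sketch: minimize an integer objective in [0, n] with roughly log2(n) calls
# to a yes/no oracle, as in the paper's optimization of transmission values.
def minimize(feasible, n):
    """Return (least v in [0, n] with feasible(v), number of oracle calls),
    or (None, calls) if even feasible(n) fails. Assumes feasible is monotone:
    once a bound is attainable, every larger bound is too."""
    calls = 0
    def oracle(k):
        nonlocal calls
        calls += 1
        return feasible(k)
    if not oracle(n):
        return None, calls
    lo, hi = 0, n              # invariant: feasible(hi) holds
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(mid):
            hi = mid           # a model with objective <= mid exists
        else:
            lo = mid + 1       # the minimum must be above mid
    return lo, calls
```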
          <p>POLICY REPRESENTATION</p>
          <p>1While some Athena code is presented, knowledge of the language
is not necessary. Any reader familiar with formal methods and
functional programming languages should be able to follow the
code.</p>
          <p>The set of radio transmissions is treated as an abstract data
type with fields such as frequency, power, mode, etc. The
abstract domain of transmissions is introduced in Athena as
follows:</p>
          <p>domain Transmission</p>
          <p>A transmission field (or “parameter”) can then be introduced
as a function from transmissions to values of some
appropriate sort. For illustration purposes, we demonstrate the
declaration of the following parameters: frequency, power,
mode, and time. Additional parameters can be introduced as
needed.</p>
          <p>declare frequency: [Transmission] -&gt; Int
declare power: [Transmission] -&gt; Real
declare mode: [Transmission] -&gt; Mode
declare hours, minutes: [Transmission] -&gt; Int</p>
          <p>Here Int and Real are built-in Athena sorts
representing the sets of integers and real numbers, respectively, with
Int a subsort of Real. The sort Mode is a finite algebraic
datatype:</p>
          <p>datatype Mode := emergency | everyDay | ...</p>
          <p>There are two distinguished unary predicates on the set of
all possible transmissions: allow and deny. These
predicates are used by policies to specify which transmissions are
considered permissible and which are not. More concretely,
a policy is represented as a pair of sentences: an allow
condition and a deny condition. An allow condition (or AC for
short) is of the form:2</p>
          <p>(forall ?x:Transmission. allow ?x &lt;==&gt; ...) (1)</p>
          <p>while a deny condition (DC) is of the form:</p>
          <p>(forall ?x:Transmission. deny ?x &lt;==&gt; ...) (2)</p>
          <p>Either condition may be absent, i.e., a policy may specify
only an allow condition but no deny condition, or only a
deny condition but no allow condition. An absent condition
is represented by the unit value, (). The constructor for
policies is therefore a procedure that takes two conditions
and simply puts them together in a two-element list:</p>
          <p>define make-policy :=
  lambda (AC DC) [AC DC]</p>
          <p>There are two destructor or selector procedures, one for
retrieving each component of the policy:</p>
          <p>define [get-AC get-DC] := [first second]</p>
          <p>An important operation is the application of a given
policy condition C of the form (1) or (2) to a term t of sort
Transmission. Let body be whatever sentence would
normally be where the ellipses appear in (1) (or in (2)).
Then the result of this application is the sentence
obtained from body by replacing every free occurrence of the
quantified variable ?x:Transmission by the term t. In
Athena code, this procedure is defined as follows:</p>
          <p>2A more accurate term instead of allow would be “allow
provisionally,” because, as we will see, for a transmission request to be
granted, the allow condition must hold and the deny condition must
not hold. With this caveat in mind, we will continue to use the term
allow in the interest of simplicity.</p>
          <p>define apply :=
  lambda (C t)
    match C {
      (forall x ((_ x) &lt;==&gt; body)) =&gt;
        (replace-var x t body) }</p>
          <p>The built-in ternary Athena procedure replace-var is
such that (replace-var x t p) produces the result
obtained from the sentence p by replacing every free
occurrence of the variable x by the term t. As a concrete example,
here is a sample AC:</p>
          <p>define AC1 :=
  (forall ?x .
    allow ?x &lt;==&gt;
      frequency ?x in [5000 5700] &amp;
      power ?x &lt; 30.0 &amp;
      mode ?x = everyDay &amp; hours ?x &lt;= 11)</p>
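          <p>A rough Python analogue of replace-var, over sentences encoded as nested tuples (illustrative only; binder-aware substitution, which Athena’s built-in procedure handles, is omitted here):</p>

```python
# Sketch: sentences as nested tuples, variables as strings tagged with '?'.
# replace_var substitutes a term for every occurrence of a variable, the way
# apply instantiates a policy condition's quantified variable with a term.
# (No treatment of nested binders in this toy version.)
def replace_var(var, term, sentence):
    if sentence == var:
        return term
    if isinstance(sentence, tuple):
        return tuple(replace_var(var, term, s) for s in sentence)
    return sentence

# An AC body with ?x free, e.g. frequency ?x <= 5700 & power ?x < 30.0:
ac_body = ("and", ("<=", ("frequency", "?x"), 5700),
                  ("<", ("power", "?x"), 30.0))

# Instantiate the condition for a concrete transmission term t1:
applied = replace_var("?x", "t1", ac_body)
```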
          <p>It says that a transmission is (provisionally) permissible iff
its frequency is in the 5000–5700 MHz range; its power is
less than 30.0 dBm; its mode is everyDay; and the
transmission’s time is no later than 11 in the morning. Here in is
a binary procedure defined as follows:</p>
          <p>define in :=
  lambda (x range)
    match range {
      [a b] =&gt; (a &lt;= x &amp; x &lt;= b)}</p>
          <p>(Note that variables do not need to be explicitly annotated
with their sorts. Athena has polymorphic sort inference, so
a Hindley-Milner algorithm will recover the most general
possible sorts of all variable occurrences.) We can now
construct a policy with the above AC and with no DC as follows:</p>
          <p>define policy-1 := (make-policy AC1 ())</p>
          <p>Our second policy example has both an AC and a DC:</p>
          <p>define policy-2 :=
  (make-policy
    (forall ?x . allow ?x &lt;==&gt; mode ?x = emergency)
    (forall ?x . deny ?x &lt;==&gt; power ?x &gt; 25.0))</p>
          <p>A key operation on policies is that of merging. New policies
may be downloaded into the policy database at any given
time. A newly acquired policy must be integrated with the
existing policy in place to produce a single merged policy
with a new AC and a new DC. Merging is therefore a binary
operation. The current definition of merging two policies p1
and p2 is simple: it is disjunction on both the AC and the DC.
Specifically, the AC of the merged policy allows a transmission
t iff p1 allows t or p2 allows t; and its DC denies t iff p1
denies t or p2 denies t. Merging is implemented as an Athena
procedure. Other definitions of merging are possible, and these
could be straightforwardly implemented in the same way. As an
example, here is the result of merging policy-1 and
policy-2, as these were defined above:</p>
          <p>define merged-policy :=
  (merge-policies policy-1 policy-2)</p>
          <p>&gt;(get-AC merged-policy)
Sentence:
(forall ?v4:Transmission
  (iff (allow ?v4)
       (or (and (and (&lt;= 5000 (frequency ?v4))
                     (&lt;= (frequency ?v4) 5700))
                (and (&lt; (power ?v4) 30.0)
                     (and (= (mode ?v4) everyDay)
                          (&lt;= (hours ?v4) 11))))
           (= (mode ?v4) emergency))))</p>
          <p>&gt;(get-DC merged-policy)
Sentence: (forall ?v5:Transmission
  (iff (deny ?v5)
       (&gt; (power ?v5) 25.0)))</p>
          <p>(Note that while Athena accepts input in either infix or
prefix form, sentences are output in prefix form for indentation
purposes.)</p>
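          <p>The disjunctive merge semantics can be sketched in Python with a policy as a pair of predicates and None for an absent condition (hypothetical helper names; the actual merge-policies operates on Athena sentences, not closures):</p>

```python
# Sketch: a policy is a pair (allow?, deny?) of predicates over transmissions;
# an absent condition is None. Merging is disjunction on both components.
def merge_policies(p1, p2):
    ac1, dc1 = p1
    ac2, dc2 = p2
    def disj(c1, c2):
        if c1 is None:
            return c2
        if c2 is None:
            return c1
        return lambda t: c1(t) or c2(t)
    return (disj(ac1, ac2), disj(dc1, dc2))

# policy-1: allow in-band, low-power, everyDay, morning transmissions; no DC.
policy_1 = (lambda t: 5000 <= t["freq"] <= 5700 and t["power"] < 30.0
                      and t["mode"] == "everyDay" and t["hours"] <= 11,
            None)
# policy-2: allow emergencies; deny power above 25.0.
policy_2 = (lambda t: t["mode"] == "emergency",
            lambda t: t["power"] > 25.0)

merged_ac, merged_dc = merge_policies(policy_1, policy_2)
```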
          <p>We define a transmission request as a set of constraints
over some object of sort Transmission, typically just
a free variable ranging over Transmission. These
constraints, which are usually desired values for some, or all,
of the transmission parameters, can be expressed simply as
a sentence. A sample transmission request:</p>
          <p>define request-1 := (frequency ?t = 150 &amp;
                     mode ?t = everyDay &amp;
                     hours ?t = 16)</p>
          <p>This is a conjunction specifying that the frequency of the
desired transmission (represented by the free variable ?t)
should be 150, its mode should be everyDay, and its time
should be between 4:00 and 4:59 PM (inclusive).</p>
          <p>A transmission request may be incomplete, i.e., it may not
specify desired values for every transmission parameter. The
preceding request was incomplete, since it did not specify
values for the power and minutes parameters.</p>
          <p>POLICY REASONING</p>
          <p>The main task of the policy system is this: determine
whether the policies currently in place allow or deny a given
transmission request. This is in fact the simplest
formulation of the core reasoning problem, in that it only requires
a yes/no answer from the policy system. But in general the
policy system needs to solve more interesting versions of this
problem, namely: If the transmission request is granted, find
appropriate values for any missing parameters, in case the
request was incomplete; if the request is not granted, find
appropriate values for the transmission parameters that would
render the transmission permissible. Moreover, in the
second case, we often want to compute values in a way that is
optimal w.r.t. the given request, e.g., so that the original
transmission request is disrupted as little as possible. We
discuss such optimality issues in section 4.</p>
          <p>
            The above problems are naturally formulated as
satisfiability problems. However, the most appropriate satisfiability
framework here is not that of straight propositional logic, but
rather that of satisfiability modulo theories, or SMT for short
[
            <xref ref-type="bibr" rid="ref15 ref31">15</xref>
            ]. In the SMT paradigm, the solver is given an arbitrary
formula of first-order logic with equality and its task is to
determine whether or not the formula is satisfiable with respect
to certain background theories. Typical background theories
of interest are integer and/or real arithmetic (typically linear,
but not necessarily), inductive data types (free algebras), the
theory of uninterpreted functions, as well as theories of lists,
arrays, bit vectors, etc. Hence, most of the function symbols
and predicates that appear in the input formula have fixed
interpretations, given by these background theories. An SMT
solver will not only determine whether a sentence p is
satisfiable; if it is, it will also provide appropriate satisfying values
for the free variables and/or constants that occur in p.
          </p>
          <p>
            Athena is integrated with a number of SMT solvers, such
as CVC3 and Yices; the one used for this project is Yices [
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ],
mostly because it can solve Max-SAT problems. The main
interface is given by a unary Athena procedure smt-solve,
which takes an arbitrary first-order sentence p and attempts
to determine its satisfiability with respect to the
appropriate theories. If successful, smt-solve will return a list of
values for all free variables and/or constants that occur in p.
          </p>
          <p>Some examples:</p>
          <p>datatype Day :=
  Mon | Tue | Wed | Thu | Fri | Sat | Sun</p>
          <p>&gt;(smt-solve ?x = 2 &amp; ?y = 3.4 |
            ?x = 5 &amp; ?d = Mon)</p>
          <p>List: [(= ?y 3.4) (= ?x 2) (= ?d Mon)]</p>
          <p>The input to smt-solve here was a disjunction of two
conjunctions. The free variable ?d ranges over the datatype
Day, while ?x and ?y range over Int and Real,
respectively. The output is a list of identities providing values for
each free variable that render the input satisfiable. Because
the argument to smt-solve is a sentence, i.e., a native
Athena data value, this provides a very flexible and
high-bandwidth programmable interface to SMT solvers. This
interface proved extremely useful for our system.</p>
          <p>The policy reasoning problems described earlier have a
natural formulation in SMT. Specifically, consider an
arbitrary transmission request tr, i.e., a first-order sentence with
a free variable x ranging over Transmission (there may
be other free variables as well). To determine whether tr
is allowed by the current policy: (1) We construct a longer
request, tr', which is the conjunction of (a) tr; (b) the
application of the current policy’s AC to x; and (c) the application
of the negation of the current policy’s DC to x:
tr &amp; (apply (get-total-AC) x) &amp;
~ (apply (get-total-DC) x)</p>
          <p>The procedure get-total-AC returns the AC of the
current (“total”) policy; likewise for get-total-DC. So tr'
represents all the constraints imposed on the requested
transmission, both by the original transmission request, tr, and
by the policy itself, whose AC must hold for the
transmission variable x while its DC must not. (2) We then run
smt-solve on tr'. If the result is a satisfying assignment,
then the request is granted, and all we need to do is report
values for any missing parameters (parameters that did not
figure in the given request). If, by contrast, smt-solve
determines that tr' is unsatisfiable, then tr in its given form
must be rejected. In that case we make a blank request
consisting of the application of the policy’s AC to x conjoined
with the application of the negation of the policy’s DC to
x, and run smt-solve on it. This blank request therefore
imposes no constraints at all on the transmission apart from
those imposed by the policy. If the result is a satisfying
assignment, we provide it to the user, otherwise we report that
the current policy is unsatisfiable.</p>
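          <p>As an illustration of the two-step evaluation just described, here is a minimal Python sketch. It is not the Athena implementation: smt_solve is a hypothetical stand-in that brute-forces a tiny finite parameter space, whereas the real system hands the conjoined sentence to Yices; the policy, request, and domains below are invented for illustration.</p>

```python
from itertools import product

# Tiny stand-in domains for a transmission record. A real SMT solver
# searches symbolically; we enumerate for illustration only.
DOMAINS = {"frequency": list(range(5000, 9001, 500)),
           "power": [10.0, 20.0, 30.0]}

def smt_solve(constraint):
    """Brute-force stand-in for the SMT call: return a satisfying
    assignment (a dict) or None."""
    for values in product(*DOMAINS.values()):
        cand = dict(zip(DOMAINS, values))
        if constraint(cand):
            return cand
    return None

def evaluate_request(request, ac, dc):
    """Step 1: conjoin the request with the policy's AC and negated DC.
    Step 2: if that is unsatisfiable, try a 'blank' request carrying
    only the policy's own constraints."""
    model = smt_solve(lambda t: request(t) and ac(t) and not dc(t))
    if model is not None:
        return "allowed", model
    model = smt_solve(lambda t: ac(t) and not dc(t))
    if model is not None:
        return "denied; permissible alternative", model
    return "policy unsatisfiable", None

# Hypothetical policy and request, not taken from the paper.
ac = lambda t: t["frequency"] <= 7000 and t["power"] <= 20.0
dc = lambda t: t["power"] > 15.0 and t["frequency"] > 6500
req = lambda t: t["frequency"] == 6000   # frequency fixed, power left open
print(evaluate_request(req, ac, dc))
```

          <p>Here the returned model supplies the missing power parameter, mirroring the "values for missing parameters" behavior described above.</p>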
          <p>The Athena code for this algorithm is expressed in a
procedure evaluate-request, which accepts an arbitrary
request and processes it as described above. Here is the
output for the example of the previous section:
&gt;(evaluate-request request-1)
Transmission allowed. Appropriate values
for missing parameters:
[(= (power ?t1) 20.0)]</p>
          <p>COMPUTING OPTIMAL ADJUSTMENTS TO TRANSMISSION PARAMETERS</p>
          <p>When a transmission request is denied, there are usually
many different ways of modifying it so as to make it
permissible. For instance, the solver could change the desired
transmission’s time; or it could change its frequency; or it
could change its power and mode; or it could change all of
the above. Some of these actions may be preferable to
others. For instance, the radio might be less willing to change
the time of the transmission, or its power level, rather than
the frequency. In such cases we want the policy system to
return a satisfying assignment that is optimal in the sense of
respecting as many such preferences of the radio as possible.</p>
          <p>Our system achieves this in a flexible way by formulating
the problem as an (SMT) instance of Max-SAT. In its
transmission request, the radio can provide a weight wi along
with the desired value vi of each transmission parameter pi.</p>
          <p>The weight wi reflects the importance that the radio attaches
to pi taking the value vi. The SMT solver will then try to
find a satisfying assignment for the request that has
maximal total weight. In that case, the request is not just a list
of constraints [c1 ... cn] but a list of pairs of constraints
and weights [[c1 w1] ... [cn wn]]. A sample
transmission request might then be:
define weighted-request :=
[[(frequency ?t = 8000) 10]
[(power ?t = 35.0) 15]
[(hours ?t = 13) 30]]
indicating that the relative importance of the frequency being
8000 is 10, that of the power being 35.0 is 15, and that of
hours being 13 is 30. Suppose further that the policy in
place has no DC and a disjunctive AC:
define AC1 :=
(forall ?t . allow ?t &lt;==&gt;
(frequency ?t in [5000 7000] &amp;</p>
          <p>power ?t &lt;= 30.0 &amp; hours ?t &gt; 12) |
(frequency ?t in [6000 9000] &amp;</p>
          <p>power ?t &lt;= 40.0 &amp; hours ?t &lt;= 8))
define policy := (make-policy AC1 ())
Now consider evaluating weighted-request with
respect to this policy. Clearly, the request cannot be allowed
as is. We can make it permissible in more than one way:
(a) we could demand a change of frequency and power in
accordance with the values prescribed by the first branch of
AC1, while keeping the hours parameter at the requested
value of 13; or (b) we could demand a change of the hours
parameter only, in accordance with the second disjunctive
branch of AC1, while keeping the desired frequency and power
values; or (c) we could demand that all three parameter
values change. From these possibilities, only (a) is optimal, in
that it honors the original request to the greatest extent
possible (as determined by the given weights). Running this
example in our system results in the following output:
&gt;(evaluate-request weighted-request)
The following parts of the request
could not be satisfied:
(= (power ?t) 35.0) (= (frequency ?t) 8000)
Here is a maximally satisfying assignment:
[(= (mode ?t) everyDay)
(= (frequency ?t) 5999)
(= (power ?t) 30) (= (hours ?t) 13)]
By contrast, if we had changed the weight of the hours
parameter from 30 to 20, the result would then change the
hours parameter instead of the frequency and power, since
retaining the values of the latter two parameters would result
in a maximum weight of 25.</p>
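          <p>The weighted evaluation can be sketched as follows. This is a hypothetical Python stand-in that maximizes satisfied weight by enumeration over a small candidate domain (the real system delegates this to Yices's Max-SAT solving), with the policy's AC treated as a hard, infinite-weight constraint; the candidate value lists are invented to include the competing options discussed above.</p>

```python
from itertools import product

INF = float("inf")   # "infinite" weight marks a hard constraint

def max_sat(weighted, domains):
    """Brute-force weighted Max-SAT stand-in: return the assignment with
    the greatest total weight of satisfied soft constraints, rejecting
    any assignment that violates a hard (infinite-weight) constraint."""
    best, best_w = None, -1
    for values in product(*domains.values()):
        cand = dict(zip(domains, values))
        total, ok = 0, True
        for constraint, weight in weighted:
            if constraint(cand):
                total += weight if weight != INF else 0
            elif weight == INF:
                ok = False      # hard constraint violated
                break
        if ok and total > best_w:
            best, best_w = cand, total
    return best, best_w

# The disjunctive AC1 and the weighted request from the text.
ac1 = lambda t: (5000 <= t["frequency"] <= 7000 and t["power"] <= 30.0
                 and t["hours"] > 12) or \
                (6000 <= t["frequency"] <= 9000 and t["power"] <= 40.0
                 and t["hours"] <= 8)
weighted = [(ac1, INF),
            (lambda t: t["frequency"] == 8000, 10),
            (lambda t: t["power"] == 35.0, 15),
            (lambda t: t["hours"] == 13, 30)]
# Hypothetical candidate values covering the competing alternatives.
domains = {"frequency": [5999, 8000], "power": [30.0, 35.0], "hours": [8, 13]}
print(max_sat(weighted, domains))
```

          <p>With these weights the best assignment keeps hours = 13 (weight 30) and moves frequency and power into the first branch of AC1; keeping frequency and power instead would score only 10 + 15 = 25, matching the discussion in the text.</p>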
          <p>Note that weights can be arbitrary integers or a special
“infinite” token, indicating a hard constraint that must be
satisfied. In fact this infinite weight is the one that Athena
attaches to the constraints obtained from the AC (and the
negation of the DC) of the current policy. But radios can
also use infinite weights in their requests.</p>
          <p>Occasionally there may be additional requirements on the
assignments returned by the policy system, beyond
honoring the original request to the greatest extent possible. For
example, if a parameter value must change, we may want
the new value to be as close as possible to the original
requested value. If we are not allowed to transmit at the exact
desired time, for instance, we may want to transmit as close
to it as possible (as can be allowed by the currently installed
policies). To meet such requirements we generally need the
ability to perform optimization by minimizing some
objective function, in this case the absolute difference between
desired and permissible parameter values. Most SMT solvers
do not perform optimization (apart from Max-SAT, in the
case of Yices, though see below), but we can efficiently
implement an integer optimizer in terms of SMT solving. The
idea is to use binary search in order to discover the
optimal cost with as few calls to the SMT solver as possible: at
most O(log n) calls, where n is the maximum value that the
objective function can attain. Specifically, let c be an
arbitrary constraint that we wish to satisfy in such a way that
the value of some “cost” term t is minimized, where max is
the maximum value that can be attained by the cost function
(represented by t). (If this value is not known a priori, it
can be taken to be the greatest positive integer that can be
represented on the computer.) The algorithm is the
following: We first try to satisfy c conjoined with the constraint
that the cost term t is between 0 and half of the maximum
possible value: 0 &lt;= t &lt;= (max div 2). If we fail, we
recursively call the algorithm and try to satisfy c augmented with
the constraint (max div 2) + 1 &lt;= t &lt;= max. Whereas, if
we succeed, we recursively call the algorithm and try to
satisfy c augmented with the constraint 0 &lt;= t &lt;= (max div 4).</p>
          <p>Some care must be taken to get the bounds right on each
call, but this algorithm is guaranteed to converge to the
minimum value of t for which c is satisfied, provided of course
that the original constraint c is satisfiable for some value of
t. This algorithm is implemented by a ternary Athena
procedure smt-solve-and-minimize, such that</p>
          <p>(smt-solve-and-minimize c t max)
returns a satisfying assignment for c that minimizes t (whose
maximum value is max).</p>
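          <p>The binary-search skeleton of smt-solve-and-minimize can be sketched in Python. This is a stand-in, not the Athena code: solve_in_range(lo, hi) models one SMT call that returns a (model, cost) pair when the constraint is satisfiable with lo &lt;= cost &lt;= hi, or None otherwise, and the example oracle is a hypothetical one-variable constraint solved by enumeration.</p>

```python
def smt_solve_and_minimize(solve_in_range, max_cost):
    """Find a model minimizing the cost term with O(log max_cost) oracle
    calls. On success, tighten the upper bound using the cost of the
    model actually returned (the shortcut mentioned in the text),
    rather than just the midpoint."""
    lo, hi = 0, max_cost
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        found = solve_in_range(lo, mid)
        if found is not None:
            best = found
            hi = found[1] - 1      # search strictly below the found cost
        else:
            lo = mid + 1           # no model with cost in [lo, mid]
    return best

def solve_in_range(lo, hi):
    """Stand-in for one SMT call: a hypothetical constraint z in
    [700, 800], with cost term |z - 922|."""
    for z in range(700, 801):
        cost = abs(922 - z)
        if lo <= cost <= hi:
            return {"z": z}, cost
    return None

print(smt_solve_and_minimize(solve_in_range, 10**6))   # minimum cost 122, at z = 800
```

          <p>Each failed call raises the lower bound past the midpoint, and each successful call drops the upper bound below the cost of the model just found, so the search converges to the least satisfiable cost.</p>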
          <p>For example, suppose that x, y, and z are integer variables
to be solved for:
define [x y z] := [?x:Int ?y:Int ?z:Int]
subject to the following disjunctive constraint:
define c :=
(x in [10 20] &amp; y in [1 20] &amp;
Suppose also that the desired values are x = 13, y = 15,
z = 922. The task is to find values for these variables that
satisfy c while diverging from the desired values as little as
possible. We can readily model this problem in a form that
is amenable to smt-solve-and-minimize as follows.</p>
          <p>First we define the objective-function term t as the sum of
the three differences:
define t := (?d-x:Int + ?d-y:Int + ?d-z:Int)
with the difference terms defined as follows:
define d-x-def :=
(ite (x &gt; 13) (?d-x = x - 13)</p>
          <p>(?d-x = 13 - x))
define d-y-def :=
(ite (y &gt; 15) (?d-y = y - 15)</p>
          <p>(?d-y = 15 - y))
define d-z-def :=
(ite (z &gt; 922) (?d-z = z - 922)</p>
          <p>(?d-z = 922 - z))
Here ite is a simple if-then-else procedure:
define ite :=
lambda (c x y) ((c ==&gt; x) &amp;</p>
          <p>(~ c ==&gt; y))
Assume that we do not know the exact maximum value that
t can attain, but we know that it cannot be more than 10^6.</p>
          <p>We can then solve the problem with the following call:
define diff-clauses :=</p>
          <p>(d-x-def &amp; d-y-def &amp; d-z-def)
define query := (c &amp; diff-clauses)
&gt;(smt-solve-and-minimize query t 1000000)
List: [(= ?x 13) (= ?y 15) (= ?z 800)</p>
          <p>(= ?d-x 0) (= ?d-y 0) (= ?d-z 122)]
This solution was found by a total of 8 calls to the SMT
solver (for a total time of about 100 milliseconds). Why
were only 8 calls required when we started the binary search
with a maximum of 10^6? One would expect about log 10^6
calls to the SMT solver, i.e., roughly 20 such calls. However,
our implementation uses the information returned by each
call to the SMT solver to speed up the search. That often
results in drastic shortcuts, cutting down the total number of
iterations by more than a factor of 2.</p>
          <p>
            This procedure enables our system to optimize any
quantity that can be given an integer metric. Moreover, unlike
independent branch-and-bound algorithms for integer
programming, it has the advantage that it allows not just for
numeric constraints, but for arbitrary boolean structure as well,
along with constraints from other theories such as reals, lists,
arrays, bit vectors, inductive datatypes, etc. While more
sophisticated approaches to optimization in the SMT paradigm
are beginning to emerge (e.g., [
            <xref ref-type="bibr" rid="ref18 ref2">2</xref>
            ]), the implementation
described here has been quite adequate for our purposes.
          </p>
          <p>PERFORMANCE</p>
          <p>In order to test our system we wrote an Athena program
that generates test instances, with two independently
controllable variables: parameter number and policy number.</p>
          <p>In particular, we defined a procedure make-policy-set with
two parameters, param-num and policy-num. The output is
a list of policy-num policies, where each policy involves
param-num transmission parameters. The latter came from
a pre-declared pool of 100 transmission parameters, half of
them integer-valued and the other half real-valued. We
distinguished the following types of policies:
permissive policies, which have only an AC and no DC;
prohibitive policies, which have only a DC and no AC;
mixed policies, which have both.</p>
          <p>Ordering/N policies. These policies are
parameterized over N &gt; 0. Specifically, an ordering/N policy is
one whose AC and/or DC is of the following form:
(forall ?t:Transmission .</p>
          <p>D ?t &lt;==&gt;
(f_i1 ?t R_1 c_i1) &amp; ... &amp; (f_ik ?t R_k c_ik))
where D is either allow or deny, and for all j ∈ {1, ..., k}:
f_ij is a transmission parameter; R_j ∈ {&lt;, &gt;, &lt;=, &gt;=};
and c_ij is a constant number of the appropriate sort. We
refer to the ordering constraints (f_ij ?t R_j c_ij) as
the atoms of the AC (or DC). An ordering/N policy
may be permissive, prohibitive, or mixed, but its total
number of atoms (i.e., the number of the AC’s atoms
plus the number of the DC’s atoms) must equal N.</p>
          <p>Equational/N policies. The AC and/or DC of such a
policy is of the same form as shown above, except that
each R_j is the identity relation. These identities are the
atoms of the condition. The total number of atoms (of
the AC and DC together) must be N.</p>
          <p>Inequational/N policies. Same as above, except that
the relation in question here is inequality.</p>
          <p>Range/N policies. The AC and/or DC of such a policy
is of the same form as that of an ordering/N policy,
except that each R_j is the relation in, and each constant
c_ij is a range [a b].</p>
          <p>Disjunctive-Conjunctive/N policies. The AC and/or
DC of such a policy is a disjunction of conjunctions,
where each atom is of the form (fx ?t in [ ]).</p>
          <p>The number of atoms are required to add up to N .
among all the preceding types. Specifically, 20% of the
policies in L are ordering/N policies. About 1/3 of these
ordering/N policies are permissive, 1/3 are prohibitive, and
1/3 mixed. The next 20% of L consists of equational/N
policies, and these are again evenly split between permissive-,
prohibitive-, and mixed-policy subsets. The third 20%
contains inequational/N policies, and so forth. Once a list L of
policies is thereby obtained, we combine them all into a
single policy by folding the merge-policies operator over
L, with the empty-policy as the identity element. It is
with respect to this merged policy that we tested
transmission requests. The requests were generated randomly.</p>
          <p>Figure 1 shows the SMT-solving times for processing plain
transmission requests (i.e., without weights attached to the
various transmission parameters), for various combinations
of policy-set sizes and transmission parameter numbers, where
the policy sets are evenly mixed as described above.
Figure 2 shows the corresponding times for optimal processing
of transmission requests, i.e., requests that attach weights to
each of the transmission parameters and are solved as
Max-SAT problems. In order to stress-test the implementation
further, we repeated the experiments with sets of policies
that were more structurally complex, doing away with
simple ordering, equational, and inequational policies, and
using instead only range and disjunctive-conjunctive policies.</p>
          <p>A call of the form (make-policy-set N P) returns
a list L containing P policies, each of which involves N
transmission parameters.</p>
          <p>The corresponding results are shown in Figure 3 and in
Figure 4. With these sorts of structurally complex policies in
place, the highest increase occurs in optimal processing of
transmission requests with more than 50 policies and
parameters.</p>
          <p>The graphs show only the time spent on SMT solving,
which is the bulk of the work that needs to be done when
processing transmission requests. We do not include the time
spent on translating from Athena to SMT notation and back.</p>
          <p>This translation is a straightforward linear-time algorithm
(linear in the size of the formula to be translated, on average,
since it uses hash tables for the necessary mappings between
Athena and SMT namespaces), and the time spent on it in
most cases is negligible. For huge policies containing
hundreds of thousands of nodes, the translation does take longer,
though still typically a fraction of a second. Virtually all of
the translation cost is incurred when translating the policy,
since the transmission requests are tiny by comparison.
Observe, however, that there is no reason to be translating the
entire policy anew each time the engine needs to process a
new request. Instead, the (merged) policy can be translated
off-line, once, and subsequently cached in SMT notation in
some file. Thus, the translation time is a one-time, off-line
cost.</p>
          <p>All experiments were performed on a 2.4 GHz Intel Core
i3-370M dual-core processor with 4GB RAM, running
Windows 7. The time command of the Cygwin shell was used
to get the timing data. In most cases, the reported system
time (the sys entry of time’s output) was 0.000, i.e., too
small to be reliably measured, and hence the time reported
here is real (wall clock) time, which includes time
segments spent by other processes and times during which the
SMT solver was blocked (e.g., waiting for I/O to complete).</p>
          <p>Therefore, the times reported here are overly conservative;
the actual times spent on SMT solving are smaller.</p>
          <p>RELATED WORK &amp; CONCLUSIONS</p>
          <p>We have presented an implementation of a policy system
for dynamic spectrum access. Policies are represented and
manipulated in Athena, a formal proof system for first-order
logic with polymorphism, subsorting, inductive datatypes,
arbitrary computation, and definitional extension. Most of
these features have proven useful in the engine’s
implementation; others would become useful if the engine were to be
extended so that it used, e.g., structured ontologies instead
of flat data types.</p>
          <p>
            Previous work in this area includes BBN’s XGPLF [
            <xref ref-type="bibr" rid="ref24 ref8">8</xref>
            ] and
SRI’s Coral [
            <xref ref-type="bibr" rid="ref10 ref11 ref26 ref27 ref41">10, 11</xref>
            ]. XGPLF does not have a formal
semantics and is limited in what it can express (it is based on
OWL [
            <xref ref-type="bibr" rid="ref35">19</xref>
            ]). Moreover, XGPLF cannot model inheritance.
          </p>
          <p>
            To the best of our knowledge, the only implementation of
a policy reasoning engine based on XGPLF is the one built
by the Shared Spectrum Company [
            <xref ref-type="bibr" rid="ref12 ref28">12</xref>
            ]. They used
SWI-Prolog as the underlying reasoning engine. In field tests
using resource-limited radio devices, SSC replaced the Prolog
reasoner with “a simplified reasoner developed in C/C++”
[12, p. 504]; no details are provided regarding the design,
implementation, or correctness of this simplified reasoner.
          </p>
          <p>SSC’s engine can only return yes/no answers to transmission
requests.</p>
          <p>
            Coral (Cognitive Radio Policy Language) is a new
language specifically designed for expressing policies
governing the behavior of cognitive radios. Like Athena, Coral
offers a rich and extensible first-order logical language that
includes arithmetic and allows for the introduction and
definition of new function symbols. It also features subtyping
and can therefore express inheritance and ontologies. Some
notable differences from Athena include: (a) Unlike Coral,
Athena has polymorphism. Thus, e.g., it is not necessary
to introduce lists for integers and lists for booleans
separately; a generic parameterized definition is sufficient. (b)
Athena has automatic sort inference based on the
Hindley-Milner algorithm, which is convenient in practice because
it allows for shorter—and usually cleaner—specifications.
(c) Athena has a general-purpose formally defined
programming language that can seamlessly interact with its logical
layer, and which can be used to dynamically construct and
manipulate logical formulas very succinctly. This
programming language offers procedural abstraction, as well as side
effects through mutation, including reference cells and
vectors, features which are often important for efficiency. The
ability to compute with terms and sentences as primitive data
values was very useful. While Coral can express some
computations via universally quantified identities which can then
be interpreted as executable rewrite rules by a tool such as
Maude [
            <xref ref-type="bibr" rid="ref16 ref32">16</xref>
            ], it does not offer procedural abstraction, i.e., it
is not possible to define arbitrary procedures.
          </p>
          <p>
            Two policy engines based on Coral have been implemented
[
            <xref ref-type="bibr" rid="ref10 ref11 ref26 ref27 ref41">10, 11</xref>
            ]. The initial implementation used Prolog as the
underlying reasoning engine, and was only able to return yes/no
answers to transmission requests. The second (and current)
implementation uses a custom-made proof system to
reason about transmission requests. The system has four kinds
of proof rules: forward chaining, backward chaining,
partial evaluation based on conditional rewriting, and constraint
propagation and simplification. The proof system is
implemented in the rewriting engine Maude. One issue with
this implementation is that a positive answer is given only
if the transmission request is an exact match of the
operative policy conditions. But that is not likely to be the case
in practice. Most transmission requests are likely to be
incomplete or to diverge from the policy, at least in some small
degree. In such cases, the Coral implementation does not
return values for the relevant transmission parameters. Rather,
it returns an arbitrarily complicated first-order formula
representing all the possible “opportunity constraints” that the
radio could use to modify its request so as to make it
permissible. But that is not likely to be of much use to the
requesting party. It is not realistic to require that whoever
made the transmission request should be able to understand
and reason about arbitrarily complicated logical formulas in
order to understand the policy system’s output.
          </p>
          <p>A second issue with this implementation is efficiency. The
Coral team has not published precise benchmarks describing
their engine’s performance for variable numbers and types of
policies (and for variable numbers of transmission
parameters), but they have released a demo of their implementation
that can be used to evaluate transmission requests with
respect to policies that have a fixed set of transmission
parameters, namely: location (latitude and longitude); time (hours,
minutes, and seconds); sensed power; emissions; network
id; and role (slave/master). Even with this fairly small set of
transmission parameters, and with only 11 active policies in
place, it has been reported [1, p. 18] that evaluating a
single transmission request took the Coral engine 58 seconds,
and resulted in an output formula comprising 75 constraints.</p>
          <p>In the preceding section we saw that our implementation can
evaluate transmission requests with several dozen
parameters and with over 50 complex policies in place in a fraction
of a second; this is so even when the requests are processed
optimally. Moreover, the Coral engine has no notion of
optimality. It does not allow the requesting party to specify that
some parameters are more important than others, or to ask
that the request should be satisfied to the greatest extent
possible, or with as little divergence from the requested values
as possible.</p>
          <p>
            Another policy engine for spectrum access is Bresap, a
system that was recently implemented at Virginia Tech [
            <xref ref-type="bibr" rid="ref1 ref17 ref21 ref5">1,
5</xref>
            ]. Unlike the other systems discussed above, Bresap is
limited to policies that can be adequately represented as finite
Boolean functions, which it encodes as binary decision
diagrams (BDDs). Therefore, unlike Coral and our system,
Bresap is not suitable as an expressive language for
specifying policies. For instance, it has no facilities for introducing
new policy concepts and/or rules. Indeed, Bresap is not a
language. It is a system that reads certain types of policies
expressed in XML and converts them to BDDs. This is done
by assigning a distinct Boolean variable to each atomic
expression that appears in the policy. For instance, a policy
such as “allow transmission in the frequency range a ... b
if the power does not exceed m” would result in the
introduction of two distinct Boolean variables x1 and x2, with
x1 corresponding to the atom a &lt;= f &lt;= b (where f stands
for the transmission’s frequency); and with x2
corresponding to the atomic expression p &lt;= m (where p stands for the
transmission’s power). The accepting condition could then
be represented by the Boolean function x1 ∧ x2. Graph
algorithms are used to merge different policy BDDs. With a
single BDD in place representing the result of merging all
active policies, Bresap can then accept an incoming
transmission request and evaluate it. A transmission request must
be an attribute-value list of the form a1 = v1; : : : ; an = vn,
where ai is a transmission parameter (such as mode or
frequency) and vi is the corresponding desired value. Notably,
Bresap is capable of handling underspecified (incomplete)
requests. It is also capable of attaching costs to transmission
parameters (where a given cost indicates the penalty paid for
changing the value of that parameter), and then modifying
the parameter values of a transmission request in a way that
minimizes the overall cost.
          </p>
          <p>A drawback of Bresap is that the underlying policy
representation framework, that of BDDs (or propositional logic,
more generally), lacks semantics for the numeric constraints
that pervade spectrum access policies. To put it simply, BDDs
do not know arithmetic, and thus do not have the
wherewithal to “understand” that &lt; is transitive, that 1 + x is
greater than x, that 10 and 14 - 4 are the same object, and
so on. Whereas an SMT solver immediately realizes that
f &lt; 10 conjoined with f &gt;= 30 is contradictory, because
it uses a dedicated reasoning procedure for arithmetic, in
the approach of Bresap these two atoms would result in the
introduction of two Boolean variables, x1 and x2, the
former representing f &lt; 10 and the latter representing
f &gt;= 30. Thus, the conjunction of x1 and x2 is represented by
the innocuous-looking Boolean function x1 ∧ x2, a perfectly
consistent condition. To represent the fact that in the present
context this is actually inconsistent, one has to append to it
ad hoc clauses such as C = (¬x1 ∨ ¬x2) ∧ (x1 ∨ x2). This
extra information, C, conjoined with x1 ∧ x2, would then
allow us to conclude that the condition is unsatisfiable. That is,
in fact, what Bresap does: It “semantically analyzes” the
various atomic expressions in the policy in order to generate
additional constraints—so-called “auxiliary rules” [1, section
3.5]—that prevent the engine from taking “illogical” paths
in the BDD. This approach has some disadvantages.3 First,
it is an essentially ad hoc encoding of arithmetic facts, the
generation of which should not be part of the policy engine’s
trusted computing base. Second, the additional encoded
information, even if it is polynomial in size, can significantly
blow up the resulting BDD. Other efficiency issues in
Bresap include the conversion algorithm that produces the BDD
from the atomic Boolean expressions of the policy. This
algorithm has exponential time complexity in the number of
expression variables. As a result, Bresap is not currently
able to handle policies with more than 20 atomic Boolean
expressions [1, p. 83].</p>
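          <p>The point about auxiliary rules can be made concrete with a small Python check (a hypothetical illustration, not Bresap code): propositionally, a variable x1 standing for an atom such as f &lt; 10 and a variable x2 standing for a conflicting atom such as f &gt;= 30 can both be true, and only an explicitly supplied auxiliary clause rules the combination out.</p>

```python
from itertools import product

def satisfiable(formula):
    """Check a two-variable propositional formula by truth-table search."""
    return any(formula(x1, x2) for x1, x2 in product([False, True], repeat=2))

# x1 encodes one arithmetic atom; x2 encodes an atom that contradicts it.
# The propositional encoding alone "does not know arithmetic".
conj = lambda x1, x2: x1 and x2              # the policy condition x1 AND x2
aux = lambda x1, x2: not x1 or not x2        # auxiliary rule: NOT x1 OR NOT x2

print(satisfiable(conj))                                    # True: no conflict visible
print(satisfiable(lambda a, b: conj(a, b) and aux(a, b)))   # False: conflict after the rule
```

          <p>An SMT solver reaches the second answer directly through its arithmetic theory; the Boolean encoding needs the auxiliary clause supplied externally, which is the trusted-computing-base concern raised above.</p>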
          <p>In conclusion, we believe that the combination of a
programmable and expressive formal framework such as Athena
with an efficient SMT solver is a highly suitable
implementation vehicle for spectrum access policy engines. It
combines rich expressivity with efficient performance, a
consideration that is likely to be crucial for cognitive radios. To
the best of our knowledge, SMT solvers have not been used
before for reasoning about policies, although we believe that
they are ideal for this task. Indeed, we believe that there
is not much that is special about spectrum access, and that
the same approach we have introduced here could be used to
represent and reason about policies in other domains.</p>
          <p>3. Note that this criticism is not peculiar to Bresap. It applies to all
policy engines that represent semantically rich policies (requiring
at least linear arithmetic) as Boolean functions.</p>
          <p>Data Structure Choices for On-the-Fly Model
Checking of Real-Time Systems</p>
          <p>Peter Fontana</p>
          <p>Department of Computer Science
University of Maryland, College Park</p>
          <p>Email: pfontana@cs.umd.edu</p>
          <p>Rance Cleaveland</p>
          <p>Department of Computer Science
University of Maryland, College Park</p>
          <p>Email: rance@cs.umd.edu</p>
          <p>Abstract—This paper studies the performance of sparse-matrix-based
data structures to represent clock zones (convex sets of clock values)
in an on-the-fly predicate equation system model checker for timed
automata. We analyze the impact of replacing the dense difference
bound matrix (DBM) with the linked-list CRDZone and the array-list
CRDArray data structure. From analysis of the example-by-example
differences in time performance, we infer the DBM is either
competitive with or slightly faster than the CRDZone, and both perform
faster than the CRDArray. Using similar analysis on space performance,
we infer the CRDZone takes the least space, and the DBM takes less
space than the CRDArray.</p>
          <p>I. INTRODUCTION</p>
          <p>Automatic verification of real-time systems is undertaken
using notations for verifiable programs and checkable
specifications (see [<xref ref-type="bibr" rid="ref1 ref17">1</xref>]–[<xref ref-type="bibr" rid="ref22 ref6">6</xref>]). One common program
notation is timed automata [<xref ref-type="bibr" rid="ref23 ref7">7</xref>]. There are specification
notations such as timed computation tree logic (TCTL)
[<xref ref-type="bibr" rid="ref1 ref17">1</xref>], [<xref ref-type="bibr" rid="ref24 ref8">8</xref>] and timed extensions of a modal mu-calculus,
including one in [<xref ref-type="bibr" rid="ref19 ref3">3</xref>] and another given in [<xref ref-type="bibr" rid="ref21 ref5">5</xref>]. Specifications
in a timed modal mu-calculus can be written as lists
of equations, known as timed modal equation systems
[<xref ref-type="bibr" rid="ref21 ref5">5</xref>], [<xref ref-type="bibr" rid="ref22 ref6">6</xref>], [<xref ref-type="bibr" rid="ref25 ref9">9</xref>]. For information on the untimed modal
mu-calculi, see [<xref ref-type="bibr" rid="ref10 ref26 ref41">10</xref>]–[<xref ref-type="bibr" rid="ref12 ref28">12</xref>], and see [<xref ref-type="bibr" rid="ref10 ref26 ref41">10</xref>], [<xref ref-type="bibr" rid="ref11 ref27">11</xref>] for information
on modal equation systems.</p>
          <p>One approach to model checking timed automata
with timed modal mu-calculus specifications is to use
predicate equation systems (PES), which were invented
independently by Groote and Willemse (as parameterized
boolean equation systems) [<xref ref-type="bibr" rid="ref13 ref29">13</xref>] and by Zhang and
Cleaveland [<xref ref-type="bibr" rid="ref22 ref6">6</xref>], [<xref ref-type="bibr" rid="ref25 ref9">9</xref>]. Predicate equation systems provide
a general framework for program models including parametric
timed automata [<xref ref-type="bibr" rid="ref22 ref6">6</xref>] and Presburger systems [<xref ref-type="bibr" rid="ref14 ref30">14</xref>].
They also admit a natural on-the-fly approach to model
checking based on proof search: a formula corresponding
to the assertion that a timed automaton satisfies a
mu-calculus property can be checked in a goal-driven
fashion to determine its truth. Zhang and Cleaveland [<xref ref-type="bibr" rid="ref22 ref6">6</xref>]
demonstrated the efficiency of this approach vis-à-vis
other real-time model-checking approaches.</p>
          <p>In this paper we consider the special model checking
case of timed automata and timed modal equation
systems representing safety properties (also studied in
[<xref ref-type="bibr" rid="ref22 ref6">6</xref>]), for which there are still many opportunities for
performance improvements. One component of such a
model checker that has a noticeable influence on
performance is the data structure for the sets of clock values.
When analyzing safety properties, each desired set of
clock values forms a convex set of clock values, or clock
zone (see Definition 3). The conventional way to store
a clock zone is as a difference bound matrix (DBM)
(see Definition 4) [<xref ref-type="bibr" rid="ref15 ref31">15</xref>], which stores the constraints as
a matrix. This approach is used by UPPAAL [<xref ref-type="bibr" rid="ref16 ref32">16</xref>] and
described in [<xref ref-type="bibr" rid="ref33">17</xref>]. To potentially save space and time,
instead of representing the set of constraints as a matrix,
one can represent the set as an ordered linked path of
constraints where any clock difference not on the path
has the implicit constraint &lt; ∞. If we generalize this
to allow for a union of zones to be represented by a
directed graph of constraints (representing a tree of paths
as opposed to a single path), we get a clock restriction
diagram (CRD) [<xref ref-type="bibr" rid="ref34">18</xref>]. If we compress the nodes to have
them represent upper and lower bound constraints as well
as explicitly encoding both valid and invalid paths, we
get a clock difference diagram (CDD) [<xref ref-type="bibr" rid="ref18 ref2">2</xref>]. These two
structures are extensions of binary decision diagrams
(BDDs) (see [<xref ref-type="bibr" rid="ref35">19</xref>] for information).</p>
          <p>To improve performance, we take the above idea of
a linked implementation and incorporate the sparseness
of the implementation of CRDs while simplifying (or
Research supported by NSF Grant CCF-0820072. shrinking) the structure to only support a single clock
zone (CRDs and CDDs in general can encode unions of
clock zones). This simplified structure is a sparse sorted
linked-list implementation of a DBM, the CRDZone (see
Definition 5). We also implement an array-list version
of the CRDZone, the CRDArray (see Definition 6). A
CRDZone may be seen as a sparse sorted linked-list
implementation of a DBM, and the CRDArray a sparse
array-list implementation of the CRDZone. We examine
the time and space performance of all three clock zone
implementations: the matrix DBM, linked-list CRDZone
and array-list CRDArray.
          </p>
          <p>The contributions of this paper are:</p>
          <p>We run experiments comparing time and space
performance of a model checker (on safety
(reachability) properties) with the DBM, CRDZone and
CRDArray data structure implementations.</p>
          <p>
            We formalize and extend the analysis style
performed in the model checking experiments of [
            <xref ref-type="bibr" rid="ref18 ref2">2</xref>
            ],
[
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ], [
            <xref ref-type="bibr" rid="ref25 ref9">9</xref>
            ], [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ], [
            <xref ref-type="bibr" rid="ref36">20</xref>
            ], [
            <xref ref-type="bibr" rid="ref37">21</xref>
            ] by utilizing paired data
(each implementation checked the same examples)
and applying descriptive statistics on the paired
example-by-example differences on time and space
consumption. See Section VI for details on the
statistics and Section VI-B for the analysis.
          </p>
          <p>After analyzing the experimental results, for time performance we infer that the DBM is either competitive with or slightly faster than the CRDZone, and that both perform faster than the CRDArray for the examples in this experiment.</p>
          <p>In terms of space, we infer that the CRDZone takes up the least space, and that the DBM takes less space than the CRDArray for the examples in this experiment.</p>
          <p>II. PROGRAM MODEL AND SPECIFICATIONS</p>
          <p>A. Timed Automata</p>
          <p>
            A timed automaton encodes the behavior of a real-time
system [
            <xref ref-type="bibr" rid="ref23 ref7">7</xref>
            ], [
            <xref ref-type="bibr" rid="ref38">22</xref>
            ].
          </p>
          <p>Definition 1 (Clock constraint φ ∈ Φ(CX)). Given a set of clocks CX, a clock constraint φ is constructed with the following grammar, where xi is a clock and c ∈ ℤ:
φ ::= xi &lt; c | xi ≤ c | xi = c | xi ≥ c | xi &gt; c | φ ∧ φ.
Φ(CX) is the set of all possible clock constraints.</p>
          <p>Definition 2 (Timed automaton). A timed automaton TA = (L, L0, Σ, CX, I, E) is a tuple where:</p>
          <p>L is a finite set of locations with the initial set of locations L0 ⊆ L.</p>
          <p>Σ is the set of actions and CX is the set of clocks.</p>
          <p>I : L → Φ(CX) gives a clock constraint for each location l. I(l) is called the invariant of l.</p>
          <p>E ⊆ L × Σ × Φ(CX) × 2^CX × L is the set of edges. In an edge e = (l, a, φ, Y, l′) from l to l′ with action a, φ ∈ Φ(CX) is the guard of e, and Y represents the set of clocks to reset to 0.</p>
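          <p>To make Definition 2 concrete, the tuple (L, L0, Σ, CX, I, E) can be rendered as a small data structure. The sketch below is our own illustration, not code from the PES checker: the names (holds, Edge, TimedAutomaton, fire) are hypothetical, and clock constraints are kept as conjunctions of (clock, op, constant) atoms.</p>
          <p>
```python
from dataclasses import dataclass

# A clock constraint is a conjunction of atoms (clock, op, constant),
# with op in {"lt", "le", "eq", "gt"}; an empty list is the constraint "true".
def holds(atoms, valuation):
    for clock, op, c in atoms:
        v = valuation[clock]
        if op == "lt" and not c > v:
            return False
        if op == "le" and not c >= v:
            return False
        if op == "eq" and v != c:
            return False
        if op == "gt" and not v > c:
            return False
    return True

@dataclass
class Edge:
    src: str
    action: str
    guard: list   # conjunction of atoms: the guard of the edge
    resets: set   # clocks reset to 0 when the edge fires
    dst: str

@dataclass
class TimedAutomaton:
    locations: set
    initial: set
    actions: set     # the action set (Sigma)
    clocks: set
    invariant: dict  # location -> conjunction of atoms, I(l)
    edges: list

def fire(ta, location, valuation, edge):
    """Execute an edge whose guard holds; return the new (location, valuation)."""
    assert edge.src == location and holds(edge.guard, valuation)
    new_val = {x: (0.0 if x in edge.resets else v) for x, v in valuation.items()}
    # the destination's invariant must hold after the resets
    assert holds(ta.invariant.get(edge.dst, []), new_val)
    return edge.dst, new_val
```
          </p>
          <p>With the train of Example 1 below, firing the in edge from (1: near, [x1 = 4]) yields (2: in, [x1 = 0]), matching the edge (1: near, in, x1 = 4, {x1}, 2: in) described in the text.</p>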
          <p>
            Some sources [
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ], [
            <xref ref-type="bibr" rid="ref39">23</xref>
            ] and our PES checker allow
clock assignments (x1 := x2) in addition to clock resets
on edges, other sources [
            <xref ref-type="bibr" rid="ref33">17</xref>
            ] allow constraints on clock
differences and other sources [
            <xref ref-type="bibr" rid="ref1 ref17">1</xref>
            ] allow states to be
labelled with atomic propositions that each state satisfies.
          </p>
          <p>Timed automata use clock valuations ν ∈ V (V = CX → ℝ≥0 is the set of all clock valuations), each of which at any moment stores a non-negative real value for each clock x ∈ CX. The semantics of a timed automaton are described as an infinite-state machine, where each state is a location-valuation pair (l, ν). Transitions represent either time advances or edge executions (performing an action). For a formal definition of the semantics of a timed automaton, see [<xref ref-type="bibr" rid="ref23 ref7">7</xref>].
          </p>
          <p>Example 1 (Example of timed automaton). Consider the timed automaton in Figure 1, which models a train in the generalized railroad crossing (GRC) protocol.</p>
          <p>Fig. 1. Timed automaton TA1, a model of a train in the generalized railroad crossing (GRC) protocol.</p>
          <p>There are three locations, 0: far (the initial location), 1: near and 2: in, with one clock x1. The actions approach, in and exit are in Σ. Location 1: near has the invariant x1 ≤ 4, while 0: far has no invariant. The edge (1: near, in, x1 = 4, {x1}, 2: in) has the guard x1 = 4 and resets x1 to 0.</p>
          <p>B. Modal Equation Systems (MES)</p>
          <p>
            We use a modal equation system (MES) to represent
real-time temporal properties that timed automata can
possess. A MES is an ordered list of equations with
variables on the left hand side and basic timed temporal
logical formulae on the right. Each equation involves a variable X, a basic formula and a greatest fixpoint (ν) or a least fixpoint (μ), and each equation is labeled with its fixpoint (X =ν … or X =μ …). For a formal definition of MES syntax and semantics, see [
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ], [
            <xref ref-type="bibr" rid="ref25 ref9">9</xref>
            ].
          </p>
          <p>Example 2 (Continuation of Example 1). Again consider
the timed automaton in Figure 1. The MES</p>
          <p>X1 =ν far ∧ ∀([ ](X1)) (1)</p>
          <p>represents the safety property “the train is always in state 0: far”, read as “the variable X1 is the greatest fixpoint of being in state 0: far and for all time advances (∀), for all next actions ([ ]), X1 is true.” This is an invalid specification for the timed automaton because the execution</p>
          <p>(0: far, [x1 = 0]) →2.5 (0: far, [x1 = 2.5]) →approach (1: near, [x1 = 0]) →2 ⋯ (2)</p>
          <p>reaches location 1: near and thus violates the property.</p>
          <p>III. DATA STRUCTURES FOR CLOCK VALUE SETS</p>
          <p>A. Clock Zones</p>
          <p>
            This definition of a clock zone is taken from [
            <xref ref-type="bibr" rid="ref23 ref7">7</xref>
            ], [
            <xref ref-type="bibr" rid="ref35">19</xref>
            ].
          </p>
          <p>Definition 3 (Clock zone). A clock zone is a conjunction of clock inequalities; the set of clock valuations it describes is convex. Each clock zone can be constructed using the following grammar, where xi and xj are arbitrary clocks and c ∈ ℤ:</p>
          <p>Z ::= xi &lt; c | xi ≤ c | c &lt; xi | c ≤ xi | xi − xj &lt; c | xi − xj ≤ c | Z ∧ Z</p>
          <p>Clock zones extend clock constraints with inequalities
of clock differences. These inequalities are used for
model checking even though clock difference
inequalities are not used in timed automata. Moreover, in general,
the representation of a clock zone is not unique.</p>
          <p>Example 3. Let z = 1 ≤ x1 &lt; 3 ∧ 0 ≤ x2 ≤ 5. There is the implicitly encoded constraint x2 − x1 ≤ 4. To see this, consider the longer path of constraints (x0 is a dummy clock that always has value 0), where x2 − x0 ≤ 5 encodes x2 ≤ 5 and x0 − x1 ≤ −1 encodes 1 ≤ x1:</p>
          <p>(x2 − x0 ≤ 5) + (x0 − x1 ≤ −1) ⟹ x2 − x1 ≤ 4</p>
          <p>
            To provide a standardized, or canonical, form for clock
zone representations, we use shortest path closure [
            <xref ref-type="bibr" rid="ref33">17</xref>
            ].
          </p>
          <p>
            This form makes every implicit constraint explicit. This
can be implemented in O(n³) time using the Floyd-Warshall all-pairs shortest-path algorithm, described in [
            <xref ref-type="bibr" rid="ref40">24</xref>
            ], [25].
          </p>
          <p>
            Other standard forms exist [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ], [
            <xref ref-type="bibr" rid="ref36">20</xref>
            ].
          </p>
          <p>While converting to a canonical form takes a
considerable amount of time, it is needed to simplify and
standardize the algorithms for the zone operations
including time successor (succ(z)) computations and subset
checks. For time successor, having the zone in canonical
form allows the time elapse operation to simply set all
single-clock upper bound constraints to &lt; ∞.</p>
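          <p>The canonicalization and time-successor operations just described can be sketched over a matrix DBM. This is a hedged illustration, not the paper's implementation: bounds are stored as (value, strict) pairs, INF plays the role of the implicit bound &lt; ∞, index 0 holds the dummy clock x0 of Definition 4, and the helper names are ours.</p>
          <p>
```python
import math

INF = (math.inf, True)   # the implicit bound: difference strictly below infinity

def add_bounds(a, b):
    # Sum of two upper bounds; the result is strict if either summand is.
    return (a[0] + b[0], a[1] or b[1])

def tighter(a, b):
    # Return the tighter (smaller) of two upper bounds.
    if b[0] > a[0]:
        return a
    if a[0] > b[0]:
        return b
    return a if a[1] else b   # equal values: the strict bound is tighter

def canonicalize(dbm):
    """Floyd-Warshall shortest path closure: make every implicit bound explicit."""
    n = len(dbm)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                dbm[i][j] = tighter(dbm[i][j], add_bounds(dbm[i][k], dbm[k][j]))
    return dbm

def time_successor(dbm):
    """succ(z) on a canonical DBM: make every single-clock upper bound unbounded."""
    for i in range(1, len(dbm)):
        dbm[i][0] = INF
    return dbm
```
          </p>
          <p>Running canonicalize on the DBM of the zone in Example 3 makes the implicit bound x2 − x1 ≤ 4 explicit, exactly the derivation shown there.</p>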
          <p>B. Clock Zone Data Structures: Difference Bound
Matrix (DBM), CRDZone and CRDArray</p>
          <p>
            One way to represent a clock zone is a difference
bound matrix (DBM), described in Definition 4. See [
            <xref ref-type="bibr" rid="ref15 ref31">15</xref>
            ],
[
            <xref ref-type="bibr" rid="ref33">17</xref>
            ] for a more thorough description.
          </p>
          <p>Definition 4 (Difference bound matrix (DBM)). Let n ≥ 1 be the number of clocks. A DBM is an (n + 1) × (n + 1) matrix where entry (i, j) is the upper bound of the clock constraint xi − xj, represented as xi − xj ≤ uij or xi − xj &lt; uij. The 0th index is reserved for a dummy clock x0, which is always 0, allowing bounds on single clocks to be represented by the clock differences xi − x0 and x0 − xj. See Figure 2 for a picture of the DBM structure and Example 4 for a concrete example.</p>
          <p>Definition 5 (CRDZone). A CRDZone is a sparse sorted linked-list representation of a clock zone. Each constraint is encoded like a constraint in a DBM, as an upper bound constraint on xi − xj, labeled as (i, j), with x0 always being 0. The CRDZone has these properties:
1) Nodes are in lexicographical order of clock constraint: (i1, j1) ≺ (i2, j2) iff i1 &lt; i2 or (i1 = i2 and j1 &lt; j2).
2) The (0, 0) node is always given, to ensure a universal head node, with an initial value of x0 − x0 ≤ 0.
3) If a CRDZone node (i, j) is missing, then the zone has an implicit constraint: (i, i): xi − xi ≤ 0 and (i, j), i ≠ j: xi − xj &lt; ∞.</p>
          <p>See Figure 3 for a picture of the CRDZone structure
and Example 4 for a concrete example.</p>
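          <p>The three properties of Definition 5 can be sketched as a small linked-list class. This is our own hypothetical illustration, not the paper's code: bounds are (value, strict) pairs, lookup of a missing node returns the implicit constraint of property 3, and insert maintains the lexicographical order of property 1 behind the universal (0, 0) head of property 2.</p>
          <p>
```python
import math

INF = (math.inf, True)

class Node:
    def __init__(self, key, bound, nxt=None):
        self.key = key      # (i, j) labels the constraint on x_i - x_j
        self.bound = bound  # upper bound as (value, strict)
        self.nxt = nxt

class CRDZone:
    def __init__(self):
        # Property 2: universal head node (0, 0) with x0 - x0 bounded by 0.
        self.head = Node((0, 0), (0.0, False))

    def lookup(self, i, j):
        node = self.head
        while node is not None:
            if node.key == (i, j):
                return node.bound
            node = node.nxt
        # Property 3: implicit constraints for missing nodes.
        return (0.0, False) if i == j else INF

    def insert(self, i, j, bound):
        # Property 1: keep nodes in lexicographical order of (i, j).
        prev, node = None, self.head
        while node is not None and (i, j) > node.key:
            prev, node = node, node.nxt
        if node is not None and node.key == (i, j):
            node.bound = bound
        else:
            # prev is never None here: the head (0, 0) is the minimum key
            prev.nxt = Node((i, j), bound, node)
        return self
```
          </p>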
          <p>This lexicographical ordering is the same ordering used in CDDs and CRDs [<xref ref-type="bibr" rid="ref18 ref2">2</xref>], [<xref ref-type="bibr" rid="ref34">18</xref>].</p>
          <p>Fig. 2. DBM: a matrix with constraint xi − xj ≤ uij in entry (i, j).</p>
          <p>Using a dynamic allocation instead of our static
allocation for the CRDArray array list is conjectured to save
space at the expense of time.</p>
          <p>Example 4 (Clock zone in various representations).</p>
          <p>Consider the clock zone from Example 3, which is z = 1 ≤ x1 &lt; 3 ∧ 0 ≤ x2 ≤ 5 ∧ x2 − x1 ≤ 4.</p>
          <p>DBM representation of z (row xi, column xj, entry the upper bound on xi − xj; entries not given explicitly in z are the implicit bounds of Definition 5):
x0: (≤ 0) (≤ −1) (≤ 0)
x1: (&lt; 3) (≤ 0) (&lt; ∞)
x2: (≤ 5) (≤ 4) (≤ 0)</p>
          <p>Definition 6 (CRDArray). The CRDArray is an array list implementation of the CRDZone. Thus, instead of using linked nodes, we use an array to store the nodes, with the 0th element being the head. We statically allocate the array to hold the maximum number of elements and store a back-pointer to the back of the array list. See Figure 4 for a picture of the CRDArray structure and Example 4 for a concrete example.</p>
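          <p>For comparison with the linked CRDZone, an array-list rendering of the same idea can be sketched as follows; again a hypothetical illustration with our own names, where the array is statically allocated to the maximum number of constraints, element 0 is the (0, 0) head, and back marks one past the last used slot.</p>
          <p>
```python
import math

INF = (math.inf, True)

class CRDArray:
    def __init__(self, num_clocks):
        capacity = (num_clocks + 1) ** 2           # maximum number of constraints
        self.slots = [None] * capacity             # static allocation
        self.slots[0] = ((0, 0), (0.0, False))     # universal head element
        self.back = 1                              # back-pointer of the array list

    def lookup(self, i, j):
        for k in range(self.back):
            key, bound = self.slots[k]
            if key == (i, j):
                return bound
        return (0.0, False) if i == j else INF     # implicit constraints

    def insert(self, i, j, bound):
        k = 0
        while k != self.back and (i, j) > self.slots[k][0]:
            k += 1
        if k != self.back and self.slots[k][0] == (i, j):
            self.slots[k] = ((i, j), bound)
            return self
        # shift the tail right to keep lexicographical order
        self.slots[k + 1:self.back + 1] = self.slots[k:self.back]
        self.slots[k] = ((i, j), bound)
        self.back += 1
        return self
```
          </p>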
          <p>IV. ON-THE-FLY MODEL CHECKING: CONVERTING TO A PES AND PROOF SEARCH</p>
          <p>
            Our model checker takes in a predicate equation
system (PES) (taken from [
            <xref ref-type="bibr" rid="ref25 ref9">9</xref>
            ], [
            <xref ref-type="bibr" rid="ref13 ref29">13</xref>
            ]), which is a general
framework representing logical expressions that involve
fixpoints and first-order quantifiers. We take a timed automaton and a MES and convert them to a PES. Currently
the PES model checker can only check safety properties,
which involve only greatest fixpoints in both the PES
and the input MES. For more information on a PES,
including its syntax, semantics and how to convert a
timed automaton and a MES to a PES, see [
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ], [
            <xref ref-type="bibr" rid="ref25 ref9">9</xref>
            ].
          </p>
          <p>The model checker takes the conclusion sequent (the
sequent we wish to prove) and applies proof rules in
a recursive goal-driven tree-like fashion on the premise
sequents, trying to prove each premise sequent until it
reaches a proof rule with no premise (called a leaf) or
a circularity (a previously seen premise sequent). When checking a proof, we will often encounter circularities.</p>
          <p>In general, when the circularity reached is a greatest
fixpoint, we can stop and declare the proof branch valid.</p>
          <p>
            For the formal conditions for circularity and the proof
rules, see [
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ], [
            <xref ref-type="bibr" rid="ref25 ref9">9</xref>
            ].
          </p>
          <p>V. EXPERIMENTS: VARIOUS DATA STRUCTURE IMPLEMENTATIONS</p>
          <p>We compare the DBM implementation to the
CRDZone and CRDArray implementations. Each
implementation uses shortest path closure to compute canonical
form. The only difference in the DBM, CRDZone and
CRDArray versions is the data structure implementation.</p>
          <p>Remark 1 (On DBM vs. CRDZone and CRDArray methods). Due to the sparse implementation and removal of implicit nodes, the CRDZone and CRDArray can improve time by reducing the number of nodes.</p>
          <p>Remark 2 (On our analysis approach). We ask: what does it mean for an implementation to perform better than another? We consider better to be measured by the number (or percentage) of examples in which one system outperforms another. The larger aim is that, for any implementation, if we were to know all the examples that it would run (including and beyond the experiment examples), we would like one implementation to perform better.</p>
          <p>Fig. 3. CRDZone: a linked list with nodes in lexicographical order of constraint xi − xj ≤ uij.</p>
          <p>Fig. 4. CRDArray: an array list with nodes in lexicographical order of constraint xi − xj ≤ uij.</p>
          <p>Fig. 2. Number of merges on nodes within different height ranges that exceed the threshold.</p>
          <p>B. Combined Sweeping Methods</p>
          <p>In this section, we show experimental evidence that
supports the guidelines of combining sweeping methods
presented in Section IV. In particular, we try several
possibilities of allotting the budget to the different sweeping
algorithms with the purpose of identifying empirically if
they should be combined, and if so, in which way. In
what follows we use the “s-height N = 1” variant of
cut-sweeping since it is the fastest, and we simply refer
to it as cut sweeping.</p>
          <p>In our combined approach, SAT sweeping is always
run last since it is the only complete method of the three,
and should thus be given whatever time is left to find
equivalences not discovered by the other methods. Also,
the time not used by any of the preceding methods is
passed to SAT sweeping. For instance, if cut sweeping
is given a time budget of 4 seconds and only uses 3 of
them, SAT sweeping gets to run for one extra second.</p>
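          <p>The leftover-budget policy described above can be sketched as a small scheduler. This is our own hedged illustration, not code from the tool: each sweep is modeled as a callable that may finish under its allotted time, and unused seconds roll over to the next method, so SAT sweeping (run last) receives whatever remains.</p>
          <p>
```python
def run_sequence(methods):
    """Run sweeping methods in order; unused seconds roll over to later methods.

    `methods` is a list of (name, budget_seconds, sweep_fn), where
    sweep_fn(limit) returns (nodes_removed, seconds_used) with seconds_used
    never exceeding limit. The last entry is the complete method (SAT
    sweeping), so it absorbs all remaining time.
    """
    carry = 0.0
    removed = {}
    for name, budget, sweep_fn in methods:
        limit = budget + carry    # e.g. cut uses 3 of its 4 s, SAT gets 1 s extra
        nodes, used = sweep_fn(limit)
        removed[name] = nodes
        carry = limit - used      # leftover passed along
    return removed, carry
```
          </p>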
          <p>We compare the reduction measured in terms of the
number of AIG nodes removed, and the total time
spent in sweeping. Data are aggregated over the 818
benchmarks. The base case for our comparisons is the
pure SAT sweeping case in which SAT sweeping gets
the whole budget. The time budget used in our study is
10 seconds.</p>
          <p>We consider the following policies: (a) allocating
the budget to two methods, (b) allocating it to three
methods, and (c) allocating the whole budget to SAT
sweeping. For (a) and (b), we consider all the different
permutations of assigning integer time values to each
method, such that they sum up to 10 seconds. Note that if
a sweeping algorithm times out, what it has achieved thus
far is used in what follows. In all cases, a set of
lightweight sequential simplification algorithms are applied
before sweeping. This set of algorithms includes
cone-of-influence reduction, stuck-at-constant latch detection,
and equivalent latch detection. The total number of
AIG nodes for all 818 benchmarks measured after the
sequential simplification step is 6.1M.</p>
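          <p>The integer budget assignments considered in policies (a) and (b) can be enumerated with a short helper; a hypothetical sketch (the paper gives no code), where order matters and every method gets at least one second.</p>
          <p>
```python
def integer_splits(total, parts):
    """All ordered assignments of positive integer seconds to `parts`
    methods summing to `total` seconds."""
    if parts == 1:
        return [(total,)]
    out = []
    for first in range(1, total - parts + 2):
        for rest in integer_splits(total - first, parts - 1):
            out.append((first,) + rest)
    return out
```
          </p>
          <p>For a 10-second budget this gives 9 two-method splits and 36 three-method splits, the search space behind Table II.</p>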
          <p>Results are presented in Table II. The first column lists
the methods, where the number before each sweeping
method indicates the number of seconds given to it.</p>
          <p>The second column shows the number of AIG nodes
removed. The third column shows the total time spent in
sweeping. The methods are listed in order of decreasing
reduction. The last row is for pure SAT sweeping. We
only show the best three setups in terms of reduction for
each of the possible orders of the method sequences.</p>
          <p>Several observations can be made. First, when it comes to running two methods in sequence, BDD sweeping combined with SAT sweeping outperforms cut sweeping combined with SAT sweeping. The method that achieves maximum reduction (8 seconds of BDD sweeping followed by 2 seconds of SAT sweeping) removes 56K more nodes than pure SAT sweeping (7.7% more reduction). Second, more reduction is achievable by running three methods in sequence. As suggested in Section IV, ordering the methods by increasing effort (or equivalently by increasing degree of completeness) achieves more reduction than otherwise. Here, the best method (4 seconds, 5 seconds, and 1 second for cut, BDD and SAT sweeping, respectively) has an edge of 65K nodes over pure SAT sweeping (about 8.9% more reduction). Third, in terms of sweeping time, it is clear that a large drop occurs (&gt; 100 seconds) when two or three methods are combined versus pure SAT sweeping, which is due to the often smaller time needed by BDD and cut sweeping to discover equivalences than SAT sweeping. Given an overall model checking budget, smaller sweeping time allows more time for the model checking engine, which is desirable.</p>
          <p>TABLE II. EFFECT OF BUDGET ALLOCATION ON REDUCTION.</p>
          <p>The question of whether such a difference has a considerable effect on the performance of the model checking engine is answered in the next section.</p>
          <p>C. Effect on ic3</p>
          <p>The recently developed model checking algorithm, ic3 [<xref ref-type="bibr" rid="ref1 ref17">1</xref>], has been regarded as the best standalone model checking algorithm developed up till now [<xref ref-type="bibr" rid="ref16 ref32">16</xref>]. As the interaction of combinational simplification methods with different model checking algorithms has been studied in the past, we here aim to study how they interact with ic3. In particular, we would like to empirically find out if ic3 benefits from preprocessing the netlist with a simplification algorithm, and if it does, how sensitive it is to the magnitude of reductions achieved through simplification.</p>
          <p>In the first experiment, we compare two runs of ic3, one that is preceded by SAT sweeping, and one that is not. The experimental setup is as follows. A total timeout of 10 minutes is used. The budget for SAT sweeping is 10 seconds. The light-weight sequential simplification algorithms referred to in Section V-B are applied once in the no-sweeping case, and twice (before and after sweeping) in the sweeping case. We compare the number of solves and the aggregate runtime among all benchmarks.</p>
          <p>It is important to note that the ic3 algorithm has a random flavor. In particular, the order by which generalization (dropping literals) is attempted is randomized. Also, since the algorithm is SAT-based, randomization occurs in the SAT solver decisions. To have reliable experimental results, each experiment is repeated with 10 different seeds, and the results are averaged over the different seeds.</p>
          <p>Results are shown in Table III. The first column shows the seed index, the second and third columns show the number of solves without and with sweeping, and the fourth and fifth columns show the aggregate runtime without and with sweeping.</p>
          <p>The results confirm a positive effect of sweeping on the performance of ic3. On average, five more solves are achieved with sweeping, and the aggregate runtime drops by 3.8%.</p>
          <p>The enhancement in the performance of ic3 in the presence of sweeping can be attributed to two factors. First, reduction in the number of gates caused by sweeping can result in a reduction in SAT solver time. Second, simplification often results in the dropping of latches (e.g., if it merges the next-state functions of two latches). For example, for the benchmark set used in our experiments, sweeping reduces the aggregate number of latches from 279,161 to 269,091 (3.6% decrease). This reduces the amount of work done by ic3 in generalization of counterexamples to induction.</p>
          <p>We now repeat the previous experiment, but this time we compare the number of solves and the aggregate runtime between pure SAT sweeping and the empirically optimum combined sweeping scheme of Section V-B. The purpose of this experiment is to understand how sensitive ic3 is to the magnitude of reductions.</p>
          <p>TABLE IV. OPTIMUM SWEEPING VERSUS PURE SAT SWEEPING.</p>
          <p>Results are shown in Table IV, where the second and
third columns compare the number of solves for pure
SAT sweeping and the optimum sweeping scheme, and
the fourth and fifth columns compare the total runtime.</p>
          <p>As the results indicate, ic3 does not benefit much
from the better reduction achieved by the combined
sweeping scheme. The lack of performance enhancement
on ic3 can be attributed to the small improvement
in reduction the combined sweeping approach achieves
over pure SAT sweeping. In particular, while pure SAT
sweeping removes 737K nodes out of the total 6.1M
nodes in the 818 benchmarks (12.1% reduction), the
combined approach removes 802K nodes (13.2%
reduction), a mere 1.1% improvement. This suggests that ic3 has low sensitivity to the magnitude of reduction.</p>
          <p>VI. CONCLUSION</p>
          <p>In this paper, we presented an empirical study of
the different sweeping methods proposed in the past.</p>
          <p>We have shown that a combined sweeping approach
outperforms any of the sweeping methods alone. We
have also proposed a BDD-based cut sweeping method
that is more effective than the original cut sweeping.</p>
          <p>Finally, we have studied the effect of sweeping on the
new model checking algorithm, ic3, and investigated
the causes of the better performance it experiences with
sweeping. The goal of this analysis is to help designers
of model checkers to make decisions regarding the
incorporation of sweeping methods and to provide a
deeper understanding of how sweeping methods interact
with ic3.</p>
          <p>ACKNOWLEDGEMENTS</p>
          <p>We thank Aaron Bradley who motivated this work and contributed to many discussions. We also thank the reviewers for their insightful comments regarding cut sweeping’s pruning heuristics that prompted us to try the “combined-2” heuristic.</p>
          <p>Enhancing ABC for LTL Stabilization Verification of SystemVerilog/VHDL Models</p>
          <p>Jiang Long, Sayak Ray, Baruch Sterin, Alan Mishchenko, Robert Brayton</p>
          <p>Berkeley Verification and Synthesis Research Center (BVSRC)</p>
          <p>Department of EECS, University of California, Berkeley
{jlong, sayak, sterin, alanmi, brayton}@eecs.berkeley.edu</p>
          <p>Abstract—We describe a tool which combines a commercial
front-end with a version of the model checker, ABC, enhanced to
handle a subset of LTL properties. Our tool, VeriABC, provides
a solution at the RTL level and produces models for synthesis
and formal verification purposes. We use Verific (a commercial
software) as the generic parser platform for SystemVerilog and
VHDL designs. VeriABC traverses the Verific netlist database
structure and produces a formal model in the AIGER format.</p>
          <p>LTL can be specified using SVA 2009 constructs that are
processed by Verific. VeriABC traverses the resulting SVA parse
trees and produces equivalent LTL formulae using the F,G, Until
and X operators. The model checker in ABC has been enhanced to handle LTL stabilization properties, an important subset of
LTL. The paper presents VeriABC’s implementation strategy,
software architecture, tool flow, environment setup for formal
verification, use model, the specification of properties in SVA
and translation into LTL. Finally the properties are translated
into safety properties that can be verified by the ABC model
checker.</p>
          <p>I. INTRODUCTION</p>
          <p>We present an integrated tool flow for liveness model
checking using industry hardware description languages (HDLs) and
SystemVerilog Assertions: (i) VeriABC: a front-end to read in
hardware models expressed in HDLs, and (ii) the capability of
model checking a subset of liveness properties. VeriABC is
able to read in hardware models expressed in SystemVerilog
or VHDL. SystemVerilog and VHDL languages are the most
popular HDLs being used in industry today for digital designs.</p>
          <p>
            VeriABC generates a formal model in the AIGER[
            <xref ref-type="bibr" rid="ref18 ref2">2</xref>
            ] format
and relies on a commercial front-end, Verific, to build a generic
parser platform for HDLs. This allows down-stream tool flows
in synthesis and verification. A version of ABC was enhanced
from a safety-only verification engine to allow both safety and
liveness verification. Our current version supports a particular
subset of liveness properties called stabilization properties or
generalized fairness constraints (defined in Section IV).
          </p>
          <p>In a typical use model, a user will develop a hardware
design in SystemVerilog or VHDL, and specify its correctness
requirements in the property specification language
SystemVerilog Assertion (SVA). SVA has been adopted into the IEEE
SystemVerilog standard and is supported by major
commercial tools in simulation, synthesis and verification. The SVA
language in SystemVerilog 2009 standard contains liveness
constructs that allow full specification of liveness properties
as those defined in LTL formulas. In our framework, a user
can specify both safety properties and liveness properties
(stabilization properties, to be precise). In this paper, we detail
its liveness capabilities.</p>
          <p>
            After reading in a design, VeriABC bit-blasts it into a
bit-level netlist and converts the SVA stabilization properties into
an intermediate LTL representation. Then the LTL properties
are folded into the bit-level netlist in an appropriate way
(using an extended liveness-to-safety conversion, explained
later in Section IV). The resulting bit-level netlist represents
a formal model of the design, represented as an and-inverter
graph (AIG). And-inverter graphs are concise representations
of finite state machines. The AIGER[
            <xref ref-type="bibr" rid="ref18 ref2">2</xref>
            ] format is a prevalent
format for AIG representation, supported by various academic
tools and used in annual hardware model checking
competitions. Since our liveness verification is based on
liveness-tosafety conversion, eventually the safety verification backend in
ABC[
            <xref ref-type="bibr" rid="ref1 ref17">1</xref>
            ] is called which works on the bit-level netlist produced
by the VeriABC front-end. We have used this methodology
to verify liveness in the context of compositional verification
of deadlock freedom of micro-architectural communication
fabrics. Preliminary experimental results are encouraging.
          </p>
          <p>A. Related work</p>
          <p>
Although parsing and elaborating RTL languages is a standard practice for commercial EDA products, it is a daunting task for academics due to language complexity and continuous language evolution over the years. vl2mv[
            <xref ref-type="bibr" rid="ref12 ref28">12</xref>
] was an academic tool that attempted to treat a significant subset of the Verilog language for synthesis and verification purposes, but it was not maintained and its language support was not complete. Tools
like ABC[
            <xref ref-type="bibr" rid="ref1 ref17">1</xref>
            ], VIS[
            <xref ref-type="bibr" rid="ref25 ref9">9</xref>
] parse subsets of the Verilog language that are too restrictive to be applicable in a broad setting. Freely accessible
tools like icarus[
            <xref ref-type="bibr" rid="ref19 ref3">3</xref>
] contain a Verilog front-end but are
not up-to-date with newer SystemVerilog features.
          </p>
          <p>Our choice of a commercial and stable front-end, Verific, allows academics to get around the language barrier and access real-world designs.</p>
          <p>
            Liveness-to-safety conversion was first proposed in [
            <xref ref-type="bibr" rid="ref23 ref7">7</xref>
            ],
[
            <xref ref-type="bibr" rid="ref40">24</xref>
]. They demonstrated that the verification problem for any
ω-regular property can be converted into an equisatisfiable
safety verification problem. Their algorithm
has been deployed successfully in industrial setups and used
to verify liveness properties of microprocessor designs [
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ].
          </p>
          <p>
            Our liveness-to-safety conversion algorithm for stabilization
is essentially an extension of the algorithm proposed in [
            <xref ref-type="bibr" rid="ref40">24</xref>
            ]
and is broader than discussed in [
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ].
          </p>
          <p>B. Contribution</p>
          <p>As illustrated in Figure 1, we combine a commercial
frontend, Verific, with a version of our model checker, ABC,
enhanced to handle a subset of LTL properties. Our tool
processes the Verific output, conducts various modeling
procedures on the design, compiles SVA into LTL formulas, then
the enhanced ABC processes the LTL formulas for liveness
model checking. We detail the software architecture, tool flow,
formal model construction, SVA compilation and downstream
LTL model checking.</p>
<p>Fig. 1. Complete Tool Flow: RTL+SVA is parsed by the Verific Parser Platform into the Verific Netlist Database; VeriABC produces AIGER + LTL formulas; L2S conversion yields an AIGER safety model; the ABC backend engines return Proven or an Error Trace (for debug).</p>
<p>C. Organization of the Paper</p>
          <p>We first discuss the capabilities of the Verific parser
platform. In Section III we describe the architecture, formal
modeling of VeriABC and translation of SVA into LTL. In
Section IV the stabilization properties are described in further
detail. Section V describes the liveness-to-safety conversion
for stabilization properties. Experimental results are presented
in Section VI.</p>
          <p>II. BACKGROUND: VERIFIC PARSER PLATFORM</p>
          <p>
            Verific Design Automation[
            <xref ref-type="bibr" rid="ref20 ref4">4</xref>
            ] builds SystemVerilog and
VHDL Parser Platforms which enable its customers to develop
advanced EDA products quickly and at low cost. Verific’s
Parser Platforms are distributed as C++ source code or libraries
and build on all 32- and 64-bit Unix, Linux, and Windows
operating systems. Applications vary from formal verification
to synthesis, simulation, emulation, virtual prototyping, in-circuit
debug, and design-for-test. We chose Verific as our
front-end for its commercial success and stability in supporting
the latest language constructs in SystemVerilog.
          </p>
          <p>Figure 2 shows the architecture of the Verific parser
frontend. The main commands we use in the Verific library are analyze
and elaborate. Analyze creates parse-trees and performs
type-inferencing to resolve the meaning of identifiers. The
Parser/Analyzer supports the entire SystemVerilog IEEE 1800,
VHDL IEEE 1076-1993, and Verilog IEEE 1364-1995/2001
languages, without any restrictions. The resulting parse tree
comes with an extensive API.</p>
          <p>Elaborate supports both static elaboration and RTL
elaboration. Static elaboration elaborates the entire language, and
specifically binds instances to modules, resolves library
references, propagates defparams, unrolls generate statements,
and checks all hierarchical names and function/task calls.</p>
<p>The result after static elaboration is an elaborated parse tree,
appropriate for simulation-like applications. RTL elaboration is
limited to the synthesizable subset of the language. In addition
to the static elaboration tasks for this subset, it generates
sequential networks through flipflop and latch detection, and
Boolean extraction. The result after RTL elaboration is a netlist
database, appropriate to applications such as logic synthesis
and formal verification. This netlist database is the starting
point of VeriABC and we utilize Verific provided C++ APIs
to access the database.</p>
          <p>A. Verific Netlist Database Structure</p>
          <p>In this Section, we use Verilog terminology to present
Verific’s netlist database structures. The netlist database is
rather intuitive and adheres to the module definitions. Shown
in Table I, there is a one-to-one correspondence between the
C++ API class definitions and Verilog constructs.</p>
<p>TABLE I
Verific Database C++ API Class   Verilog RTL Objects
Netlist                          Module definition
Instance                         Module instantiation
Port                             Module port declarations
Net                              wire/reg/assign
PortRef                          Port to Net connectivity</p>
<p>A Netlist corresponds to a module definition in Verilog,
while an Instance object corresponds to a module instantiation
after the module's parameters have been characterized. An
Instance is a thin copy of the Netlist plus a pointer to its
parent netlist. A Netlist contains a set of Ports, Nets and
Instances for its internal logic structure. A Port corresponds
to the Verilog port definitions, which can be input, output
or inout. A Net is a named component, intuitively a wire.</p>
<p>PortRef contains the connectivity between a Port and a
Net. The direction of the PortRef can be in, out, or inout
depending on the type of Port it contains. Using these C++
objects, the Verific netlist database defines a directed
hyper-graph and encapsulates the following types of information:</p>
          <p>Design Hierarchy</p>
          <p>Design hierarchy is captured as an instance tree by
the parent pointers in the Instance with a top-level
netlist as the root.</p>
          <p>Unique Hierarchical Name</p>
          <p>Following the design hierarchy through the instance
tree, each internal object is assigned a unique
hierarchical name.</p>
          <p>Connectivity</p>
          <p>A directed hyper-graph is defined through P ort, N et
and P ortRef : P ort being the node, N et being the
edge, and P ortRef containing the connectivity and
direction information between pairs of a P ort and a
N et. As an edge in the hyper-graph, a N et can be
referenced in multiple P ortRef objects.</p>
          <p>Logic Definition</p>
<p>At the leaf of the design hierarchy, a Netlist of
primitive operator types such as and, or, adder,
flipflop, latch, etc., defines the basic logic operators.</p>
          <p>Recursively, the functional behavior of the design is
captured through the directed hierarchical hyper-graph with basic
logic operators at its leaf level.</p>
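<p>The hyper-graph structure just described can be sketched as follows. This is a toy Python model with hypothetical names, not the actual Verific C++ API: Ports act as nodes, Nets as hyper-edges, and each PortRef records one Port-to-Net connection together with its direction.</p>

```python
# Toy sketch of the Netlist/Port/Net/PortRef hyper-graph described above.
# Names and fields are illustrative simplifications, not the Verific API.
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    direction: str          # "in", "out", or "inout"

@dataclass
class Net:
    name: str
    refs: list = field(default_factory=list)   # PortRefs touching this edge

@dataclass
class PortRef:
    port: "Port"
    net: "Net"
    direction: str          # inherited from the Port it connects

@dataclass
class Netlist:
    name: str
    ports: list = field(default_factory=list)
    nets: list = field(default_factory=list)
    instances: list = field(default_factory=list)

    def connect(self, port, net):
        ref = PortRef(port, net, port.direction)
        net.refs.append(ref)     # a Net may appear in many PortRefs
        return ref

# One-wire example: output port 'y' drives net 'w', which input 'a' reads.
nl = Netlist("top")
y, a = Port("y", "out"), Port("a", "in")
w = Net("w")
nl.ports += [y, a]
nl.nets.append(w)
nl.connect(y, w)
nl.connect(a, w)
```

Because a Net keeps a list of PortRefs rather than a single endpoint pair, a single wire fanning out to many readers is one edge with many references, matching the hyper-graph view above.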
          <p>III. VERIABC</p>
          <p>VeriABC traverses the above netlist database and transforms
it into a monolithic AIG which can be treated as a directed
acyclic graph (DAG). The AIG contains primary input,
primary output, register nodes and and-inverter nodes. Each
named Port and Net in the Verific netlist has a mapped node
in the AIG graph. Additional book-keeping information such
as hash tables are created that map the hierarchical name to
the corresponding AIG node. The down-stream model checker
ABC then reads in this AIG to conduct formal analysis.</p>
          <p>A. Architecture</p>
<p>A hyper-graph is rather hard to traverse while conducting
analysis or transformation at the same time.</p>
          <p>As shown in Figure 3, we employ a two phase approach.</p>
          <p>First we construct an intermediate netlist graph, a DAG with
extra annotated node types representing logic structure and
connectivity of the flattened design. In addition to the simple
node types in and-inverter graphs, extra node types contain
annotations for language constructs such as tri states, flipflops
and latches etc. For example, flipflops contain set/reset pins
and driving d-pins; latches contain additional gated-clock
definitions. By language definition, a design can specify any signal
for the clock and reset signals. Design behavior is defined
by event-driven semantics. Further analysis needs to be done to
determine if there is an AIG representation that can capture the
original design semantics. The intermediate netlist is a DAG
for which various traversal algorithms can be conducted for
the later analysis. For this step, VeriABC only traverses the
design hierarchy and hyper-graph in the Verific netlist database
to gather information and construct the intermediate DAG
representation without conducting any design style checking
or transformation.</p>
          <p>The end result of VeriABC is an AIG model that is
consistent with the original RTL design semantics. AIG is
recognized as a finite state machine model. Compared to
the event-driven semantics in HDL language, the execution
semantics of an AIG is synchronous with an implicit universal
clock that ticks at every step of the execution. Each register
loads its driver value at the beginning of each step. The
semantics inferred from the Verific netlist structure are more
complex; for example, a flip-flop can have arbitrary reset logic and
an arbitrary clock network. The task in formal modeling is to transform the
above design components into AIG registers with additional
glue logic so that it maintains the consistency. In its simplest
form, the following check determines if the design can be
readily represented by an AIG.</p>
<p>All registers are clocked by the same primary input signal,
which does not fan out to other nodes.
Reset/Set signals are primary inputs.
No combinational loops.
No multiple drivers per node.</p>
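<p>One of the checks above, the absence of combinational loops, amounts to cycle detection in the combinational fan-in graph (registers break cycles and are excluded from traversal). A minimal sketch, assuming a plain dictionary representation of fan-ins of our own choosing:</p>

```python
# Detect combinational loops by depth-first search for a back edge.
# `fanins` maps each combinational node to the nodes driving it; nodes
# absent from the map (primary inputs, register outputs) end the search.

def has_combinational_loop(fanins):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {n: WHITE for n in fanins}

    def dfs(n):
        color[n] = GRAY
        for d in fanins.get(n, ()):
            if color.get(d, WHITE) == GRAY:        # back edge: a cycle
                return True
            if color.get(d, WHITE) == WHITE and d in fanins and dfs(d):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(fanins))

# y <- g1 <- {a, b} is acyclic; g1 <-> g2 is a combinational loop.
acyclic = {"y": ["g1"], "g1": ["a", "b"]}
looped  = {"g1": ["g2"], "g2": ["g1"]}
```

A real implementation would run over the intermediate netlist graph described above; the three-color DFS is the standard linear-time check.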
          <p>More complex design modeling and transformations can
be achieved by identifying certain patterns by traversing the
intermediate netlist graph. Our current implementation
supports the above form and produces a design style summary
for debugging purposes. Though capacity and performance
depend on the type of individual transformation and
analysis, for the above ones, the transformation and design style
checking is very fast, as it conducts only a few traversals of
the intermediate netlist graph. After design style checking, a
final traversal of the intermediate netlist graph generates an
AIGER file representing the formal model.</p>
          <p>C. Commands Implemented</p>
<p>The VeriABC implementation utilizes the Tcl command
interface shipped with the Verific library distribution. The following
is the list of commands implemented to manage the
environment setup for formal verification.</p>
          <p>veriabc analyze</p>
          <p>This command constructs the intermediate netlist
graph in Figure 3 and conducts design style checking.
veriabc set reset</p>
          <p>Although a reset signal can be automatically
detected in certain situations, this command provides
the user with the option to specify the length of
the reset sequence and phase of the reset condition.</p>
          <p>A user can also specify the initial value of the
registers through a VCD waveform or textual file.</p>
          <p>In the generated formal model, the initial value of
the registers will be valued at the end of the reset
sequence and the reset condition will be disabled
after reset.
veriabc set clock</p>
          <p>A user can specify the clock periods and
relationships in the situation when multi-clocks are in the
design. A phase-counter network will be created in
the formal model to generate the corresponding clock
signals.
veriabc set cutpoint</p>
<p>This command prunes the cone-of-influence at the
cutpoint signal and treats it as a free input.
veriabc set constant</p>
          <p>This command sets either an input or a cut-point
signal to a constant value.
veriabc set assume</p>
          <p>This constrains the design signal to be a constant.
veriabc write</p>
<p>Writes out the final formal model in AIGER format.</p>
          <p>The above commands give the user flexibility to model the
environment with constraints and conduct design abstractions
during verification.</p>
<p>D. SVA to LTL Compilation
Figure 4 shows the templates for matching the basic LTL
operators used in the set of stabilization properties. For stabilization
properties, in our current implementation, we restrict the p and
q to be Boolean expressions which seems to be sufficient in
practice. The SVA verification directive assume is used to
specify a fairness constraint, while assert is used to specify
the target LTL properties.</p>
          <p>property Until(p,q);</p>
          <p>p s_until q;
endproperty
property GF(p);</p>
          <p>always (s_eventually p);
endproperty
property FG(p);</p>
          <p>s_eventually (always p);
endproperty
property X(p);</p>
          <p>s_nexttime (p);
endproperty
Fig. 4. SVA Liveness Template</p>
<p>Verific also processes SVA constructs into the netlist
database structure; essentially, a corresponding parse tree is
integrated into the netlist. We conduct traversals of the parse
tree, identify specific liveness constructs and map them into
the corresponding LTL formula. At the end of the procedure,
along with the generated AIGER file, a separate file containing
the LTL formulas is generated, indicating target liveness
assertions and fairness constraints. The support signals referred
to in the LTL formulas are named output signals in the
AIGER file. Although we currently only support stabilization
properties in LTL, the full LTL language using the X, F, G, and
U operators can be specified and translated through
generation and SVA compilation at the RTL level.</p>
          <p>IV. LIVENESS MODEL CHECKING IN VERIABC</p>
          <p>
            In the FSM modeling formalism, the most intuitive notion
of stabilization states that the system will always reach a
particular state and stay there forever, no matter which state the
system started from or which path it took. Relaxing this notion
a bit, stabilization means that the system will eventually reach
and stay within a given subset of states. Also, stabilization may
denote conditions on the input and output signals of a system
when it attains a stable state. Applications of stabilization
properties have been demonstrated in [
            <xref ref-type="bibr" rid="ref14 ref30">14</xref>
            ] and [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ], to name
a few. We review some basic temporal logic terminology and
formally define stabilization properties using LTL below.
          </p>
          <p>A. LTL, model checking and stabilization property</p>
<p>In the SystemVerilog 2009 standard, a rich set of LTL operators
was added to the SVA language. The SVA properties shown in
Figure 4 are built from these operators.</p>
          <p>
Familiarity is assumed with LTL, basic model checking
algorithms, and related terminology like Kripke structures and
Büchi automata. For further details, see [
            <xref ref-type="bibr" rid="ref13 ref29">13</xref>
]. In our current
context, we use the LTL properties GFp and FGp, and thus
overview their semantics here: let π be a path in some Kripke
structure K; π ⊨ Gp means property p will hold on every
state along π; π ⊨ Fp means the property p will hold
eventually on some state along π; π ⊨ GFp means p will
hold along π infinitely often, and π ⊨ FGp means p will
eventually hold along π forever. Since the temporal operators F and
G are dual (i.e. Fp ≡ ¬G¬p), the operators FG and GF are also
dual (i.e. FGp ≡ ¬GF¬p).
          </p>
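<p>Since any counterexample of a finite-state system is a lasso (a finite prefix followed by a loop repeated forever; see Section V), GFp and FGp can be decided by inspecting only the loop: GFp holds iff p is true somewhere in the loop, and FGp iff p is true everywhere in the loop. A small sketch over Boolean-valued traces, with names of our own choosing:</p>

```python
# Decide GFp / FGp on a lasso-shaped trace of truth values of p.
# The prefix is irrelevant for both operators; only the loop matters.

def holds_GF(prefix, loop):
    """p holds infinitely often <=> p is true at some state of the loop."""
    return any(loop)

def holds_FG(prefix, loop):
    """p eventually holds forever <=> p is true at every state of the loop."""
    return all(loop)

# Example lasso: p is False once in the prefix but True throughout the loop,
# so both FGp and GFp hold on this path.
lasso = ([True, False], [True, True])
```

The duality noted above is visible here: holds_FG(prefix, loop) coincides with not holds_GF(prefix, [not p for p in loop]).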
          <p>Definition 1 (GF-atom): Any LTL formula of the form
GFp or FGp, where p is some atomic proposition or some
Boolean formula involving atomic propositions only, will be
called a ‘GF-atom’.</p>
          <p>Stabilization properties are defined as the family of LTL
formulas that are Boolean combinations of GF-atoms. Formally:</p>
<p>Definition 2 (Stabilization Property): The set of
stabilization properties is the syntactic subset of LTL defined as
follows:
any GF-atom is a stabilization property;
if φ is a stabilization property, then so is ¬φ;
if φ and ψ are stabilization properties, then so are φ ∧ ψ
and φ ∨ ψ.</p>
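<p>Definition 2 can be rendered directly as a recursive check over a small LTL AST. The tuple encoding below is our own illustration, not part of VeriABC:</p>

```python
# Check whether an LTL formula is a stabilization property per Definition 2:
# a Boolean combination (not/and/or) of GF-atoms GF(p) or FG(p), where p is
# purely propositional. Formulas are tuples ("op", operand, ...) and atomic
# propositions are plain strings.

def is_stabilization(f):
    op = f[0]
    if op in ("GF", "FG"):              # GF-atom over a Boolean expression
        return is_boolean(f[1])
    if op == "not":
        return is_stabilization(f[1])
    if op in ("and", "or"):
        return all(is_stabilization(g) for g in f[1:])
    return False                        # any other temporal operator fails

def is_boolean(p):
    if isinstance(p, str):              # atomic proposition
        return True
    return p[0] in ("not", "and", "or") and all(is_boolean(q) for q in p[1:])

# FGp ∧ FGq ⇒ FGr, written with the shorthand a ⇒ b as ¬a ∨ b:
phi = ("or", ("not", ("and", ("FG", "p"), ("FG", "q"))), ("FG", "r"))
```

As in Example 1, phi passes the check, while a liveness formula such as G(r ⇒ Fq), which uses G over a non-GF-atom body, does not.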
<p>Example 1: FGp, GFp ⇒ GFq, FGp ∧ FGq ⇒ FGr,
and FGp ⇒ FGq ∨ (FGr ∧ GFs) are stabilization
properties, where p, q, r, and s are atomic propositions or Boolean
formulas involving atomic propositions only, and a ⇒ b is the
usual shorthand for ¬a ∨ b. However, G(r ⇒ Fq) is an LTL
liveness property but not a stabilization property.</p>
          <p>Needless to say, these are all liveness properties. But not
all of them specify so-called system stabilization directly.</p>
<p>Properties like FGp and FGp ∧ FGq ⇒ FGr (or its
generalization FGp_1 ∧ ... ∧ FGp_k ⇒ FGq) are perhaps the most elementary
stabilization properties. FGp means that the system eventually
will reach a state from where p will always hold, i.e. the
system will eventually ‘stabilize’ at p. FGp ∧ FGq ⇒ FGr
means that if the system stabilizes at p and also at q (at
perhaps some other time), then it will stabilize eventually
at r. Hence, the semantics of these properties are close to the
intuitive notion of stabilization. [
            <xref ref-type="bibr" rid="ref14 ref30">14</xref>
            ] demonstrates the use
and significance of stabilization properties in the context
of biological system analysis. However, our definition of
stabilization captures a broader family of specifications. It
includes FGp ⇒ FGq ∨ (FGr ∧ GFs), which may look
contrived, but, for example, [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ] uses many such complicated
stabilization properties for compositional deadlock analysis
of micro-architectural communication fabrics. On the other
hand, our definition includes many properties not intended to
specify so-called stabilization behavior. For example, GFp or
GFp ⇒ GFq.
          </p>
          <p>
            The main motivation behind considering this broader subset
of LTL is that we offer a short-cut L2S conversion,
avoiding Büchi automaton construction, in a uniform way (due
to the duality between the FG and GF operators). The most
significant application of this class that we have encountered
is “stabilization verification”, and hence the name is coined
for the family. (This name was inspired by [
            <xref ref-type="bibr" rid="ref14 ref30">14</xref>
            ]). Thus the
L2S conversion proposed here may be applied for proving
properties beyond the context of stabilization verification (e.g.
          </p>
<p>GFp ⇒ GFq).</p>
          <p>
            The class of LTL properties defined as stabilization
properties in this paper is a very important class of temporal
properties extensively studied in the literature. It is related
to so-called fairness specifications. Operators GF and FG
are often called infinitary operators [
            <xref ref-type="bibr" rid="ref35">19</xref>
] and the symbols F∞
and G∞ are used instead, respectively [
            <xref ref-type="bibr" rid="ref15 ref31">15</xref>
            ]. The class itself
(i.e. Boolean combination of GF-atoms) has been called
general fairness constraints [
            <xref ref-type="bibr" rid="ref16 ref32">16</xref>
            ], [
            <xref ref-type="bibr" rid="ref35">19</xref>
            ]. As shown in [
            <xref ref-type="bibr" rid="ref16 ref32">16</xref>
            ],
various notions of fairness like impartiality [
            <xref ref-type="bibr" rid="ref37">21</xref>
            ], weak fairness
[
            <xref ref-type="bibr" rid="ref36">20</xref>
            ](also called justice [
            <xref ref-type="bibr" rid="ref37">21</xref>
            ]), strong fairness [
            <xref ref-type="bibr" rid="ref36">20</xref>
            ] (also called
compassion [
            <xref ref-type="bibr" rid="ref37">21</xref>
            ]), generalized fairness [
            <xref ref-type="bibr" rid="ref33">17</xref>
            ], state fairness [
            <xref ref-type="bibr" rid="ref38">22</xref>
            ]
(also known as fair choice from states [
            <xref ref-type="bibr" rid="ref39">23</xref>
            ]), limited looping
fairness [
            <xref ref-type="bibr" rid="ref21 ref5">5</xref>
            ], and fair reachability of predicate [
            <xref ref-type="bibr" rid="ref39">23</xref>
            ] can be
expressed by stabilization properties. These properties are used
to exclude “unfair” counterexamples in liveness verification
in both linear time and (fair) branching time paradigms. For
liveness verification, we usually have a liveness property (the
actual proof obligation) along with a set of fairness constraints.
          </p>
          <p>
            Liveness properties may not be stabilization properties. In that
case we may need to construct the product of the system
and the Büchi automaton of the (negation of the) liveness
property before performing the L2S conversion. Interestingly,
for many interesting applications as in [
            <xref ref-type="bibr" rid="ref14 ref30">14</xref>
            ] and [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ], the
liveness verification obligations fall entirely in the family of
stabilization properties. For these applications, the simple L2S
scheme proposed in this paper works. Note that some liveness
properties like G(request ) Fgrant) are not stabilization
properties, but also have a direct L2S conversion [
            <xref ref-type="bibr" rid="ref40">24</xref>
            ]. It is,
therefore, an interesting question under what more general
conditions a direct L2S conversion exists.
          </p>
          <p>V. L2S CONVERSION FOR STABILIZATION PROPERTIES</p>
<p>It is important to understand that any counterexample to
a liveness property of a finite-state system (which must be an
infinite trace) can be seen as a “lasso”-like configuration with a
finite handle and a finite loop. A liveness counterexample is
therefore a lasso which does not satisfy the property on the loop
but satisfies all imposed fairness constraints on the loop.</p>
          <p>In general, a liveness problem is converted to a safety
problem by adding a loop-detection logic and property-detection
logic on top of the product of the FSM of the original system
and the Büchi automaton of the property to be verified. The
loop-detection logic consists of a set of shadow registers,
comparator logic, and an ‘oracle’. The oracle saves the system
state in the shadow registers at a non-deterministically chosen
time. In all subsequent time frames, the current state of the
system is compared to the state in the shadow registers.</p>
          <p>
            Whenever these two states match, the system has completed a
loop. The non-deterministic nature of the oracle allows all such
loops to be explored. The property verification logic checks if
any of the liveness conditions are violated in any such loop
and all fairness conditions always hold in the loop. This check
is done as a safety obligation. For a more detailed exposition,
see [
            <xref ref-type="bibr" rid="ref40">24</xref>
            ].
          </p>
          <p>
            As mentioned, for some simple properties L2S conversion
can be achieved while avoiding explicit Büchi automata
construction. This is done by adding more functionality to the
property detection logic. As presented in [
            <xref ref-type="bibr" rid="ref40">24</xref>
            ], these properties
are Fp, GFp, FGp, pUq, G(r ⇒ Fq), and F(p ∧ Xq) (Table
1 of [
            <xref ref-type="bibr" rid="ref40">24</xref>
]). Figure 5 reviews this approach, depicting an L2S
converted circuit for verifying the LTL property Fp.
          </p>
          <p>
            In the next paragraph, we describe how this construction
verifies Fp. In Section V-A we explain how to extend the ideas
of Figure 5 for stabilization properties. Instead of presenting
the liveness-to-safety conversion through Kripke structure-based
representations (i.e. through representations based on explicit
state machines), we present the idea in terms of an
actual circuit construction (i.e. through symbolic representation
of the state space). Also, although we do not discuss it further,
the same mechanism handles fairness constraints, which are
always stabilization properties, so they just entail adding
additional logic to the circuit for the monitor. For Kripke
structure-based descriptions of liveness-to-safety conversion,
see [
            <xref ref-type="bibr" rid="ref40">24</xref>
            ].
          </p>
          <p>In Figure 5, save represents an additional primary input
added to the circuit. This plays the role of the ‘oracle’. When
save is asserted for the first time, the current state of the
circuit is saved in the set of shadow registers, and register
saved is set. saved thus remembers that input save has
been asserted and allows any further activity on save to
be ignored. For subsequent time frames, saved enables the
equality detection between the current state of the circuit and
the state in the shadow registers. Clearly, signal looped is
asserted iff the system has completed a loop. Signal live
remembers if the signal p has ever been asserted. The safety
property that the circuit verifies is, therefore, looped ⇒
live. (In general this would be looped ∧ fair ⇒ live.)
The block marked with “*” represents this logical implication
- the direction of the arrow distinguishes the antecedent signal
from the consequent signal of the implication.</p>
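<p>The construction of Figure 5 can be sketched behaviorally. The Python below is our own simplification: it replays a finite trace instead of building circuit logic, mimicking the save/saved/looped/live signals and checking the safety obligation looped ⇒ live:</p>

```python
# Behavioral sketch of the L2S monitor for Fp: the oracle input `save`
# snapshots the state into shadow registers once; `looped` fires when the
# current state revisits the snapshot; `live` latches whether p was ever
# asserted. A lasso violating Fp shows up as `looped and not live`.

def l2s_check_Fp(states, p, save_at):
    """Replay a finite trace; states[i] is the system state at step i."""
    shadow, saved, live = None, False, False
    for i, s in enumerate(states):
        if i == save_at and not saved:   # oracle asserts `save` once
            shadow, saved = s, True
        live = live or p(s)              # remember if p was ever asserted
        looped = saved and i > save_at and s == shadow
        if looped and not live:
            return False                 # counterexample lasso found
    return True

# lasso 0 -> 1 -> 2 -> 1 -> ... with the loop 1 -> 2 -> 1
trace = [0, 1, 2, 1]
```

With p ≡ (state = 3), no state of the lasso satisfies p and the check fails; with p ≡ (state = 2), live is latched inside the loop and the obligation holds.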
          <p>A. L2S for stabilization properties</p>
          <p>
            In [
            <xref ref-type="bibr" rid="ref40">24</xref>
            ], the authors show how to do the L2S conversion for
GFp and FGp, which are GF-atoms. We demonstrate how to
extend this to any Boolean combination of GF-atoms using an
example, omitting a formal proof of correctness.
          </p>
<p>Consider a simple stabilization property of the form
FGa ⇒ FGb + FGc. An L2S-converted circuit for this
is shown in Figure 6. (For simplicity, we do not show any
fairness constraints in the example.) Note that, signal live
in Figure 5 monitors if signal p has ever been asserted from
the very initial time frame. But for verifying GFp, we need to
monitor whether signal p has been asserted between the time
when saved is set and the time when looped becomes true.</p>
          <p>Using this fact, the duality between FG and GF operators,
and the Boolean structure Xa ⇒ Xb + Xc of the given formula,
we can derive the circuit of Figure 6. Logic that captures the
Boolean structure of the formula is marked with a dotted triangle in
Figure 6. Hence, for any arbitrary stabilization property, we
need to create monitors for individual GF-atoms and a crown
of combinational logic on top of these monitors that captures
the Boolean structure of the property. We can formulate the
following theorem.</p>
<p>Theorem 1: For any stabilization property, the given
procedure finds a counterexample if one exists.</p>
          <p>(Proof Sketch) Any stabilization property can be
transformed into another stabilization property with GF operators
only. Let f be the Boolean structure in the negation of the
given stabilization property. The procedure described above
will create a monitor that will search for a lasso-loop where
f is violated inside the loop. Since the procedure implicitly
enumerates all possible cycles in the state space, it will detect
a violating cycle if one such exists.</p>
          <p>VI. EXPERIMENTAL RESULTS</p>
          <p>
            We implemented our L2S scheme for general stabilization
properties in ABC and experimented with several designs of
communication fabrics from industry. Our objective was to
verify all stabilization properties defined for every structural
primitive of the XMAS framework [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ]. The properties,
though local to each component, are verified in the context
of the whole design in order to avoid explicit environmental
modeling. BLIF models of the communication fabrics were
generated by the XMAS compiler [
            <xref ref-type="bibr" rid="ref11 ref27">11</xref>
            ] from high-level C++
models. The L2S monitor logic was then created by ABC
on these BLIF models. The XMAS compiler also generates
SMV models from C++ models so that the LTL encoding of
the stabilization properties can be verified directly on the SMV
models using the NuSMV model checker.
          </p>
          <p>
            We found that the ABC based L2S implementation has
much better scalability than NuSMV. NuSMV can solve only
toy designs; on the large designs of interest, it fails
to produce a result. On the other hand, our tool works well
even on large designs. For most cases, it produces a result
almost immediately. For a few cases, initial trials could not
produce a proof, but with the latest version of ABC using
simplification, abstraction, speculative reduction, and property
directed reachability (PDR) analysis [
            <xref ref-type="bibr" rid="ref24 ref8">8</xref>
            ], the proofs were
completed. This observation supports the premise that the use
of highly developed safety techniques can pay off for liveness
verification.
          </p>
          <p>Experimental results are shown below. Among all the local
properties that the XMAS compiler generated, we provide
results for the most challenging one. Call this property φ; it
is defined for a FIFO buffer, and has the following LTL form:
          <p>
φ := FG(¬a) ⇒ FG(¬b) ∨ FG(c)
where a, b, and c are appropriate design signals (i.e. interface
signals of a FIFO buffer). Tables 1, 2, and 3 compare the
performance of ABC with NuSMV on small examples. These
examples are instances of communication fabrics or
submodules thereof, and are explained in full detail in [
            <xref ref-type="bibr" rid="ref10 ref26 ref41">10</xref>
            ].
simple credit and simple vc (Table 1 and 2, respectively)
are designs corresponding to Figure 4 and 5 of [
            <xref ref-type="bibr" rid="ref10 ref26 ref41">10</xref>
            ], and
simple ms (Table 3) is a much simpler version of the design
shown in Figure 6 of [
            <xref ref-type="bibr" rid="ref10 ref26 ref41">10</xref>
]. Note from the tables how the
performance of NuSMV degrades even for small designs. For large
designs, NuSMV could not finish for any single instance of
φ.
          </p>
<p>Since φ is defined for a FIFO buffer and the XMAS
compiler created one instance of φ for each FIFO buffer, the
number of φ instances is the same as the number of FIFO
buffers. For example, the designs corresponding to Tables 1, 2,
and 3 above have 3, 6, and 4 FIFO buffers, respectively.</p>
<p>Fig. 5. Liveness-to-safety transformation for Fp</p>
<p>Fig. 6. L2S for stabilization property FGa ⇒ FGb + FGc</p>
<p>TABLE I
Prop #   ABC (sec)   NuSMV (sec)
0        0.25        0.115
1        0.05        0.14
2        0.02        0.09</p>
<p>TABLE II
SIMPLE CREDIT
Prop #   ABC (sec)   NuSMV (sec)
0        0.09        33.23
1        0.07        31.8
2        0.06        39.57
3        0.03        16.46
4        0.5         41.37
5        0.03        16.89</p>
          <p>
            We also experimented on two large communication fabrics
of practical interest [
            <xref ref-type="bibr" rid="ref10 ref26 ref41">10</xref>
            ], [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ]. One has 20 buffers and the
other has 24 buffers. 19 out of 20 of the first design and 23
out of 24 from the second design were proved by ABC by a
light-weight interpolation engine within a worst case time of
5.83 seconds (most were proved in less than a second).
Lightweight interpolation could not prove one instance from each
design. These were proved using advanced techniques from
ABC’s arsenal of safety verification algorithms. For example,
ABC took a total of 217.2 seconds to prove one of these harder
properties. In this time span, ABC first did some preliminary
simplification, then it tried interpolation, BMC, simulation
and PDR in parallel for a time budget of 20 seconds. But
this attempt failed and it moved on to further simplification
by reducing the design using localization abstraction and
speculation. It ran interpolation, BMC, simulation, BDD-based
reachability and PDR engines in parallel both after abstraction
and speculation, using an elevated time budget of 100 seconds
and 49 seconds respectively. The iteration after abstraction
could not prove the property, but the iteration after speculation
managed to prove it with the PDR engine, which produced the
final proof in 7 seconds.
          </p>
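<p>The staged flow described above, racing several engines under a small budget and then racing again after simplification with a larger budget, can be sketched as a budgeted portfolio loop. The code below is our own illustration of that scheme, not ABC's API; the engine callables and the verdict convention are assumptions.</p>

```python
import concurrent.futures as cf

def run_portfolio(engines, budget_s):
    """Run (name, engine_fn) pairs concurrently; return the first
    definite verdict reached within budget_s seconds.  Each engine_fn
    returns True (proved), False (counterexample found) or None (gave
    up); None results are ignored, so the stage ends either with a
    verdict or with an exhausted time budget."""
    with cf.ThreadPoolExecutor(max_workers=len(engines)) as pool:
        futures = {pool.submit(fn): name for name, fn in engines}
        try:
            for fut in cf.as_completed(futures, timeout=budget_s):
                verdict = fut.result()
                if verdict is not None:
                    return futures[fut], verdict
        except cf.TimeoutError:
            pass  # budget exhausted: caller moves on to the next stage
    return None, None
```

<p>A driver can chain such stages (for example 20 seconds before abstraction and 100 seconds after), moving on whenever a stage comes back with (None, None).</p>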
          <p>VII. CONCLUSION &amp; FUTURE WORK</p>
          <p>We have developed a tool, VeriABC, which allows us to
access real industrial designs written in SystemVerilog or
VHDL and to process them into the AIGER format. The result
can be used for synthesis and verification using a tool like
ABC. We described how the RTL processing is done using
the commercial front-end, Verific. SVA assertions are also
processed by Verific, and VeriABC creates a separate file of
equivalent LTL formulas. We showed an application of this
to property checking, where ABC was enhanced to convert a
subset of LTL into a circuit structure, thus effectively allowing
liveness checking in ABC.</p>
          <p>The use of a stable, supported, and complete language
processing tool like Verific gives academics access to real
industrial designs without the daunting task of building an
equivalent tool themselves. Liveness property
checking is of growing interest in industry, and our enhanced
ABC, with a front end that automatically converts to a circuit
structure for liveness checking, can use the advanced safety
property methods of ABC.</p>
          <p>In the future, the development of VeriABC will allow us
to extract higher level constructs from SystemVerilog and
VHDL by accessing Verific’s parse trees. These constructs
can be passed on using an extended AIGER format to an
enhanced ABC, which will use this information in synthesis
and verification.</p>
          <p>VIII. ACKNOWLEDGMENT</p>
          <p>This work has been supported in part by SRC contract
2057.001 and our industrial sponsors: Abound Logic, Actel,
Altera, Atrenta, IBM, Intel, Jasper, Magma, Oasys, Real Intent,
Synopsys, Tabula, and Verific.</p>
          <p>On Incremental Satisfiability and Bounded Model Checking</p>
          <p>Siert Wieringa
Aalto University, Finland</p>
          <p>siert.wieringa@aalto.fi</p>
          <p>Abstract</p>
          <p>Bounded Model Checking (BMC) is a symbolic
model checking technique in which the existence
of a counterexample of a bounded length is
represented by the satisfiability of a propositional
logic formula. Although solving a single instance
of the satisfiability problem (SAT) is sufficient to
decide on the existence of a counterexample for
any arbitrary bound, typically one starts from
bound zero and solves the sequence of formulas
for all consecutive bounds until a satisfiable
formula is found. This is especially efficient in
the presence of incremental SAT-solvers, which
solve sequences of incrementally encoded formulas.
In this article we analyze empirical results
that demonstrate the difference in run time
behavior between incremental and non-incremental
SAT-solvers. We show a relation between the
observed run time behavior and the way in which
the activity of variables inside the solver propagates
across bounds. This observation has not
been previously presented and is particularly
useful for designing solving strategies for
parallelized model checkers.</p>
          <p>Introduction</p>
          <p>
            Model checking is a formal verification technique
revolving around proving temporal properties of
systems modelled as finite state machines. A
property holds for the model if it holds in all
possible execution paths. If the property does not
hold this can be witnessed by a counterexample,
which is a valid execution path for the model in
which the property does not hold. Because the
model has a finite number of states any infinite
execution of the system includes a loop, and can
thus be represented by a finite sequence of
execution steps. Bounded Model Checking (BMC)
[
            <xref ref-type="bibr" rid="ref1 ref17">1</xref>
            ] is a symbolic model checking technique in
which the existence of a counterexample
consisting of a bounded number of execution steps
is represented by the satisfiability of a
propositional logic formula. It thus allows the use of
decision procedures for the propositional
satisfiability problem (SAT) for model checking.
Despite the theoretical hardness of SAT [
            <xref ref-type="bibr" rid="ref23 ref7">7</xref>
            ] such
decision procedures, called SAT-solvers [
            <xref ref-type="bibr" rid="ref14 ref25 ref30 ref33 ref9">9, 14, 17</xref>
            ],
have become extremely efficient. BMC is
popular as a technique for refuting properties, and
although BMC based techniques can be used for
proving properties we do not consider such
techniques here.
          </p>
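<p>The bounded-counterexample encoding and the bound-by-bound search discussed in this article can be made concrete with a toy sketch. This is our own illustration; a real BMC engine would unroll the transition relation into one propositional formula per bound and hand it to a SAT-solver, instead of enumerating state paths directly.</p>

```python
from itertools import product

def bmc_counterexample(init, trans, bad, nbits, k):
    """Bound-k BMC query: is there a path s_0 .. s_k with init(s_0),
    trans(s_i, s_{i+1}) for every step, and bad(s_k)?  The satisfying
    assignment of the corresponding propositional formula is exactly
    such a path; this toy version enumerates paths over nbits-wide
    states instead of calling a SAT-solver."""
    states = list(product([0, 1], repeat=nbits))
    for path in product(states, repeat=k + 1):
        if init(path[0]) and bad(path[-1]) and \
           all(trans(path[i], path[i + 1]) for i in range(k)):
            return path  # the satisfying assignment is the counterexample
    return None

def first_counterexample(init, trans, bad, nbits, max_bound):
    """The standard sequential search strategy: solve bounds 0, 1, 2, ...
    until a satisfiable instance (a counterexample) is found."""
    for k in range(max_bound + 1):
        path = bmc_counterexample(init, trans, bad, nbits, k)
        if path is not None:
            return k, path
    return None, None
```

<p>On a 2-bit counter that increments modulo 4 from 0, with "bad" meaning the counter equals 3, the sequential strategy reports the minimal-length counterexample at bound k = 3.</p>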
          <p>Financially supported by the Academy of Finland project 139402.</p>
          <p>A typical BMC encoding will have semantics
such that if there exists a counterexample of
length k then there also exists a counterexample
of any length greater than k. Thus in principle
solving a single propositional logic formula is
sufficient to decide on the existence of a
counterexample for any arbitrary finite bound.
However, one typically starts to solve the formula
corresponding to bound zero and then solves
sequentially each consecutive bound until a
counterexample is found. We will refer to this as the
standard sequential search strategy. This strategy
has the nice property that it always finds a
counterexample of minimal length. As with every
bound the representing formula grows larger,
it also avoids solving unnecessarily large formulas.
Importantly, the performance of this strategy
benefits greatly from the availability of incremental
SAT-solvers. Incremental SAT-solvers
can solve sequences of formulas that share large
parts in common efficiently in a single solver
process, allowing reuse of information between
formulas.</p>
          <p>2 Motivation</p>
          <p>
            Automated SAT based planning is a problem
closely related to BMC. It deals with the same
sequences of parameterized formulas, except
that the satisfiability of a formula now corresponds
to the existence of a plan of a bounded
length. In [
            <xref ref-type="bibr" rid="ref15 ref31">15</xref>
            ] evaluation strategies for planning
were suggested that are more opportunistic
than the standard sequential search strategy.
They suggest to spend some amount of the
total solving effort at attempting to solve formulas
for bounds ahead of the currently smallest
unsolved one. It was inspired by the empirical
observation that if a plan exists then amongst
the smallest satisfiable formulas in the sequence
there are typically formulas that are much easier
to solve than the largest unsatisfiable ones.
A more opportunistic search strategy may reduce
the total time required to find a satisfiable
formula by skipping over hard instances. Such
strategies are natural for environments in which
multiple computing nodes are available in parallel,
where one may define some nodes to use a
more opportunistic strategy than others.
          </p>
          <p>
            The observation on the empirical hardness of
the smallest satisfiable formulas compared to the
largest unsatisfiable ones can also be made for
BMC. We attempted to implement opportunistic
strategies in our parallelized BMC framework
Tarmo [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ]. This however turned out to be less
efficient than we would have expected, with
performance degrading for many benchmarks. In
this article we evaluate the performance of the
incremental solver and compare it against that
of solving each bound separately. The purpose
of this study is not to illustrate that incremental
solvers are more effective for BMC than
non-incremental ones, as that is well known, but to
understand when and how opportunistic strategies
can be applied. This is done by comparing
against non-incremental solver run times because
solvers applying opportunistic strategies
benefit less from the incremental interface of the
solver, as the problem is no longer introduced
one bound at a time.
          </p>
          <p>3 Preliminaries</p>
          <p>
            The majority of modern SAT-solvers are based
on the Davis Putnam Logemann Loveland
(DPLL) procedure [
            <xref ref-type="bibr" rid="ref24 ref8">8</xref>
            ]. The DPLL-procedure is
a backtracking search procedure for SAT that
builds a partial assignment by iteratively deciding
on a branching variable to be assigned a
value in the partial assignment. When the partition
of the search space defined by the partial
assignment is without solutions the algorithm
backtracks. In addition to this, modern SAT-solvers
typically employ conflict clause learning
[
            <xref ref-type="bibr" rid="ref33">17</xref>
            ]. Such solvers record a new conflict clause
whenever they are forced to backtrack. They
then backtrack non-chronologically to a decision
point at which the conflict clause was still satisfied.
          </p>
          <p>
            The performance of the DPLL-procedure depends
heavily on its branching variable decisions.
A commonly used decision heuristic for clause
learning SAT-solvers is the Variable State
Independent Decaying Sum (VSIDS) heuristic first
presented in the solver Chaff [
            <xref ref-type="bibr" rid="ref14 ref30">14</xref>
            ]. The idea of
the heuristic is to favor variables that are included
in recently derived conflict clauses. For
each variable an initially zero value called the
activity is maintained. Whenever a conflict clause
is learnt the activity of all variables that occur in
the clause is increased. Periodically the activity
of all variables is divided by a constant.
          </p>
          <p>
            All results presented in this article were obtained
using the SAT-solver MiniSAT 2.2.0 [
            <xref ref-type="bibr" rid="ref25 ref9">9</xref>
            ]1. The solver core was not modified but a
number of small modifications2 were made in
auxiliary routines such as the file parser in order
to read incremental SAT sequences from disk.
          </p>
          <p>When employing BMC typically the SAT-solver
will not read the formula sequence from disk but
it will rather be integrated into a BMC engine
that is generating formulas for new bounds on
the fly. We use sequences stored on disk as this
is convenient for testing the performance of the
SAT-solver independently. As our sequences are
streamable one can also use them as an interface
between a BMC engine and a SAT-solver without
the need for integrating them into one application.</p>
          <p>
            The input sequences used were the same
as the benchmarks used for experimental results
presented in [
            <xref ref-type="bibr" rid="ref34">18</xref>
            ]. These sequences were generated
with the current state-of-the-art encoding
for model checking of linear time temporal logic
properties with past (PLTL) [
            <xref ref-type="bibr" rid="ref18 ref2">2</xref>
            ] as implemented
in the model checker NuSMV 2.4.3 [
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ].
          </p>
          <p>1 Available from http://www.minisat.se</p>
          <p>2 Available from http://users.ics.tkk.fi/swiering</p>
          <p>4 Run time</p>
          <p>In our experiments we have studied the behavior
of SAT-solvers on problem sequences from
BMC regarding both the run time and variable
activity. A selection of the results is presented in
the figures in this article. For each benchmark
there are two subfigures, a run time graph
labeled (a) and a variable activity graph labeled
(b). In this section we will focus only on the run
time graphs, which have bounds on the x-axis
and time on the y-axis. For each bound a vertical
bar displays the time it took to solve the
formula corresponding to that single bound using
the SAT-solver in non-incremental fashion.
If the solver found unsatisfiable a solid red bar
is drawn, if the solver found satisfiable only the
outline of the bar is drawn in green.</p>
          <p>The incremental solver solves using the standard
sequential strategy and whenever it completes
a bound it reports the run time up to
that point. The points in the graphs marked
with crosses (x) and connected by the thick line
represent these run times.</p>
          <p>The thin dotted line connecting the plusses
(+) is representing the cumulative run time of
the non-incremental solver, i.e. for each bound
k the value displayed is the sum of all run times
of the non-incremental solver tests from bound
0 to k. This demonstrates the time required
for the standard sequential strategy using a
non-incremental solver. This line is intentionally not
influencing the range of the y-axis, as it typically
grows so large that it would make the other
results hard to see.</p>
          <p>(Run time and variable activity graphs; x-axis: bound, y-axes: time and hyperactive variables.)</p>
          <p>a way that is easy to update when the bound on
the length of the execution paths is extended.
The solver can be thought of as having tuned
itself towards verifying the property holds in the
exact same way over and over.</p>
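<p>The VSIDS bookkeeping described in the preliminaries fits in a few lines. The sketch below is a minimal illustration of the idea only; the bump amount, decay factor and decay period are illustrative choices of ours, not MiniSAT's exact constants.</p>

```python
class VSIDS:
    """Variable State Independent Decaying Sum, as described above:
    bump the activity of every variable in each learnt conflict clause
    and periodically divide all activities by a constant, so branching
    favors variables from recently derived conflict clauses."""

    def __init__(self, nvars, decay=2.0, period=256):
        self.activity = [0.0] * (nvars + 1)  # 1-based variable indices
        self.decay = decay
        self.period = period
        self.conflicts = 0

    def on_learnt_clause(self, clause):
        for lit in clause:                   # literals as +/- ints
            self.activity[abs(lit)] += 1.0
        self.conflicts += 1
        if self.conflicts % self.period == 0:
            self.activity = [a / self.decay for a in self.activity]

    def pick_branching_variable(self, unassigned):
        return max(unassigned, key=lambda v: self.activity[v])
```

<p>A variable that occurs in two recent learnt clauses thus outranks one that occurs in a single clause, until decay erodes the difference.</p>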
          <p>From Fig. 1(a) it can be seen that the shortest
lfttsssscbbbSeuhhapmmocAeonrteaui5uTraatgiot7nsenhlltrb-slltdhdeuseoeesarosssnnfreb5attlers1av3hlttxvote0hei.hesiearsr4emaefsdTmaoto.aacepidrhlppfnsblam2reeplodlroaneaeuaogfrmter1ubrleflgfat0tosbynlohuetF5rorcetntmsiuaahuibgfhimnnnrosuen.eoetdrolnsewhi2mancarsrc(os1eut-huawo0liitsnman)silfht6esatcrte,vtasiaarthmvheretibteebrkamhcmeullteouegybnceehirishoefsenrlraoonasesttvfnrsrs-hatfoimeigop.ooluenbdserforna.cems.unlrnod[tltsAe1delechhIevm65airhtneane]moe,tlrgtiemifnshttaaifwtitaa.asrotnoaelnnykyooer-f.l
tttttssoqbFiiioheopuoioamalgeApeulnvtr.enhoneu,cnsc3rrhetdnoab.etr(tsnucaeatthaodohetn)hFteeusfaiiaroaitgssnvstfetrpywotiidopatecriabwhsrrbmahyseearieettxnnulettrirhaacalntoaeaebyhmtrt.tsrlueylmeoeyopBrgwoafaaulityiekiereltsshdkshifoaiint.focssenftooglrgeirlfwctseoacohtitninerrhuhethmlggehreyeartlueihpspsttnlsdtfoahru1aoutiolirh5nlvlcsbteh0reodfclopr,ookreaturrmrpiitrineoeghdnetssndhehpedpsfenooloteeorathnqrsnofSreceduvgAdrartiieesiunTnnhnpisennygget---opportunistic strategy could possibly be bene
cial. An incremental solver can be started from</p>
          <p>any arbitrary bound, and it is possible to proceed
by increasing the bound by more than one
every time a formula is solved. Using bound
increments larger than one is one of the simple
strategies we have tried in our experiments.
This strategy should still be considered opportunistic
because of the "missing information" it
causes for the solver, leading it further away from
the efficiency of incremental solving, and further
towards non-incremental behavior. Given the
small margin of error available for opportunistic
approaches for satisfiable benchmarks, and
no chance of any performance improvement for
many unsatisfiable benchmarks, we need to be
careful when applying these approaches. They
are however amongst the most natural ways of
diversifying search strategies amongst nodes in
an environment with parallel computation
resources.</p>
          <p>Note that all results presented in this article
demonstrate that the use of the incremental
solver is crucial when performing the standard
sequential strategy. Fig. 3(a) presents run time
behavior for the benchmark eijk.S1238.S which
is the encoding of model checking a property that
holds on all execution paths of the model. This
implies that the formulas are unsatisfiable for all
bounds. Here, the crucial role that incremental
SAT solving often plays in solving BMC is
even clearer. Whereas a non-incremental solver
would take about 100 seconds to find that bound
150 alone is unsatisfiable, the incremental solver
finds this result for all bounds from 0 to 150
sequentially in half that time. This is typical
behavior for many benchmarks corresponding to
model checking a property that holds. It seems
that in these cases the solver learns that the
property holds for all short execution paths in</p>
          <p>
            There are two common architectures for parallel
SAT-solvers [
            <xref ref-type="bibr" rid="ref11 ref27">11</xref>
            ]. The first is the classic divide-and-conquer
approach in which the formula is
split into multiple disjoint subformulas each of
which is then solved on a different computing
node [
            <xref ref-type="bibr" rid="ref20 ref35 ref4">4, 19</xref>
            ]. The second approach is the
so called portfolio approach [
            <xref ref-type="bibr" rid="ref10 ref26 ref41">10</xref>
            ]. The basic idea
is that every computing node is running a SAT-solver
that is attempting to solve the same formula.
As modern SAT-solvers make some decisions
randomly their run time varies greatly between
runs. This makes the portfolio approach
surprisingly efficient as it is able to decide the
satisfiability of the formula as soon as the fastest
solver finishes. Further diversification may be
achieved by using different parameters on different
computation nodes. Obviously opportunistic
search strategies provide means for diversification
when we are considering solving incrementally
encoded SAT formulas in a parallel environment.
          </p>
          <p>The current implementation of our parallelized
BMC framework Tarmo can be seen as
a parallelized incremental SAT-solver using the
portfolio approach. Each computing node is running
an incremental SAT-solver in the conventional
sequential fashion. The novelty of Tarmo
is that it allows sharing of conflict clauses between
SAT-solvers even if they are working on
different bounds. The solvers operate otherwise
independently, i.e. if one solver solves a formula
this does not stop the other solvers from
attempting to solve that same formula. This choice
was made after observing that interrupting a
solver to make it "catch-up" with another breaks
its ability to benefit from incremental SAT to the
full extent. As we made this observation in an
environment where clause sharing takes place it
seems that the interrupted solver is missing more
information than just conflict clauses. This was
one of the reasons to look at the way the activity
of variables propagates across bounds on the
incremental SAT-solver runs.</p>
          <p>6 Variable activity</p>
          <p>To obtain data on the activity of variables the
SAT-solver was modified to print the activity of
all variables after each bound it completed. For
each bound we are interested in which variables
are the most active, and especially in whether
this activity remains high across several bounds.
We consider a variable hyperactive if its activity
is within the highest 2% of variables with
non-zero activity.</p>
          <p>The graphs labeled (b) in this article visualize
the hyperactive variables. All variables that
are hyperactive for at least one bound are represented
by an integer value on the y-axis of the
graph. The variables are sorted on the y-axis
by their index such that if we define y(v) as the
integer on the y-axis corresponding to the variable
with index v then for any v' > v we have
y(v') > y(v).</p>
          <p>Just like in the run time graphs the values on
the x-axis of the graph represent bounds. If a
variable was hyperactive starting from bound k
up to but not including bound k' > k then a
horizontal line was drawn in the graph from bound
k to k' at the y position corresponding to that
variable. In other words for all variables v and all
bound intervals [k, k') on which v is hyperactive
a line was drawn from (k, y(v)) to (k', y(v)).</p>
          <p>One may observe that generally variables with
larger indices become active later. This is
because in the solver each newly introduced variable
is given a larger index than all existing variables,
and for each bound a set of new variables is
created. For each bound a subset of the new variables
becomes hyperactive quickly, as the solver
runs into conflicts on the newly added clauses.</p>
          <p>(Variable activity graph (b); x-axis: bound, y-axis: hyperactive variables.)</p>
          <p>The hyperactive variables in the satisfiable
benchmarks irst.dme6 and bc57sensors.p2neg
are displayed in Fig. 1(b) and Fig. 2(b). Note
that although for each bound some of the new
variables become hyperactive all these variables
tend to remain hyperactive throughout the
whole process. This means that whenever a
bound is added the solver still runs into new conflicts
regarding variables that represent the state
at smaller timepoints. We say that the activity
of variables is bound global.</p>
          <p>For the benchmark eijk.S1238.S the activity
graph looks very different. For each bound the
solver creates conflict clauses including the new
variables, thus creating a new set of hyperactive
variables, but there are only very few variables
for which hyperactivity is maintained. This is in
line with the observations on the run time behavior
of this benchmark, which also indicate that hardly
any work has to be performed to find
unsatisfiability. We say that the activity of variables
is bound local.</p>
          <p>We have generated graphs like the ones presented
in this paper for a large set of benchmarks3.
We observe that on benchmarks with
a bound global variable activity the run time of
the non-incremental SAT-solver for the largest
bound solved is smaller than the time spent for
the incremental solver to get to the same bound
and solve it. For benchmarks with a bound local
variable activity this is never the case and
thus an opportunistic heuristic will not improve
performance.</p>
          <p>Although we expect all hard satisfiable benchmarks
to have a bound global variable activity it
is not the case that all unsatisfiable benchmarks
have a bound local variable activity. The benchmark
abp4.ptimoneg represented in Fig. 6 is an
example of an unsatisfiable benchmark with a
bound global variable activity. Apparently the
correctness of the property is not implied within
a short number of execution steps here, and the
incremental solver needs to evaluate large portions
of the search space for every bound. Note
also that for this benchmark an opportunistic ap-</p>
          <p>3 Available from http://users.ics.tkk.fi/swiering</p>
          <p>proach may help to find unsatisfiable formulas at
larger bounds faster.</p>
          <p>Conclusions</p>
          <p>In this article we have shown a relation between
the run time of the standard sequential
strategy for bounded model checking and the
activity of decision variables in solvers employing this
strategy. We can use this observation in a SAT-solver
to predict during the search whether a
more opportunistic strategy could be beneficial
for the search. This is especially useful for
parallel solvers in which different threads may be
executing different strategies.</p>
          <p>
            It is also easy to envision how these techniques
could be useful for model checkers that use a
combination of truly different model checking
techniques such as PdTrav [
            <xref ref-type="bibr" rid="ref21 ref5">5</xref>
            ]. One could
easily engineer a system which would do BMC for
some amount of time, after which the variable
activity could play a role in the decision on how
to continue. If the variable activity appears to
be bound local then the property is likely to hold
for the model and thus we may want to start
doing a complete model checking technique based
on for example k-induction [
            <xref ref-type="bibr" rid="ref16 ref32">16</xref>
            ], Craig
interpolation [
            <xref ref-type="bibr" rid="ref13 ref29">13</xref>
            ] or BDDs [
            <xref ref-type="bibr" rid="ref12 ref28">12</xref>
            ] to prove this.
          </p>
          <p>Another observation we made is that for
deciding the satisfiability of the last formula in an
incrementally encoded sequence of formulas it
can sometimes be faster to solve all formulas in
the sequence. This raises the question whether
other applications of SAT solving that currently
rely on solving a single SAT formula could
benefit from the use of incremental problem
encodings. Such encodings allow enforcing a search
direction on the SAT-solver that is natural to
the application and therefore possibly beneficial
to the solver. Tolerance to bad choices for such
an incremental encoding could be achieved by
doing this in a parallel environment with some
opportunistic nodes.</p>
          <p>References</p>
          <p>[
            <xref ref-type="bibr" rid="ref1 ref17">1</xref>
            ] Armin Biere, Alessandro Cimatti, Edmund M. Clarke, and Yunshan Zhu. Symbolic model checking without BDDs. In Rance Cleaveland, editor, TACAS, volume 1579 of Lecture Notes in Computer Science, pages 193-207. Springer, 1999.</p>
          <p>[
            <xref ref-type="bibr" rid="ref18 ref2">2</xref>
            ] Armin Biere, Keijo Heljanko, Tommi A. Junttila, Timo Latvala, and Viktor Schuppan. Linear encodings of bounded LTL model checking. Logical Methods in Computer Science, 2(5), 2006.</p>
          <p>[
            <xref ref-type="bibr" rid="ref19 ref3">3</xref>
            ] Armin Biere and Toni Jussila. Hardware model checking competition 2007 (HWMCC07). Organized as a satellite event to CAV 2007, Berlin, Germany, July 3-7, 2007.</p>
          <p>[
            <xref ref-type="bibr" rid="ref20 ref4">4</xref>
            ] Max Böhm and Ewald Speckenmeyer. A fast parallel SAT-solver - efficient workload balancing. Ann. Math. Artif. Intell., 17(3-4):381-400, 1996.</p>
          <p>[
            <xref ref-type="bibr" rid="ref21 ref5">5</xref>
            ] Gianpiero Cabodi, Paolo Camurati, Luz Garcia, Marco Murciano, Sergio Nocco, and Stefano Quer. Trading-off SAT search and variable quantifications for effective unbounded model checking. In Alessandro Cimatti and Robert B. Jones, editors, FMCAD, pages 1-8. IEEE, 2008.</p>
          <p>[
            <xref ref-type="bibr" rid="ref22 ref6">6</xref>
            ] Alessandro Cimatti, Edmund M. Clarke, Enrico Giunchiglia, Fausto Giunchiglia, Marco Pistore, Marco Roveri, Roberto Sebastiani, and Armando Tacchella. NuSMV 2: An opensource tool for symbolic model checking. In Ed Brinksma and Kim Guldstrand Larsen, editors, CAV, volume 2404 of Lecture Notes in Computer Science, pages 359-364. Springer, 2002.</p>
          <p>[
            <xref ref-type="bibr" rid="ref23 ref7">7</xref>
            ] Stephen A. Cook. The complexity of theorem-proving procedures. In STOC, pages 151-158. ACM, 1971.</p>
          <p>[
            <xref ref-type="bibr" rid="ref24 ref8">8</xref>
            ] Martin Davis, George Logemann, and Donald Loveland. A machine program for theorem-proving. Commun. ACM, 5:394-397, July 1962.</p>
          <p>[
            <xref ref-type="bibr" rid="ref25 ref9">9</xref>
            ] Niklas Eén and Niklas Sörensson. An extensible SAT-solver. In Enrico Giunchiglia and Armando Tacchella, editors, SAT, volume 2919 of Lecture Notes in Computer Science, pages 502-518. Springer, 2003.</p>
          <p>[
            <xref ref-type="bibr" rid="ref11 ref27">11</xref>
            ] Antti Eero Johannes Hyvärinen, Tommi A. Junttila, and Ilkka Niemelä. Partitioning search spaces of a randomized search. In Roberto Serra and Rita Cucchiara, editors, AI*IA, volume 5883 of Lecture Notes in Computer Science, pages 243-252. Springer, 2009.</p>
          <p>[
            <xref ref-type="bibr" rid="ref12 ref28">12</xref>
            ] Kenneth L. McMillan. Symbolic model checking. Kluwer, 1993.</p>
          <p>[
            <xref ref-type="bibr" rid="ref13 ref29">13</xref>
            ] Kenneth L. McMillan. Applications of Craig interpolants in model checking. In Nicolas Halbwachs and Lenore D. Zuck, editors, TACAS, volume 3440 of Lecture Notes in Computer Science, pages 1-12. Springer, 2005.</p>
          <p>[
            <xref ref-type="bibr" rid="ref14 ref30">14</xref>
            ] Matthew W. Moskewicz, Conor F. Madigan, Ying Zhao, Lintao Zhang, and Sharad Malik. Chaff: Engineering an efficient SAT solver. In DAC, pages 530-535. ACM, 2001.</p>
          <p>[
            <xref ref-type="bibr" rid="ref15 ref31">15</xref>
            ] Jussi Rintanen, Keijo Heljanko, and Ilkka Niemelä. Planning as satisfiability: parallel plans and algorithms for plan search. Artif. Intell., 170(12-13):1031-1080, 2006.</p>
          <p>[
            <xref ref-type="bibr" rid="ref16 ref32">16</xref>
            ] Mary Sheeran, Satnam Singh, and Gunnar Stålmarck. Checking safety properties using induction and a SAT-solver. In Warren A. Hunt Jr. and Steven D. Johnson, editors, FMCAD, volume 1954 of Lecture Notes in Computer Science, pages 108-125. Springer, 2000.</p>
          <p>[
            <xref ref-type="bibr" rid="ref33">17</xref>
            ] João P. Marques Silva and Karem A. Sakallah. GRASP: A search algorithm for propositional satisfiability. IEEE Trans. Computers, 48(5):506-521, 1999.</p>
          <p>[
            <xref ref-type="bibr" rid="ref34">18</xref>
            ] Siert Wieringa, Matti Niemenmaa, and Keijo Heljanko. Tarmo: A framework for parallelized bounded model checking. In Lubos Brim and Jaco van de Pol, editors, PDMC, volume 14 of EPTCS, pages 62-76, 2009.</p>
          <p>[
            <xref ref-type="bibr" rid="ref35">19</xref>
            ] Hantao Zhang, Maria Paola Bonacina, and Jieh Hsiang. PSATO: A distributed propositional prover and its application to quasigroup problems. J. Symb. Comput., 21(4):543-560, 1996.</p>
          <p>Keyword Index</p>
          <p>BMC
combinational simplification
Data structure
Domain Specific Embedded Language (DSEL)
Dynamic Spectrum Access
empirical results
Engine Independence
front-end engineering
liveness to safety
model checking
Model checking
On-the-fly
Policies
Real-time
Reasoning
SAT
Satisfiability Modulo Theories (SMT)
SMT solving
solving strategies
stabilization
sweeping
Timed automata
30
46
22
38
38
30
13
13
3
13
3
46
22</p>
          <p>3
46
38
30
13
3</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Bradley</surname>
          </string-name>
          , “
          <article-title>SAT-based model checking without unrolling,”</article-title>
          <source>in Proc. Int. Conf. on Verification, model checking, and abstract interpretation</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>87</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Mony</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Baumgartner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Paruthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kanzelman</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuehlmann</surname>
          </string-name>
          , “
          <article-title>Scalable automated verification via expert-system guided transformations</article-title>
          ,” in Formal Methods in Computer-Aided Design,
          <year>2004</year>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>173</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          , “
          <article-title>ABC: An academic industrial-strength verification tool</article-title>
          ,” in Computer Aided Verification,
          <year>2010</year>
          , pp.
          <fpage>24</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cabodi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nocco</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Quer</surname>
          </string-name>
          , “
          <article-title>Benchmarking a model checker for algorithmic improvements and tuning for performance,”</article-title>
          <source>in Proc. Hardware Verification Workshop</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bjesse</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Boralv</surname>
          </string-name>
          , “
          <article-title>DAG-aware circuit compression for formal verification</article-title>
          ,”
          <source>in Proc. Int. Conf. on Computer-aided design</source>
          ,
          <year>2004</year>
          , pp.
          <fpage>42</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          , “
          <article-title>DAG-aware AIG rewriting a fresh look at combinational logic synthesis</article-title>
          ,”
          <source>in Proc. Design Automation Conference</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>532</fpage>
          -
          <lpage>535</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuehlmann</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Krohm</surname>
          </string-name>
          , “
          <article-title>Equivalence checking using cuts and heaps</article-title>
          ,”
          <source>in Proc. Design Automation Conference</source>
          ,
          <year>1997</year>
          , pp.
          <fpage>263</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuehlmann</surname>
          </string-name>
          , “
          <article-title>Dynamic transition relation simplification for bounded property checking,”</article-title>
          <source>in Proc. Int. Conf. on Computer-aided design</source>
          ,
          <year>2004</year>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Ee</surname>
          </string-name>
          ´n, “Cut sweeping,
          <source>” Cadence Design Systems, Tech. Rep.</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          , “
          <article-title>FRAIGs: A unifying representation for logic synthesis and verification</article-title>
          ,” EECS Dept., UC Berkeley, ERL, Tech. Rep.,
          <year>Mar 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Biere</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cimatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Clarke</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          , “
          <article-title>Symbolic model checking without BDDs</article-title>
          ,” in TACAS,
          <year>1999</year>
          , pp.
          <fpage>193</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cabodi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Murciano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nocco</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Quer</surname>
          </string-name>
          , “
          <article-title>Stepping forward with interpolants in unbounded model checking,”</article-title>
          <source>in Proc. Int. Conf. on Computer-aided design</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>772</fpage>
          -
          <lpage>778</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Een</surname>
          </string-name>
          , “
          <article-title>Improvements to combinational equivalence checking,”</article-title>
          <source>in Proc. Int. Conference on Computer-aided design</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>836</fpage>
          -
          <lpage>843</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Biere</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Claessen</surname>
          </string-name>
          , “
          <article-title>Hardware model checking competition</article-title>
          ,” in Hardware Verification Workshop,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>F.</given-names>
            <surname>Somenzi</surname>
          </string-name>
          ,
          <article-title>CUDD: CU Decision Diagram Package</article-title>
          , University of Colorado at Boulder, ftp://vlsi.colorado.edu/pub/.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Een</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          , “
          <article-title>Continued relevance of bit-level verification research,”</article-title>
          <source>in Proc. Usable Verification</source>
          , Nov.
          <year>2010</year>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [1]
          <article-title>ABC - a system for sequential synthesis and verification</article-title>
          . Berkeley Verification and Synthesis Research Center, http://www.bvsrc.org.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [2] AIGER, http://fmv.jku.at/aiger/.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [3] Icarus Verilog, http://iverilog.icarus.com.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [4] Verific Design Automation, http://www.verific.com.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>Abrahamson</surname>
          </string-name>
          .
          <article-title>Decidability and expressiveness of logics of processes</article-title>
          . In
          <source>PhD Thesis</source>
          , University of Washington,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Baumgartner</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Mony</surname>
          </string-name>
          .
          <article-title>Scalable liveness checking via property-preserving transformations</article-title>
          .
          <source>In DATE</source>
          , pages
          <fpage>1680</fpage>
          -
          <lpage>1685</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Biere</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Artho</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Schuppan</surname>
          </string-name>
          .
          <article-title>Liveness checking as safety checking</article-title>
          .
          <source>In FMICS'02: Formal Methods for Industrial Critical Systems</source>
          , volume
          <volume>66</volume>
          (2) of ENTCS
          . Elsevier,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bradley</surname>
          </string-name>
          .
          <article-title>SAT-based model checking without unrolling</article-title>
          .
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. D.</given-names>
            <surname>Hachtel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sangiovanni-vincentelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Somenzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aziz</surname>
          </string-name>
          , S.-T. Cheng, and
          <string-name>
            <given-names>S.</given-names>
            <surname>Edwards</surname>
          </string-name>
          .
          <article-title>VIS: A system for verification and synthesis</article-title>
          . pages
          <fpage>428</fpage>
          -
          <lpage>432</lpage>
          . Springer-Verlag,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kishinevsky</surname>
          </string-name>
          .
          <article-title>Automatic generation of inductive invariants from high-level microarchitectural models of communication fabrics</article-title>
          .
          <source>In CAV</source>
          , pages
          <fpage>321</fpage>
          -
          <lpage>338</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kishinevsky</surname>
          </string-name>
          , and
          <string-name>
            <given-names>U.</given-names>
            <surname>Ogras</surname>
          </string-name>
          .
          <article-title>Modeling communication micro-architectures (with one hand tied behind your back)</article-title>
          .
          <source>Intel Technical Report</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [12] S.-T. Cheng and
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Brayton</surname>
          </string-name>
          .
          <source>Compiling Verilog into automata</source>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>E.</given-names>
            <surname>Clarke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grumberg</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Peled</surname>
          </string-name>
          . Model checking.
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>B.</given-names>
            <surname>Cook</surname>
          </string-name>
          , J. Fisher, E. Krepska, and
          <string-name>
            <given-names>N.</given-names>
            <surname>Piterman</surname>
          </string-name>
          .
          <article-title>Proving stabilization of biological systems</article-title>
          .
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Emerson</surname>
          </string-name>
          .
          <article-title>Handbook of theoretical computer science</article-title>
          (vol. b).
          <source>chapter Temporal and modal logic</source>
          , pages
          <fpage>995</fpage>
          -
          <lpage>1072</lpage>
          . MIT Press, Cambridge, MA, USA,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Emerson</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.-L.</given-names>
            <surname>Lei</surname>
          </string-name>
          .
          <article-title>Modalities for model checking: branching time logic strikes back</article-title>
          .
          <source>Sci. Comput. Program.</source>
          ,
          <volume>8</volume>
          :
          <fpage>275</fpage>
          -
          <lpage>306</lpage>
          ,
          <year>June 1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>N.</given-names>
            <surname>Francez</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Kozen</surname>
          </string-name>
          .
          <article-title>Generalized fair termination</article-title>
          .
          <source>In Proceedings of the 11th ACM SIGACT-SIGPLAN symposium on Principles of programming languages, POPL '84</source>
          , pages
          <fpage>46</fpage>
          -
          <lpage>53</lpage>
          , New York, NY, USA,
          <year>1984</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gotmanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kishinevsky</surname>
          </string-name>
          .
          <article-title>Verifying deadlock-freedom of communication fabrics</article-title>
          .
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>R.</given-names>
            <surname>Hojati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Brayton</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Kurshan</surname>
          </string-name>
          .
          <article-title>BDD-based debugging of design using language containment and fair CTL</article-title>
          .
          <source>In Proceedings of the 5th International Conference on Computer Aided Verification, CAV '93</source>
          , pages
          <fpage>41</fpage>
          -
          <lpage>58</lpage>
          , London, UK,
          <year>1993</year>
          . Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L.</given-names>
            <surname>Lamport</surname>
          </string-name>
          .
          <article-title>“Sometime” is sometimes “not never”: on the temporal logic of programs</article-title>
          .
          <source>In Proceedings of the 7th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, POPL '80</source>
          , pages
          <fpage>174</fpage>
          -
          <lpage>185</lpage>
          , New York, NY, USA,
          <year>1980</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pnueli</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Stavi</surname>
          </string-name>
          .
          <article-title>Impartiality, justice and fairness: The ethics of concurrent termination</article-title>
          .
          <source>In Proceedings of the 8th Colloquium on Automata, Languages and Programming</source>
          , pages
          <fpage>264</fpage>
          -
          <lpage>277</lpage>
          , London, UK,
          <year>1981</year>
          . Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pnueli</surname>
          </string-name>
          .
          <article-title>On the extremely fair treatment of probabilistic algorithms</article-title>
          .
          <source>In Proceedings of the fifteenth annual ACM symposium on Theory of computing, STOC '83</source>
          , pages
          <fpage>278</fpage>
          -
          <lpage>290</lpage>
          , New York, NY, USA,
          <year>1983</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Queille</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Sifakis</surname>
          </string-name>
          .
          <article-title>Fairness and related properties in transition systems a temporal logic to deal with fairness</article-title>
          .
          <source>In Acta Informat</source>
          , pages
          <fpage>195</fpage>
          -
          <lpage>220</lpage>
          ,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>V.</given-names>
            <surname>Schuppan</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Biere</surname>
          </string-name>
          .
          <article-title>Efficient reduction of finite state model checking to reachability analysis</article-title>
          .
          <source>Int. J. Softw. Tools Technol. Transf.</source>
          ,
          <volume>5</volume>
          (
          <issue>2</issue>
          ):
          <fpage>185</fpage>
          -
          <lpage>204</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Youssef</given-names>
            <surname>Hamadi</surname>
          </string-name>
          , Saïd Jabbour, and Lakhdar Sais.
          <article-title>ManySAT: A parallel SAT solver</article-title>
          .
          <source>JSAT</source>
          ,
          <volume>6</volume>
          (
          <issue>4</issue>
          ):
          <fpage>245</fpage>
          -
          <lpage>262</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>