<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Study of Sweeping Algorithms in the Context of Model Checking</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name><given-names>Zyad</given-names> <surname>Hassan</surname></string-name>
          <xref ref-type="aff" rid="aff0"/>
        </contrib>
        <contrib contrib-type="author">
          <string-name><given-names>Yan</given-names> <surname>Zhang</surname></string-name>
          <xref ref-type="aff" rid="aff0"/>
        </contrib>
        <contrib contrib-type="author">
          <string-name><given-names>Fabio</given-names> <surname>Somenzi</surname></string-name>
          <xref ref-type="aff" rid="aff0"/>
        </contrib>
        <aff id="aff0">
          <institution>Dept. of Electrical, Computer, and Energy Engineering, University of Colorado at Boulder</institution>
          ,
          <addr-line>Boulder, CO 80309</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Combinational simplification techniques have proved their usefulness in both industrial and academic model checkers. Several combinational simplification algorithms have been proposed in the past that vary in efficiency and effectiveness. In this paper, we report our experience with three algorithms that fall in the combinational equivalence checking (sweeping) category. We propose an improvement to one of these algorithms. We have conducted an empirical study to identify the strengths and weaknesses of each of the algorithms and how they can be synergistically combined, as well as to understand how they interact with ic3 [1].</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>• We carry out a comparative study of the different
sweeping methods.
• We propose a BDD-based cut sweeping method that
is more effective than the original cut sweeping.
• We propose a combined sweeping approach in
which more than one sweeping method is applied.
We show that the combined approach can achieve
more simplification than any of the methods can
achieve individually.
• We perform an empirical study of the effect of
sweeping on ic3.</p>
    </sec>
    <sec id="sec-2">
      <title>I. INTRODUCTION</title>
    </sec>
    <sec id="sec-3">
      <title>Combinational simplification eliminates redundancies and increases sharing of logic in a design. It has been successfully employed in logic synthesis, equivalence checking, and model checking.</title>
      <p>
        In the model checking context, combinational
simplification often dramatically improves the performance The rest of the paper is organized as follows. Section
of the proof engines. This has made it into a primary II contains preliminaries. In Section III, we introduce
component in both industrial [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and academic [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], the BDD-based cut sweeping method. In Section IV,
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] model checkers. Several combinational simplification we explain the rationale behind the combined sweeping
algorithms have been proposed in the past, such as approach. In Section V we present the experimental
DAG-aware circuit compression [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and sweeping results and in Section VI we conclude.
methods [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]–[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Sweeping methods merge functionally
equivalent nodes. They include BDD sweeping [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], SAT II. PRELIMINARIES
sweeping [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], and cut sweeping [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        When designing a model checker, the strengths and A. AND-inverter-graph
weaknesses of each of the sweeping methods should The input and output of our sweeping algorithms
be taken into account. To the best of our knowledge, are AND-inverter-graphs (AIGs). An AIG is a directed
no studies have been carried out so far to evaluate acyclic graph that has four types of nodes: source nodes,
and compare the different sweeping methods, with the sink nodes, internal nodes and the constant TRUE node.
exception of a limited study reported in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. A primary input (PI) is a source node in an AIG. A
      </p>
      <p>
        The effect of combinational simplification on several primary output (PO) is a sink node and has exactly one
model checking engines has been studied in the past. predecessor. The internal nodes represent 2-input AND
In [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], SAT sweeping has been shown to positively gates. A node v is a fanin of v0 if there is an edge (v, v0);
affect bounded model checking (BMC) [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], it it is a fanout of v0 if there is an edge (v0, v). Left (v)
is shown that combinational redundancy removal tech- and Right (v) refer to the left and right predecessors of v.
Fanin(v) and Fanout (v) denote the fanins and fanouts point is that the original BDD is kept for equivalence
of node v. An edge in an AIG may have the INVERTED checking, but is not used to produce new BDDs. The
attribute to model an inverter. The Boolean function of algorithm is complete if the threshold is so large that
a PI is a unique Boolean variable. For an edge, it is no auxiliary variable is introduced. In practice, however,
the function of its source node if the edge does not have this can be prohibitive.
the INVERTED attribute, and the complement otherwise.
      </p>
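        <p>To make these conventions concrete, the following minimal sketch (ours, not from the paper; all class and function names are hypothetical) represents an AIG in Python and evaluates the Boolean function of a node under an input assignment:</p>
        <preformat><![CDATA[
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Edge:
    node: "Node"
    inverted: bool = False  # the INVERTED attribute models an inverter

@dataclass(frozen=True)
class Node:
    kind: str                      # "PI", "AND", "PO", or "TRUE"
    name: Optional[str] = None     # variable name, for PIs
    fanins: Tuple[Edge, ...] = ()  # two edges for AND nodes, one for POs

def evaluate(node: Node, inputs: dict) -> bool:
    """A PI is a variable; an edge passes or complements the function of
    its source; an AND node conjoins its incoming edges; a PO copies its
    single incoming edge."""
    if node.kind == "TRUE":
        return True
    if node.kind == "PI":
        return inputs[node.name]
    vals = [evaluate(e.node, inputs) != e.inverted for e in node.fanins]
    return vals[0] and vals[1] if node.kind == "AND" else vals[0]

# f = a AND (NOT b), built from two PIs and one inverted edge
a, b = Node("PI", "a"), Node("PI", "b")
f = Node("AND", fanins=(Edge(a), Edge(b, inverted=True)))
assert evaluate(f, {"a": True, "b": False})
]]></preformat>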
        <p>A path from node a to b is a sequence of nodes ⟨v0, v1, v2, . . . , vn⟩ such that v0 = a, vn = b, and vi ∈ Fanin(vi+1) for 0 ≤ i &lt; n. The height of a node v, h(v), is the length of the longest path from a PI to v. The fanin (fanout) cone of node v is the set of nodes that have a path to (from) v. Two nodes are functionally equivalent (complementary) if they represent the same (complementary) Boolean function(s). Functionally equivalent (complementary) nodes can be merged by transferring the fanouts of the redundant node to its representative. To simplify our presentation, in what follows we deliberately ignore detection of complementary functions.</p>
      </sec>
      <sec id="sec-2-2">
        <title>B. BDD Sweeping</title>
        <p>BDD sweeping identifies two nodes as equivalent if they have the same BDD representation. The original algorithm of Kuehlmann and Krohm [<xref ref-type="bibr" rid="ref7">7</xref>] works in two stages: equivalence checking and false negative detection. In the first stage, it builds BDDs for each node and merges nodes having the same BDD. The algorithm introduces an auxiliary BDD variable (cut point) for each node that has been merged, which potentially leads to false negatives. In the second stage, it takes the BDDs of the output functions and, for each of them, replaces the auxiliary variables with their driving functions. The algorithm is complete in the sense that it can find all the equivalences in the circuit given a sufficiently large limit on the BDD sizes. However, a large limit often hurts efficiency. In this paper, we intend to use BDD sweeping in conjunction with SAT sweeping, which is complete and avoids inherent BDD inefficiencies [<xref ref-type="bibr" rid="ref8">8</xref>]. For that, we employ a version of BDD sweeping that is incomplete yet faster than the original.</p>
        <p>The algorithm we adopt iterates the following steps on each node v of the AIG in some topological order. It builds a BDD for v and checks if there exists another node that has the same BDD. If so, it merges these two nodes and continues. Otherwise, if the BDD size exceeds a given threshold, the algorithm introduces an auxiliary BDD variable for v to be used in place of the original BDD when dealing with the fanouts of v. An important point is that the original BDD is kept for equivalence checking, but is not used to produce new BDDs. The algorithm is complete if the threshold is so large that no auxiliary variable is introduced. In practice, however, this can be prohibitive.</p>
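        <p>The following self-contained sketch (our illustration, not the paper’s implementation, which uses CUDD) pairs a deliberately naive hash-consed BDD with the adopted sweeping loop; the node attributes and the fresh “levels” given to auxiliary variables are assumptions of the sketch:</p>
        <preformat><![CDATA[
class BDD:
    """Toy reduced ordered BDD: 0/1 are terminals; internal nodes are
    hash-consed (level, low, high) triples, so two nodes computing the
    same function always receive the same integer id."""
    def __init__(self):
        self.unique, self.node, self.cache, self.next_id = {}, {}, {}, 2

    def mk(self, level, low, high):
        if low == high:
            return low
        key = (level, low, high)
        if key not in self.unique:
            self.unique[key] = self.next_id
            self.node[self.next_id] = key
            self.next_id += 1
        return self.unique[key]

    def var(self, level):
        return self.mk(level, 0, 1)

    def cof(self, f, level):                 # cofactors w.r.t. a level
        if f < 2 or self.node[f][0] != level:
            return f, f
        _, lo, hi = self.node[f]
        return lo, hi

    def neg(self, f):
        if f < 2:
            return 1 - f
        if ("not", f) not in self.cache:
            lvl, lo, hi = self.node[f]
            self.cache[("not", f)] = self.mk(lvl, self.neg(lo), self.neg(hi))
        return self.cache[("not", f)]

    def conj(self, f, g):
        if 0 in (f, g): return 0
        if f == 1: return g
        if g == 1: return f
        if f == g: return f
        key = ("and", min(f, g), max(f, g))
        if key not in self.cache:
            lvl = min(self.node[f][0], self.node[g][0])
            f0, f1 = self.cof(f, lvl)
            g0, g1 = self.cof(g, lvl)
            self.cache[key] = self.mk(lvl, self.conj(f0, g0),
                                      self.conj(f1, g1))
        return self.cache[key]

    def size(self, f, seen=None):            # number of internal nodes
        seen = set() if seen is None else seen
        if f < 2 or f in seen:
            return len(seen)
        seen.add(f)
        _, lo, hi = self.node[f]
        self.size(lo, seen)
        return self.size(hi, seen)

def bdd_sweep(order, pi_level, threshold):
    """Adopted incomplete BDD sweeping: visit AIG nodes (Node/Edge sketch
    above) in topological order, merge nodes with identical BDDs, and give
    an over-sized BDD a fresh auxiliary variable for use by its fanouts,
    while keeping the exact BDD for equivalence checks only."""
    mgr, seen, use, merges = BDD(), {}, {}, []
    aux = max(pi_level.values(), default=-1) + 1
    for v in order:
        if v.kind == "PO":
            continue                         # POs are never merged
        if v.kind == "TRUE":
            use[v] = 1
            continue
        if v.kind == "PI":
            f = mgr.var(pi_level[v.name])
        else:                                # 2-input AND
            g, h = (mgr.neg(use[e.node]) if e.inverted else use[e.node]
                    for e in v.fanins)
            f = mgr.conj(g, h)
        if f in seen:                        # identical BDD: merge v away
            merges.append((v, seen[f]))
            use[v] = use[seen[f]]
            continue
        seen[f] = v
        use[v] = f
        if mgr.size(f) > threshold:          # cut point for the fanouts
            use[v] = mgr.var(aux)
            aux += 1
    return merges
]]></preformat>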
      </sec>
      <sec id="sec-2-3">
        <title>C. SAT Sweeping</title>
        <p>The advancements in SAT solver technology over the past two decades have proliferated SAT-based reasoning methods. SAT sweeping is one such method, proposed by Kuehlmann [<xref ref-type="bibr" rid="ref8">8</xref>] for combinational redundancy identification. SAT sweeping queries the SAT solver to prove or refute the equivalence of two nodes in the circuit.</p>
        <p>The basic SAT sweeping algorithm works as follows. First, the circuit is simulated with random inputs, and candidate equivalence classes are formed, where nodes having the same simulation values are placed together. Next, for each two nodes belonging to the same class, the SAT solver is queried for their equivalence. If the SAT solver reports that they are equivalent, one of them is merged into the other. Otherwise, the counterexample provided is simulated to break this equivalence class, and possibly rule out other candidate equivalences as well. This process is repeated until a resource limit is reached, or until all classes become singletons, indicating the absence of further equivalences.</p>
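        <p>A compact sketch of this loop (ours; the two callbacks stand in for the circuit simulator and for a miter-plus-SAT-solver equivalence check, which the paper does not prescribe in code form):</p>
        <preformat><![CDATA[
import random
from collections import defaultdict

def sat_sweep(nodes, inputs, simulate, check_equiv, rounds=32):
    """Basic SAT sweeping.  simulate(assignment) -> {node: bool};
    check_equiv(a, b) -> (True, None) if a SAT solver proves a == b,
    otherwise (False, counterexample_assignment)."""
    # 1. Random simulation: equal signatures form candidate classes.
    sig = {n: 0 for n in nodes}
    for _ in range(rounds):
        vals = simulate({i: random.random() < 0.5 for i in inputs})
        for n in nodes:
            sig[n] = (sig[n] << 1) | vals[n]
    classes = defaultdict(list)
    for n in nodes:
        classes[sig[n]].append(n)

    # 2. Refine with SAT queries until every class is a singleton.
    merges = []
    pending = [c for c in classes.values() if len(c) > 1]
    while pending:
        cls = pending.pop()
        rep, cand = cls[0], cls[1]
        proved, cex = check_equiv(rep, cand)
        if proved:
            merges.append((cand, rep))       # merge cand into rep
            cls = [rep] + cls[2:]
            if len(cls) > 1:
                pending.append(cls)
        else:
            # The counterexample splits the class (it must, because rep
            # and cand disagree on it); revisit the non-trivial parts.
            vals = simulate(cex)
            parts = defaultdict(list)
            for n in cls:
                parts[vals[n]].append(n)
            pending.extend(p for p in parts.values() if len(p) > 1)
    return merges
]]></preformat>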
        <p>In our implementation of SAT sweeping, several heuristics were applied. We mention each of them briefly.</p>
        <p>1) Simulating Distance-1 Vectors: This heuristic was proposed in [<xref ref-type="bibr" rid="ref13">13</xref>]. Instead of just simulating the counterexample to equivalence derived by the SAT solver, all distance-1 vectors, which have a single bit flipped, are simulated as well. Only the bits corresponding to the inputs that are in the fanin cone of the two nodes being checked for equivalence are flipped. We have found this heuristic to be very effective in practice. In [<xref ref-type="bibr" rid="ref13">13</xref>], this process is repeated for vectors that were successful in breaking up equivalence classes, until convergence. In our implementation, we only simulate the distance-1 vectors for the original counterexample: for the benchmark suite we experimented with, recurring on successful vectors is too expensive for the number of refinements it achieves.</p>
        <p>2) Clustering: Simulating distance-1 vectors often results in considerable refinement of the equivalence classes. This is desirable, since an equivalence class is often broken up more cheaply by simulation than by the SAT solver. Moreover, we have observed that with distance-1 vector simulation, it becomes very likely that the nodes remaining in an equivalence class are indeed equivalent. Therefore, rather than checking the equivalence of two nodes at a time, we check the equivalence of all nodes in an equivalence class using a single SAT query. If they are all indeed equivalent, we establish that with one query rather than with the n − 1 queries that pairwise checking would require, where n is the number of nodes in the class.</p>
        <p>3) Height-Based Merging: When two nodes are proved equivalent, we merge the node with the larger height into the one with the smaller height, instead of merging based on a topological order as in [<xref ref-type="bibr" rid="ref13">13</xref>]. The intuition is that a node having a larger height often has a larger fanin cone, which suggests that merging it would lead to a larger reduction. Nodes coming later in a topological order do not necessarily have a larger height than nodes coming earlier.</p>
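        <p>These heuristics can be sketched as follows (our code, not the paper’s; solver_query is a hypothetical callback that asks a SAT solver whether some member of a class can differ from the representative, returning a counterexample assignment or None for UNSAT):</p>
        <preformat><![CDATA[
def distance1_vectors(cex, support):
    """Heuristic 1: besides the counterexample itself, yield every vector
    obtained by flipping a single input from the relevant fanin cones."""
    yield cex
    for i in support:
        flipped = dict(cex)
        flipped[i] = not flipped[i]
        yield flipped

def check_class(cls, solver_query):
    """Heuristic 2: one SAT query for a whole candidate class instead of
    n - 1 pairwise queries; UNSAT proves every member equal to the rep."""
    rep, others = cls[0], cls[1:]
    cex = solver_query(rep, others)
    if cex is None:
        return [(n, rep) for n in others]   # merge all into rep
    return cex                              # assignment that splits cls

def pick_representative(cls, height):
    """Heuristic 3: merge into the node of smallest height, since taller
    nodes tend to have larger fanin cones."""
    return min(cls, key=height)
]]></preformat>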
      <sec id="sec-3-1">
        <title>D. Cut Sweeping</title>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Cut sweeping [9] is a fast yet incomplete approach for combinational equivalence checking. It iteratively computes cuts for each AIG node and compares the functions associated to the cuts.</title>
        <p>A cut is defined with respect to an AIG node, called the root. A cut C(v) of root v is a set of nodes, called leaves, such that any path from a PI to v intersects C(v). A cut-set Φ(v) consists of several cuts of v. For cut-sets Φ(v1) and Φ(v2), the merge operator ⊗ is defined as</p>
        <p>Φ(v1) ⊗ Φ(v2) = { C1 ∪ C2 | C1 ∈ Φ(v1), C2 ∈ Φ(v2) } .   (1)</p>
        <p>Assume k ≥ 1. A k-feasible cut is a cut that contains at most k leaves. A k-feasible cut-set is a cut-set that contains only k-feasible cuts. The k-merge operator, ⊗k, creates only k-feasible cuts. Cut enumeration recursively computes all k-feasible cuts for an AIG. It computes the k-feasible cut-set for a node v as follows:</p>
        <p>Φ(v) = {{v}} if v ∈ PI, and Φ(v) = {{v}} ∪ ( Φ(Left(v)) ⊗k Φ(Right(v)) ) if v ∈ AND,   (2)</p>
        <p>where PI and AND refer to the sets of PIs and 2-input AND gates, respectively. Note that cuts are not built for POs because they are never merged.</p>
        <p>The function of a cut is the Boolean function of the root in terms of the leaves. It can be represented in different ways, for instance using bit vectors or BDDs. Two cuts are equivalent if their cut functions are equal. Hence, two nodes are functionally equivalent if their cut-sets contain equivalent cuts.</p>
        <p>Cut sweeping is parametrized by k and N, the maximum cut size and the maximum cut-set size, respectively. For each node v in some topological order of the AIG, the algorithm builds a k-feasible cut-set Φ(v). Each cut in Φ(v) is associated with a truth table. Next, it searches for a node equivalent to v by looking for a cut equivalent to some cut in Φ(v). If it succeeds, the two nodes are merged. Otherwise, a heuristic is applied to prune Φ(v) to at most N cuts. After pruning, the algorithm stores Φ(v) as the cut-set of v and builds a cross-reference between each of its cuts and v.</p>
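        <p>The following sketch (ours, not the code of [<xref ref-type="bibr" rid="ref9">9</xref>]) combines cut enumeration (2) with the merging step; the node attributes .left/.right/.linv/.rinv/.uid and the injected prune function are assumptions of the sketch, and cut functions are computed as naive truth tables:</p>
        <preformat><![CDATA[
from itertools import product

def cut_tt(root, leaves):
    """Truth table of `root` in terms of the ordered cut `leaves`: bit m
    holds the value of root under the m-th assignment to the leaves."""
    idx = {n: i for i, n in enumerate(leaves)}
    def ev(n, m):
        if n in idx:                        # stop at the cut boundary
            return (m >> idx[n]) & 1
        return (ev(n.left, m) ^ n.linv) & (ev(n.right, m) ^ n.rinv)
    return sum(ev(root, m) << m for m in range(1 << len(leaves)))

def cut_sweep(order, k, prune):
    """order: nodes in topological order, PIs flagged by .left is None;
    prune trims a cut-set to at most N cuts (quality/height, see below)."""
    cutsets, seen, merges = {}, {}, []
    for v in order:
        if v.left is None:                  # PI: only the trivial cut
            cutsets[v] = [frozenset([v])]
            continue
        phi = {frozenset([v])}              # {{v}} plus the k-merge, (2)
        for c1, c2 in product(cutsets[v.left], cutsets[v.right]):
            c = c1 | c2                     # merge operator, (1)
            if len(c) <= k:                 # k-feasibility
                phi.add(c)
        rep = None
        for c in phi:                       # search for an equivalent cut
            leaves = tuple(sorted(c, key=lambda n: n.uid))
            rep = seen.get((leaves, cut_tt(v, leaves)))
            if rep is not None:
                break
        if rep is not None:
            merges.append((v, rep))         # v is redundant
            cutsets[v] = cutsets[rep]       # fanouts now see rep's cuts
            continue
        cutsets[v] = prune(list(phi))       # keep at most N cuts
        for c in cutsets[v]:                # cross-reference: cut -> v
            leaves = tuple(sorted(c, key=lambda n: n.uid))
            seen[(leaves, cut_tt(v, leaves))] = v
    return merges
]]></preformat>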
        <p>The heuristic for pruning, which we call the quality heuristic, computes a value q for each cut:</p>
        <p>q(C) = Σ_{n ∈ C} 1 / |Fanout(n)| .   (3)</p>
        <p>The cuts with the smallest values of q are kept.</p>
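        <p>A direct transcription of (3) (our sketch; the fanouts attribute stands for Fanout(n) and is an assumption about the node representation):</p>
        <preformat><![CDATA[
def quality(cut):
    """q(C) = sum over the leaves n of 1/|Fanout(n)|, equation (3)."""
    return sum(1.0 / len(n.fanouts) for n in cut)

def quality_prune(cutset, n_keep):
    """Keep the N cuts with the smallest q, i.e. those whose leaves are
    shared by many other nodes."""
    return sorted(cutset, key=quality)[:n_keep]
]]></preformat>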
    <sec id="sec-5">
      <title>The cuts with the smallest values of q are kept. The</title>
      <p>intuition of the quality heuristic is two-fold. First, it tries
to localize the equivalence and thus favors smaller cuts.
Second, it normalizes cuts by attempting to express them
with the same set of variables. The chosen variables are
those that have a large fanouts, i.e., that are shared by
many other nodes.</p>
      <p>
        A good truth-table implementation is critical to the
performance of cut sweeping. In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], truth tables are
implemented as bit vectors. An advantage of bit vectors
is the constant-time cost of Boolean operations. On the
other hand, bit interleaving is required to extend the bit
vectors to the same length so that the corresponding bits
represent the same minterm1.
      </p>
    </sec>
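        <p>The alignment problem can be illustrated with a deliberately naive re-expansion (our code; [<xref ref-type="bibr" rid="ref9">9</xref>] obtains the same effect more efficiently with the bit-interleaving trick):</p>
        <preformat><![CDATA[
def expand_tt(tt, support, new_support):
    """Re-express a truth table over `support` on the larger, ordered
    `new_support`, so that equal bit positions denote the same minterm."""
    pos = [new_support.index(v) for v in support]
    out = 0
    for m in range(1 << len(new_support)):
        small = sum(((m >> p) & 1) << i for i, p in enumerate(pos))
        out |= ((tt >> small) & 1) << m
    return out

# f(a) = a over [a], re-expressed over [a, b]
assert expand_tt(0b10, ["a"], ["a", "b"]) == 0b1010
]]></preformat>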
    <sec id="sec-6">
      <title>III. BDD-BASED CUT SWEEPING</title>
      <p>Φ(v1) ⊗ Φ(v2) = {C1 ∪ C2 | C1 ∈ Φ(v1), C2 ∈ Φ(v2)} .</p>
      <p>(1)
Assume k ≥ 1. A k-feasible cut is a cut that contains
at most k leaves. A k-feasible cut-set is a cut-set that
contains only k-feasible cuts. The k-merge operator, ⊗k,
creates only k-feasible cuts. Cut enumeration recursively
computes all k-feasible cuts for an AIG. It computes the
k-feasible cut-set for a node v as follows:</p>
      <p>Representing functions having a small number of
inputs using bit vectors is very efficient. However, the
number of bits required grows exponentially with the
number of variables, which can easily lead to memory
blow-up. As an alternative, BDDs, which are more
suitable for large functions, can also be used to represent cut
functions. Furthermore, the strong canonicity of BDDs
Φ(v) = ({{v}} v ∈ PI makes it trivial to check for equivalence. The use of
{{v}} ∪ Φ (Left (v)) ⊗k Φ (Right (v)) v ∈ AND , BDDs also enables a heuristic which we describe below.</p>
      <p>(2) The proposed algorithm differs from the original one
where PI and AND refer to the set of PIs and 2-input in two aspects. First, we introduce a new parameter s, the
AND gates respectively. Note that cuts are not built for maximum size of a BDD, to replace k. That is, instead of
POs because they are never merged. k-feasible cuts, we keep cuts whose functions contain at</p>
      <p>The function of a cut is the Boolean function of the most s BDD nodes. Node count, as opposed to number
root in terms of the leaves. It can be represented in of inputs, is a more natural characterization of BDD size.
different ways, for instance, using bit vectors or BDDs. The second difference comes from the pruning
heurisTwo cuts are equivalent if their cut functions are equal. tic. We define the height h of a cut C as the average
Hence, two nodes are functionally equivalent if their cut- height of its nodes:
setCsuctosnwtaeinepeinqguivisalpeanrtacmuetstr.ized by k and N , the maxi- h(C ) = X h(v) . (4)
mum cut size and the maximum cut-set size, respectively. v∈C |C |
For each node v in some topological order of the AIG, 1A good reference of bit-interleaving can be found at http://
the algorithm builds a k-feasible cut-set Φ(v). Each cut graphics.stanford.edu/∼seander/bithacks.html.
A smaller h indicates that the leaves in the cut are closer
to the PIs. The height heuristic keeps at most N cuts
choosing the ones with smallest values of h.</p>
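      <p>Equation (4) and the pruning it induces, written as a drop-in for the prune parameter of the cut-sweeping sketch in Section II (our code; the caching assumes hashable node objects, and the bound of s BDD nodes per cut function would be enforced separately, e.g. with the size method of the BDD sketch):</p>
      <preformat><![CDATA[
from functools import lru_cache

@lru_cache(maxsize=None)
def node_height(v):
    """h(v): length of the longest path from a PI to v (PIs have h = 0)."""
    return 0 if v.left is None else 1 + max(node_height(v.left),
                                            node_height(v.right))

def height_prune(cutset, n_keep):
    """Keep the N cuts whose leaves are, on average, closest to the PIs."""
    def h(cut):                              # equation (4)
        return sum(node_height(v) for v in cut) / len(cut)
    return sorted(cutset, key=h)[:n_keep]

# usage with the earlier sketch:
#   cut_sweep(order, k, lambda cs: height_prune(cs, 1))
]]></preformat>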
      <p>[Fig. 1. Two 4-input AND gates rooted at f and g, built over PIs a, b, c, and d through internal nodes p, q and r, s, respectively.]</p>
    <sec id="sec-7">
      <title>A motivating example for the new heuristic is in</title>
      <p>Figure 1, which shows two different 4-input AND gates.
Nodes a, b, c, and d are PIs. Nodes p, q, r, and s are
internal nodes. Nodes f and g can only be merged if
their cut-sets both contain {a, b, c, d}. However, if the
internal nodes have many more fanouts than the PIs, the
quality heuristic may select cuts containing the internal
nodes instead, causing the merge to be missed.</p>
      <p>As mentioned before, the quality heuristic tries to
normalize the cuts on certain “attractors.” This reduces
the possibility that equivalent functions are represented
differently. However, this might also lead to the loss
of the opportunity to find equivalences that cannot be
expressed by those “attractors,” as in Figure 1.</p>
      <p>On the other hand, the height heuristic tries to push the cut boundary as far as possible. A supporting argument is that, if a node is employed in equivalent cuts, then replacing it with its predecessors preserves equivalence. Furthermore, new merges that are otherwise undiscoverable (consider other equivalences that require a and b in the above example) may be found. The height heuristic does not attract cuts to certain nodes, which may result in different cuts for equivalent nodes. As shown in the experiments, the effectiveness of the height heuristic diminishes as the height of nodes increases.</p>
      <p>The two heuristics have their own strengths and
weaknesses. A natural question is whether it is possible to
combine them to benefit from their individual strengths.
We can choose a few cuts with each heuristic. This may
lead to more merges but may also worsen the efficiency
if it significantly increases the number of cuts. To prevent
such an increase, a combined heuristic only records
height cuts for the lower nodes, while it keeps both types
of cuts for the others.</p>
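      <p>A minimal sketch of such a combined pruning rule (ours; the threshold test and the exact mix of cuts are policy choices, of which Section V evaluates one concrete instance, “combined-2”):</p>
      <preformat><![CDATA[
def combined_prune(cutset, h_of, q_of, node_is_low, n_height=1, n_quality=2):
    """For low nodes keep only the best height cut(s); for higher nodes
    additionally keep the best quality cuts, capping the cut-set size."""
    keep = sorted(cutset, key=h_of)[:n_height]
    if not node_is_low:
        for c in sorted(cutset, key=q_of):
            if len(keep) >= n_height + n_quality:
                break
            if c not in keep:
                keep.append(c)
    return keep
]]></preformat>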
      <p>There is some connection between cut sweeping with
each of the two heuristics and BDD sweeping. With
the height heuristic, cut sweeping tries to build cuts as
large as possible, as BDD sweeping does. However, BDD
sweeping can store cuts that exceed the threshold while
cut sweeping only keeps those below the threshold. The
quality heuristic tries to attract cuts on certain nodes,
which is similar to the placement of auxiliary variables in
BDD sweeping. Nevertheless, the number of “attractors”
in the quality heuristic tends to be much larger than in
BDD sweeping.</p>
    </sec>
    <sec id="sec-8">
      <title>IV. COMBINING SWEEPING METHODS</title>
      <p>The idea of combining several simplification
algorithms is not new. Many existing model checkers iterate
several simplification algorithms before the problem is
passed to the model checking engines. However, we are
unaware of any studies that have been carried out to
identify the best way they could be combined. In this
section we attempt to give general guidelines to which
a combined approach should adhere. We support our
claims by empirical evidence collected in the
experiments reported in Section V.</p>
      <p>The problem we address is as follows: given a time budget for combinational simplification, how should it be allotted to the different algorithms? The sweeping algorithms discussed in the previous sections vary in their completeness and speed: cut sweeping is the most incomplete yet the fastest of the three methods, SAT sweeping is complete yet the slowest, and BDD sweeping lies in between, both in terms of completeness and speed.</p>
      <p>Possible solutions include allocating the whole time budget to a single algorithm, or dividing it among two or more algorithms. The fact that some methods are better at certain problems than others suggests that more than one method should be applied. If two or more methods are to be applied in sequence, intuition suggests that the lower-effort methods should precede the higher-effort ones. The advantages of doing so are two-fold. First, although the higher-effort methods are likely to discover the merges that a lower-effort method discovers, in general it will take them more time to do so. Second, running the lower-effort methods first is beneficial in that they present the higher-effort methods with a smaller problem that is easier to handle.</p>
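      <p>A minimal driver for this policy (our sketch, anticipating the allotment scheme of Section V-B: unused seconds from the cheaper phases roll over to SAT sweeping, which runs last because it is the only complete method; the sweeper callables and their (aig, seconds) interface are assumptions, not a fixed API):</p>
      <preformat><![CDATA[
import time

def combined_sweep(aig, budgets, cut_sweep, bdd_sweep, sat_sweep):
    """Run the sweepers in order of increasing effort, handing leftover
    time from cut and BDD sweeping to the final SAT sweeping phase."""
    leftover = 0.0
    for name, sweep in (("cut", cut_sweep), ("bdd", bdd_sweep)):
        start = time.monotonic()
        aig = sweep(aig, budgets[name])
        leftover += max(0.0, budgets[name] - (time.monotonic() - start))
    return sat_sweep(aig, budgets["sat"] + leftover)

# e.g. the best allotment found in Section V: 4 s cut, 5 s BDD, 1 s SAT
# simplified = combined_sweep(aig, {"cut": 4, "bdd": 5, "sat": 1}, c, b, s)
]]></preformat>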
      <p>Finally, the percentage of the total time that should be allotted to each of the methods to yield the maximum possible reduction is studied in Section V.</p>
    </sec>
    <sec id="sec-9">
      <title>V. RESULTS</title>
    </sec>
    <sec id="sec-10">
      <title>The experiments are divided into three parts. The first</title>
      <p>part compares different variations of the cut sweeping
algorithm. The second part shows the results of the com- proved significantly with s. This means that with s, each
bined sweeping methods, and the third part is concerned cut has a larger chance of resulting in a merge.
with the effect of sweeping on ic32. Second, while “k-quality” and “k-combined-1” have</p>
      <p>We use the benchmark set from HWMCC’10 [<xref ref-type="bibr" rid="ref14">14</xref>], a set of 818 benchmarks in total. The experiments are run on a machine with a 2.8GHz Quad-Core CPU and 9GB of memory. We use CUDD [<xref ref-type="bibr" rid="ref15">15</xref>] as our BDD package for all the BDD-related algorithms.</p>
      <sec id="sec-9-1">
        <title>A. BDD-based Cut Sweeping</title>
        <p>Variations of cut sweeping are applied to the HWMCC’10 benchmark set. The differences between variations are two-fold. First, either the number of variables, k, or the number of BDD nodes, s, is used to drop over-sized cuts. Second, we experiment with several heuristics for pruning cut-sets: the quality heuristic, the height heuristic, and two combined heuristics. The naive combined heuristic (“combined-1”) chooses one cut based on the height heuristic and the others based on the quality heuristic. The other heuristic (“combined-2”) sets a threshold on the node height (350 in our experiments). For nodes that are below the threshold, it only keeps a height cut. For higher nodes, it produces cut-sets consisting of a height cut and two quality cuts. We denote a method by a k or an s followed by the heuristic name. All the variations use BDDs to represent the cut functions.</p>
        <p>The results are shown in Table I; they are aggregated over the 818 benchmarks. Based on experiments, both the threshold of BDD sweeping and s in BDD-based cut sweeping are set to 250. The total number of AIG nodes before sweeping is 7.22M. “Final” is the size of the AIGs after sweeping. “Generated” and “Kept” are the numbers of cuts generated and kept by the corresponding methods. For an individual benchmark, its “height” is the average height of all merged cuts. The “Height” column is computed by taking the average of the “height” of all the benchmarks. A smaller value indicates that more merges are found by cuts that are close to the PIs. Note that, since we use BDDs, our efficiency may not match that of the bit-vector-based methods of [<xref ref-type="bibr" rid="ref9">9</xref>]; when dealing with them, we therefore compare only effectiveness.</p>
        <p>Results indicate that the resulting AIGs are consistently smaller with s than with k. There are a few interesting observations. First, the ratios GeneratedCuts/Merges and KeptCuts/Merges improve significantly with s. This means that with s, each cut has a larger chance of resulting in a merge. Second, while “k-quality” and “k-combined-1” have very close sweeping times, the latter achieves 19.8% more merges. Furthermore, the decrease in the “Height” column reveals that the height cuts indeed lead to merges. Although “s-quality” is more effective than the two methods above, it is less efficient due to the larger cut sizes.</p>
        <p>For the methods with s (excluding “s-quality”), we observe that “s-height N = 1” is the fastest and produces a good number of merges. Increasing the number of height cuts to two triples the run time without gaining many more merges. Comparison with “s-combined-1” shows an improvement in merges for the latter. This indicates that maintaining one height cut and one quality cut works better than keeping two height cuts. For “s-combined-2”, the number of merges is between those of the two methods above, but with a lower run time. Furthermore, the numbers of generated and kept cuts are comparable to “s-height N = 1”. That is, even though we keep three cuts for those nodes with height larger than 350, on average we compute only a few percent more cuts than in the case of one cut per node.</p>
        <p>The “Height” values of the three methods confirm the assumption made in Section III: most merges produced by the height heuristic come from cuts close to the PIs. When the two heuristics are combined, a significant increase in the “Height” value is observed. In Figure 2, we show the number of merges found by “s-height N = 1” and “s-combined-2” on nodes within different height ranges. The plot is normalized to “k-quality” and has a bin size of 50; i.e., a point at (2, 1) indicates that the method finds the same number of merges as “k-quality” for nodes with height from 100 to 149. Clearly, the height heuristic works better on smaller-height nodes, while the quality heuristic catches more equivalences on larger-height ones. The combined heuristic takes advantage of both and produces an even better profile on nodes with larger heights. Note that, although the height heuristic works worse on the nodes with larger heights, it can still get more merges. This may be due to the fact that, in this benchmark set, a large percentage of the equivalences are located at lower heights.</p>
        <p>In our setup, cut sweeping is intended for use as a fast method. Thus we consider “s-height N = 1” and “s-combined-2” to be the best variants. Compared to BDD sweeping, those two variants are faster because they create fewer BDD nodes, but they are less effective, since BDD sweeping may keep BDDs that exceed the threshold.</p>
        <p>[Table I. Aggregate results for the cut sweeping variants (k-quality N = 5, k-combined-1 N = 5, s-quality N = 5, s-height N = 1, s-height N = 2, s-combined-1 N = 2, s-combined-2) and for BDD sweeping and SAT sweeping.]</p>
      </sec>
      <sec id="sec-10-1">
        <title>B. Combined Sweeping Methods</title>
        <p>In this section, we show experimental evidence that
supports the guidelines of combining sweeping methods
presented in Section IV. In particular, we try several
possibilities of allotting the budget to the different sweeping
algorithms with the purpose of identifying empirically if
they should be combined, and if so, in which way. In
what follows we use the “s-height N = 1” variant of
cut-sweeping since it is the fastest, and we simply refer
to it as cut sweeping.</p>
        <p>In our combined approach, SAT sweeping is always
run last since it is the only complete method of the three,
and should thus be given whatever time is left to find
equivalences not discovered by the other methods. Also,
the time not used by any of the preceding methods is
passed to SAT sweeping. For instance, if cut sweeping
is given a time budget of 4 seconds and only uses 3 of
them, SAT sweeping gets to run for one extra second.</p>
        <p>We compare the reduction measured in terms of the
number of AIG nodes removed, and the total time
spent in sweeping. Data are aggregated over the 818
benchmarks. The base case for our comparisons is the
pure SAT sweeping case in which SAT sweeping gets
the whole budget. The time budget used in our study is
10 seconds.</p>
        <p>We consider the following policies: (a) allocating the budget to two methods, (b) allocating it to three methods, and (c) allocating the whole budget to SAT sweeping. For (a) and (b), we consider all the different ways of assigning integer time values to each method such that they sum up to 10 seconds. Note that if a sweeping algorithm times out, what it has achieved thus far is used in what follows. In all cases, a set of light-weight sequential simplification algorithms is applied before sweeping. This set of algorithms includes cone-of-influence reduction, stuck-at-constant latch detection, and equivalent latch detection. The total number of AIG nodes for all 818 benchmarks, measured after the sequential simplification step, is 6.1M.</p>
        <p>Results are presented in Table II. The first column lists
the methods, where the number before each sweeping
method indicates the number of seconds given to it.
The second column shows the number of AIG nodes
removed. The third column shows the total time spent in
sweeping. The methods are listed in order of decreasing
reduction. The last row is for pure SAT sweeping. We
only show the best three setups in terms of reduction for
each of the possible orders of the method sequences.</p>
        <p>Several observations can be made. First, when it comes to running two methods in sequence, BDD sweeping combined with SAT sweeping outperforms cut sweeping combined with SAT sweeping. The method that achieves maximum reduction (8 seconds of BDD sweeping followed by 2 seconds of SAT sweeping) removes 56K more nodes than pure SAT sweeping (7.7% more reduction). Second, more reduction is achievable by running three methods in sequence. As suggested in Section IV, ordering the methods by increasing effort (or, equivalently, by increasing degree of completeness) achieves more reduction than otherwise. Here, the best method (4 seconds, 5 seconds, and 1 second for cut, BDD, and SAT sweeping, respectively) has an edge of 65K nodes over pure SAT sweeping (about 8.9% more reduction). Third, in terms of sweeping time, a large drop (&gt; 100 seconds) occurs when two or three methods are combined versus pure SAT sweeping; this is because BDD and cut sweeping often need less time than SAT sweeping to discover equivalences. Given an overall model checking budget, a smaller sweeping time allows more time for the model checking engine, which is desirable.</p>
        <p>The question of whether such a difference has a considerable effect on the performance of the model checking engine is answered in the next section.</p>
      </sec>
      <sec id="sec-10-2">
        <title>C. Effect on ic3</title>
        <p>The recently developed model checking algorithm ic3 [<xref ref-type="bibr" rid="ref1">1</xref>] has been regarded as the best standalone model checking algorithm developed to date [<xref ref-type="bibr" rid="ref16">16</xref>]. While the interaction of combinational simplification methods with other model checking algorithms has been studied in the past, here we aim to study how they interact with ic3. In particular, we would like to find out empirically whether ic3 benefits from preprocessing the netlist with a simplification algorithm, and if it does, how sensitive it is to the magnitude of the reductions achieved through simplification.</p>
        <p>In the first experiment, we compare two runs of ic3, one that is preceded by SAT sweeping and one that is not. The experimental setup is as follows. A total timeout of 10 minutes is used. The budget for SAT sweeping is 10 seconds. The light-weight sequential simplification algorithms referred to in Section V-B are applied once in the no-sweeping case, and twice (before and after sweeping) in the sweeping case. We compare the number of solves and the aggregate runtime across all benchmarks.</p>
        <p>It is important to note that the ic3 algorithm has a random flavor. In particular, the order in which generalization (dropping literals) is attempted is randomized. Also, since the algorithm is SAT-based, randomization occurs in the SAT solver decisions. To obtain reliable experimental results, each experiment is repeated with 10 different seeds, and the results are averaged over the different seeds.</p>
        <p>Results are shown in Table III. The first column shows
the seed index, the second and third columns show the
number of solves without and with sweeping, and the
fourth and fifth columns show the aggregate runtime
without and with sweeping.</p>
        <p>The results confirm a positive effect of sweeping on
the performance of ic3. On average, five more solves
are achieved with sweeping, and the aggregate runtime
drops by 3.8%.</p>
        <p>The enhancement in the performance of ic3 in the presence of sweeping can be attributed to two factors. First, the reduction in the number of gates caused by sweeping can result in a reduction in SAT solver time. Second, simplification often results in the removal of latches (e.g., when it merges the next-state functions of two latches). For example, for the benchmark set used in our experiments, sweeping reduces the aggregate number of latches from 279,161 to 269,091 (a 3.6% decrease). This reduces the amount of work done by ic3 in generalizing counterexamples to induction.</p>
        <p>We now repeat the previous experiment, but this time
we compare the number of solves and the aggregate
runtime between pure SAT sweeping and the empirically
optimum combined sweeping scheme of Section V-B.
The purpose of this experiment is to understand how
sensitive ic3 is to the magnitude of reductions.</p>
        <p>Results are shown in Table IV, where the second and
third columns compare the number of solves for pure
SAT sweeping and the optimum sweeping scheme, and
the fourth and fifth columns compare the total runtime.</p>
        <p>As the results indicate, ic3 does not benefit much from the better reduction achieved by the combined sweeping scheme. The lack of performance enhancement can be attributed to the small improvement in reduction that the combined sweeping approach achieves over pure SAT sweeping. In particular, while pure SAT sweeping removes 737K nodes out of the total 6.1M nodes in the 818 benchmarks (12.1% reduction), the combined approach removes 802K nodes (13.2% reduction): a mere 1.1 percentage-point improvement. This suggests that ic3 has low sensitivity to the magnitude of the reduction.</p>
      </sec>
    </sec>
    <sec id="sec-11">
      <title>VI. CONCLUSION</title>
    </sec>
    <sec id="sec-12">
      <title>In this paper, we presented an empirical study of</title>
      <p>the different sweeping methods proposed in the past.
We have shown that a combined sweeping approach
outperforms any of the sweeping methods alone. We
have also proposed a BDD-based cut sweeping method
that is more effective than the original cut sweeping.
Finally, we have studied the effect of sweeping on the
new model checking algorithm, ic3, and investigated
the causes of the better performance it experiences with
sweeping. The goal of this analysis is to help designers
of model checkers to make decisions regarding the
incorporation of sweeping methods and to provide a
deeper understanding of how sweeping methods interact
with ic3.</p>
    </sec>
    <sec id="sec-13">
      <title>ACKNOWLEDGEMENTS</title>
    </sec>
    <sec id="sec-14">
      <title>We thank Aaron Bradley who motivated this work</title>
      <p>and contributed to many discussions. We also thank the
reviewers for their insightful comments regarding cut
sweeping’s pruning heuristics that prompted us to try
the “combined-2” heuristic.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Bradley</surname>
          </string-name>
          , “
          <article-title>SAT-based model checking without unrolling,”</article-title>
          <source>in Proc. Int. Conf. on Verification, model checking, and abstract interpretation</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>87</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Mony</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Baumgartner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Paruthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kanzelman</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuehlmann</surname>
          </string-name>
          , “
          <article-title>Scalable automated verification via expertsystem guided transformations</article-title>
          ,” in Formal Methods in Computer-Aided Design,
          <year>2004</year>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>173</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          , “
          <article-title>ABC: An academic industrial-strength verification tool</article-title>
          ,” in Computer Aided Verification,
          <year>2010</year>
          , pp.
          <fpage>24</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cabodi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nocco</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Quer</surname>
          </string-name>
          , “
          <article-title>Benchmarking a model checker for algorithmic improvements and tuning for performance,”</article-title>
          <source>in Proc. Hardware Verification Workshop</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bjesse</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Boralv</surname>
          </string-name>
          , “
          <article-title>DAG-aware circuit compression for formal verification</article-title>
          ,”
          <source>in Proc. Int. Conf. on Computer-aided design</source>
          ,
          <year>2004</year>
          , pp.
          <fpage>42</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          , “
          <article-title>DAG-aware AIG rewriting a fresh look at combinational logic synthesis</article-title>
          ,”
          <source>in Proc. Design Automation Conference</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>532</fpage>
          -
          <lpage>535</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuehlmann</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Krohm</surname>
          </string-name>
          , “
          <article-title>Equivalence checking using cuts and heaps</article-title>
          ,”
          <source>in Proc. Design Automation Conference</source>
          ,
          <year>1997</year>
          , pp.
          <fpage>263</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuehlmann</surname>
          </string-name>
          , “
          <article-title>Dynamic transition relation simplification for bounded property checking,”</article-title>
          <source>in Proc. Int. Conf. on Computeraided design</source>
          ,
          <year>2004</year>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Eén</surname>
          </string-name>
          , “
          <article-title>Cut sweeping</article-title>
          ,”
          <source>Cadence Design Systems, Tech. Rep.</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          , “
          <article-title>FRAIGs: A unifying representation for logic synthesis and verification</article-title>
          ,”
          <source>EECS Dept., UC Berkeley, ERL, Tech. Rep.</source>
          ,
          <year>Mar 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Biere</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cimatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Clarke</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          , “
          <article-title>Symbolic model checking without BDDs</article-title>
          ,” in TACAS,
          <year>1999</year>
          , pp.
          <fpage>193</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cabodi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Murciano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nocco</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Quer</surname>
          </string-name>
          , “
          <article-title>Stepping forward with interpolants in unbounded model checking,”</article-title>
          <source>in Proc. Int. Conf. on Computer-aided design</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>772</fpage>
          -
          <lpage>778</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Een</surname>
          </string-name>
          , “
          <article-title>Improvements to combinational equivalence checking,”</article-title>
          <source>in Proc. Int. Conference on Computer-aided design</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>836</fpage>
          -
          <lpage>843</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Biere</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Claessen</surname>
          </string-name>
          , “
          <article-title>Hardware model checking competition</article-title>
          ,” in Hardware Verification Workshop,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>F.</given-names>
            <surname>Somenzi</surname>
          </string-name>
          ,
          <article-title>CUDD: CU Decision Diagram Package</article-title>
          , University of Colorado at Boulder, ftp://vlsi.colorado.edu/pub/.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R.</given-names>
            <surname>Brayton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Een</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishchenko</surname>
          </string-name>
          , “
          <article-title>Continued relevance of bit-level verification research,”</article-title>
          <source>in Proc. Usable Verification</source>
          , Nov.
          <year>2010</year>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>